Setting Parameters for Biological Models With ANIMO
Schivo, Stefano; Scholma, Jetse; Karperien, Hermanus Bernardus Johannes; Post, Janine Nicole; van de Pol, Jan Cornelis; Langerak, Romanus; André, Étienne; Frehse, Goran
2014-01-01
ANIMO (Analysis of Networks with Interactive MOdeling) is a software tool for modeling biological networks, such as signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions
Models for setting ATM parameter values
DEFF Research Database (Denmark)
Blaabjerg, Søren; Gravey, A.; Romæuf, L.
1996-01-01
In ATM networks, a user should negotiate at connection set-up a traffic contract which includes traffic characteristics and requested QoS. The traffic characteristics currently considered are the Peak Cell Rate, the Sustainable Cell Rate, the Intrinsic Burst Tolerance and the Cell Delay Variation...... to Network Interface (UNI) and at subsequent Inter Carrier Interfaces (ICIs), by algorithmic rules based on the Generic Cell Rate Algorithm (GCRA) formalism. Conformance rules are implemented by policing mechanisms that control the traffic submitted by the user and discard excess traffic. It is therefore...
Determination of the Parameter Sets for the Best Performance of IPS-driven ENLIL Model
Yun, Jongyeon; Choi, Kyu-Cheol; Yi, Jonghyuk; Kim, Jaehun; Odstrcil, Dusan
2016-12-01
The interplanetary scintillation-driven (IPS-driven) ENLIL model was jointly developed by the University of California, San Diego (UCSD) and the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). The model has been in operation at the Korean Space Weather Center (KSWC) since 2014. The IPS-driven ENLIL model has a variety of ambient solar wind parameters, and the results of the model depend on the combination of these parameters. We have conducted research to determine the best combination of parameters to improve the performance of the IPS-driven ENLIL model. The model results for 1,440 combinations of input parameters were compared with Advanced Composition Explorer (ACE) observation data. In this way, the top 10 parameter sets showing the best performance were determined. Finally, the characteristics of these parameter sets were analyzed, and the application of the results to the IPS-driven ENLIL model is discussed.
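The exhaustive parameter sweep described above can be sketched as a generic grid search: run the model for every combination of candidate parameter values, score each run against observations (here by RMSE), and keep the best-scoring sets. The toy model, parameter names, and values below are illustrative stand-ins, not actual ENLIL inputs.

```python
import itertools
import math

def rmse(model_series, obs_series):
    """Root-mean-square error between one model run and the observations."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model_series, obs_series))
                     / len(obs_series))

def rank_parameter_sets(run_model, param_grid, observations, top_n=10):
    """Run the model for every combination in param_grid and return the
    top_n parameter sets with the lowest RMSE against the observations."""
    names = sorted(param_grid)
    scored = []
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        scored.append((rmse(run_model(params), observations), params))
    scored.sort(key=lambda pair: pair[0])  # best (lowest RMSE) first
    return scored[:top_n]

# Toy stand-in model: a line whose slope and offset are the "parameters".
obs = [1.0, 2.0, 3.0, 4.0]
toy = lambda p: [p["slope"] * t + p["offset"] for t in range(4)]
grid = {"slope": [0.5, 1.0, 1.5], "offset": [0.0, 1.0, 2.0]}
best = rank_parameter_sets(toy, grid, obs, top_n=3)
print(best[0])  # the best-fitting (rmse, parameter set) pair
```

With 1,440 real combinations the loop is unchanged; only the grid contents and the model call differ.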
Boesten, J.J.T.I.
2000-01-01
User-dependent subjectivity in the process of testing pesticide leaching models is relevant because it may result in wrong interpretation of model tests. About 20 modellers used the same data set to test pesticide leaching models (one or two models per modeller). The data set included laboratory stu
Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments
Lane, Peter C. R.; Gobet, Fernand
2013-03-01
Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the `speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.
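The non-dominated ("Pareto-optimal") filtering at the heart of such an algorithm can be sketched as a minimal filter over per-dataset error vectors; the candidate parameter sets below are hypothetical, and a full speciated NSGA would wrap selection, crossover, and mutation around this step.

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one (objectives are errors: lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(candidates):
    """Keep the candidates whose error vector is dominated by no other."""
    front = []
    for i, (_, errs) in enumerate(candidates):
        if not any(dominates(e2, errs)
                   for j, (_, e2) in enumerate(candidates) if j != i):
            front.append(candidates[i])
    return front

# Each candidate: (parameter set, errors on two experimental datasets).
cands = [({"lr": 0.1}, (0.2, 0.9)),
         ({"lr": 0.5}, (0.5, 0.5)),
         ({"lr": 0.9}, (0.9, 0.2)),
         ({"lr": 2.0}, (0.6, 0.6))]   # dominated by {"lr": 0.5}
front = non_dominated(cands)
print([p for p, _ in front])
```

The surviving front captures the trade-off between datasets: no single parameter set fits both best, so all non-dominated compromises are kept.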
Directory of Open Access Journals (Sweden)
Liu Gang
2009-01-01
By using the methods of linear algebra and matrix inequality theory, we obtain the characterization of admissible estimators in the general multivariate linear model with respect to an inequality-restricted parameter set. In the classes of homogeneous and general linear estimators, the necessary and sufficient conditions for the estimators of the regression coefficient function to be admissible are established.
A new LPV modeling approach using PCA-based parameter set mapping to design a PSS.
Jabali, Mohammad B Abolhasani; Kazemi, Mohammad H
2017-01-01
This paper presents a new methodology for the modeling and control of power systems based on an uncertain polytopic linear parameter-varying (LPV) approach using parameter set mapping with principal component analysis (PCA). An LPV representation of the power system dynamics is generated by linearization of its differential-algebraic equations about the transient operating points for some given specific faults containing the system nonlinear properties. The time response of the output signal in the transient state plays the role of the scheduling signal that is used to construct the LPV model. A set of sample points of the dynamic response is formed to generate an initial LPV model. PCA-based parameter set mapping is used to reduce the number of models and generate a reduced LPV model. This model is used to design a robust pole placement controller to assign the poles of the power system in a linear matrix inequality (LMI) region, such that the response of the power system has a proper damping ratio for all of the different oscillation modes. The proposed scheme is applied to controller synthesis of a power system stabilizer, and its performance is compared with a tuned standard conventional PSS using nonlinear simulation of a multi-machine power network. The results under various conditions show the robust performance of the proposed controller.
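Assuming the parameter-set samples are arranged as rows of a matrix, the PCA step of such a parameter set mapping can be sketched with a plain SVD; the sample values below are illustrative, not power system data.

```python
import numpy as np

def pca_reduce(samples, n_components):
    """Project parameter-set samples (rows) onto their leading principal
    components, returning the scores, the components, and the mean."""
    X = np.asarray(samples, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    scores = Xc @ components.T  # low-dimensional coordinates of each sample
    return scores, components, mean

# Samples that vary (exactly) along one direction in 3-D parameter space,
# so a single component captures them without loss.
samples = [[1.0, 2.0, 3.0],
           [2.0, 4.0, 6.0],
           [3.0, 6.0, 9.0],
           [4.0, 8.0, 12.0]]
scores, comps, mean = pca_reduce(samples, n_components=1)
print(scores.shape)  # (4, 1): four parameter sets, one coordinate each
```

Reconstructing with `scores @ comps + mean` recovers the original samples, which is the sense in which the reduced model preserves the parameter variation.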
Directory of Open Access Journals (Sweden)
M. F. Loutre
2011-05-01
Many sources of uncertainty limit the accuracy of climate projections. Among them, we focus here on the parameter uncertainty, i.e. the imperfect knowledge of the values of many physical parameters in a climate model. Therefore, we use LOVECLIM, a global three-dimensional Earth system model of intermediate complexity, and vary several parameters within a range based on the expert judgement of model developers. Nine climatic parameter sets and three carbon cycle parameter sets are selected because they yield present-day climate simulations coherent with observations and they cover a wide range of climate responses to doubled atmospheric CO2 concentration and freshwater flux perturbation in the North Atlantic. Moreover, they also lead to a large range of atmospheric CO2 concentrations in response to prescribed emissions. Consequently, we have at our disposal 27 alternative versions of LOVECLIM (each corresponding to one parameter set) that provide very different responses to some climate forcings. The 27 model versions are then used to illustrate the range of responses provided over the recent past, to compare the time evolution of climate variables over the time interval for which they are available (the last few decades up to more than one century) and to identify the outliers and the "best" versions over that particular time span. For example, between 1979 and 2005, the simulated global annual mean surface temperature increase ranges from 0.24 °C to 0.64 °C, while the simulated increase in atmospheric CO2 concentration varies between 40 and 50 ppmv. Measurements over the same period indicate an increase in global annual mean surface temperature of 0.45 °C (Brohan et al., 2006) and an increase in atmospheric CO2 concentration of 44 ppmv (Enting et al., 1994; GLOBALVIEW-CO2, 2006). Only a few parameter sets yield simulations that reproduce the observed key variables of the climate system over the last
Forecasting with the Fokker-Planck model: Bayesian setting of parameter
Montagnon, Chris
2017-04-01
Using a closed solution to a Fokker-Planck model of a time series, a probability distribution for the next point in the time series is developed. This probability distribution has one free parameter. Various Bayesian approaches to setting this parameter are tested by forecasting some real-world time series. Results show a more than 25% reduction in the '95% point' of the probability distribution (the safety stock required in these real-world situations) versus the conventional ARMA approach, without a significant increase in actuals exceeding this level.
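A minimal sketch of one such Bayesian approach, under strong simplifying assumptions not taken from the paper (a Gaussian model with known spread and a flat prior over a discrete grid for the single free parameter): weight each candidate parameter value by its likelihood on the observed series, then read the '95% point' off the posterior-predictive mixture.

```python
import math

def posterior_weights(data, grid, sigma=1.0):
    """Discrete posterior over candidate parameter values (here: the mean
    of a Gaussian with known sigma), starting from a flat prior."""
    logs = [-sum((x - m) ** 2 for x in data) / (2 * sigma ** 2) for m in grid]
    top = max(logs)                       # subtract max for numerical safety
    ws = [math.exp(l - top) for l in logs]
    total = sum(ws)
    return [w / total for w in ws]

def predictive_quantile(q, data, grid, sigma=1.0, lo=-10, hi=10, steps=4000):
    """q-quantile of the posterior-predictive mixture, by scanning its CDF."""
    ws = posterior_weights(data, grid, sigma)
    normal_cdf = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        cdf = sum(w * normal_cdf((x - m) / sigma) for w, m in zip(ws, grid))
        if cdf >= q:
            return x
    return hi

data = [4.8, 5.1, 5.0, 4.9, 5.2]          # a short observed series
grid = [i / 10 for i in range(0, 101)]     # candidate means 0.0 .. 10.0
q95 = predictive_quantile(0.95, data, grid)
print(round(q95, 1))
```

The safety-stock reading is exactly this upper quantile: stock held at `q95` is exceeded by the next observation about 5% of the time under the model.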
Rácz, A; Bajusz, D; Héberger, K
2015-01-01
Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
1999-01-01
We introduce a class of "inverse parametric optimization" problems, in which one is given both a parametric optimization problem and a desired optimal solution; the task is to determine parameter values that lead to the given solution. We describe algorithms for solving such problems for minimum spanning trees, shortest paths, and other "optimal subgraph" problems, and discuss applications in multicast routing, vehicle path planning, resource allocation, and board game programming.
Null subjects: a problem for parameter-setting models of language acquisition.
Valian, V
1990-05-01
Some languages, like English, require overt surface subjects, while others, like Italian and Spanish, allow "null" subjects. How does the young child determine whether or not her language allows null subjects? Modern parameter-setting theory has proposed a solution, in which the child begins acquisition with the null subject parameter set for either the English-like value or the Italian-like value. Incoming data, or the absence thereof, force a resetting of the parameter if the original value was incorrect. This paper argues that the single-value solution cannot work, no matter which value is chosen as the initial one, because of inherent limitations in the child's parser, and because of the presence of misleading input. An alternative dual-value solution is proposed, in which the child begins acquisition with both values available, and uses theory-confirmation procedures to decide which value is best supported by the available data.
Waller, Niels G; Feuerstahler, Leah
2017-03-17
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler Item Response Theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated code that shows how to estimate 4PM item and person parameters (Chalmers, 2012).
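For reference, the 4PM item response function itself is compact enough to state in a few lines; the parameter values below are arbitrary illustrations, not estimates from the MMPI-A scales.

```python
import math

def irf_4pm(theta, a, b, c, d):
    """Four-parameter model item response function:
        P(correct) = c + (d - c) / (1 + exp(-a * (theta - b)))
    with discrimination a, difficulty b, lower asymptote c ("guessing")
    and upper asymptote d ("slipping")."""
    return c + (d - c) / (1 + math.exp(-a * (theta - b)))

# At theta == b the probability sits halfway between the two asymptotes.
p_mid = irf_4pm(0.0, a=1.5, b=0.0, c=0.2, d=0.95)
print(p_mid)  # 0.575 = 0.2 + (0.95 - 0.2) / 2
```

The two asymptotes are what distinguish the 4PM from the 3PM (which fixes d = 1) and the 2PM (which also fixes c = 0).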
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.
The paper introduces the model confidence set (MCS) and applies it to the selection of models. An MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The M...
Minimum QOS Parameter Set in Transport Layer
Institute of Scientific and Technical Information of China (English)
汪芸; 顾冠群
1997-01-01
QOS (Quality of Service) parameter definitions are the basis of further QOS control. However, the QOS parameters defined by organizations such as ISO and ITU are incoherent and incompatible, which leads to inefficiency in QOS control. Based on an analysis of the QOS parameters defined by ISO and ITU, this paper first proposes a Minimum QOS Parameter Set for the transport layer. It demonstrates that the parameters defined by ISO and ITU can be represented by parameters, or combinations of parameters, of the Set. The paper also expounds that the Set is open and manageable, and that it can be the potential unified basis for QOS parameters.
Energy Technology Data Exchange (ETDEWEB)
Bunting, Bruce G [ORNL
2012-10-01
The automotive and engine industries are in a period of very rapid change driven by new emission standards, new types of aftertreatment, new combustion strategies, the introduction of new fuels, and the drive for increased fuel economy and efficiency. The rapid pace of these changes has put more pressure on the need for modeling of engine combustion and performance, in order to shorten product design and introduction cycles. New combustion strategies include homogeneous charge compression ignition (HCCI), partial-premixed combustion compression ignition (PCCI), and dilute low temperature combustion, which are being developed for lower emissions and improved fuel economy. New fuels include bio-fuels such as ethanol or bio-diesel, drop-in bio-derived fuels, and those derived from new crude oil sources such as gas-to-liquids, coal-to-liquids, oil sands, oil shale, and wet natural gas. Kinetic modeling of the combustion process for these new combustion regimes and fuels is necessary in order to allow modeling and performance assessment for engine design purposes. In the research covered by this CRADA, ORNL developed and supplied experimental data related to engine performance with new fuels and new combustion strategies, along with interpretation and analysis of such data and consulting to Reaction Design, Inc. (RD). RD performed additional analysis of this data in order to extract important parameters and to confirm engine and kinetic models. The data generated was generally published to make it available to the engine and automotive design communities and also to the Reaction Design Model Fuels Consortium (MFC).
Tillman, Fred D.; Weaver, James W.
Migration of volatile chemicals from the subsurface into overlying buildings is known as vapor intrusion (VI). Under certain circumstances, people living in homes above contaminated soil or ground water may be exposed to harmful levels of these vapors. A popular VI screening-level algorithm widely used in the United States, Canada and the UK to assess this potential risk is the "Johnson and Ettinger" (J&E) model. Concern exists over using the J&E model for deciding whether or not further action is necessary at sites, as many parameters are not routinely measured (or are un-measurable). Using EPA-recommended ranges of parameter values for nine soil-type/source depth combinations, input parameter sets were identified that correspond to bounding results of the J&E model. The results established the existence of generic upper and lower bound parameter sets for maximum and minimum exposure for all soil types and depths investigated. Using the generic upper and lower bound parameter sets, an analysis can be performed that, given the limitations of the input ranges and the model, bounds the attenuation factor in a VI investigation.
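The bounding idea can be sketched generically: for a screening model that responds monotonically to each input, evaluating it at every corner of the parameter box brackets the output. The toy expression below is purely illustrative and is not the J&E model equations.

```python
import itertools

def bound_model(model, param_ranges):
    """Evaluate model at every corner of the parameter box (each parameter
    at its min or max) and return the (lowest, highest) outputs together
    with the parameter sets that produce them. This brackets the output
    only when the model is monotone in each parameter."""
    names = sorted(param_ranges)
    results = []
    for corner in itertools.product(*(param_ranges[n] for n in names)):
        params = dict(zip(names, corner))
        results.append((model(params), params))
    return min(results, key=lambda r: r[0]), max(results, key=lambda r: r[0])

# Toy attenuation-like expression: output rises with diffusivity and
# falls with source depth (illustrative only).
toy_alpha = lambda p: p["diffusivity"] / p["depth"]
ranges = {"diffusivity": (0.1, 1.0), "depth": (1.0, 10.0)}
low, high = bound_model(toy_alpha, ranges)
print(low[0], high[0])  # the bounding outputs
```

The returned parameter sets play the role of the generic lower- and upper-bound parameter sets described above: any run inside the box falls between them for a monotone model.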
Energy Technology Data Exchange (ETDEWEB)
Ibsen, Lars Bo; Liingaard, M.
2006-12-15
A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. Following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)
Setting parameters in the cold chain
Directory of Open Access Journals (Sweden)
Victoria Rodríguez
2011-12-01
Breaks in the cold chain cause important economic losses in food and pharmaceutical companies. Many of the failures in the cold chain are due to improper adjustment of equipment parameters, such as setting the parameters for theoretical conditions without a corresponding check under normal operation. Companies that transport refrigerated products must be able to adjust the parameters of the equipment easily and quickly to adapt their operation to changing environmental conditions. This article presents the results of a study carried out with a food distribution company. The main objective of the study is to verify the effectiveness of Six Sigma as a methodological tool to adjust the equipment in the cold chain. The second, more specific, objective is to study the impact of reducing the storage volume in the truck, the initial temperature of the storage area in the truck, and the frequency of defrost on the transport of refrigerated products.
Directory of Open Access Journals (Sweden)
Jinshui Zhang
2017-04-01
This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels located adjacent to the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to the untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, tradeoff coefficient (C) and kernel width (s), to map homogeneous specific land cover.
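A drastically simplified sketch of the data-description idea: a centroid-based hypersphere over target samples, tightened so validation outliers fall outside. This stands in for the full SVDD optimization (which solves for the centre and support vectors), and all sample coordinates below are made up.

```python
import math

def fit_sphere(targets):
    """Centre a hypersphere on the target training samples — a crude
    stand-in for SVDD, which would optimise the centre instead."""
    dims = len(targets[0])
    return [sum(x[i] for x in targets) / len(targets) for i in range(dims)]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def tighten_radius(centre, val_targets, val_outliers):
    """Pick a radius covering the validation targets while excluding every
    validation outlier — mimicking how the window-based validation set
    tightens the hypersphere around the target class."""
    max_ok = max(dist(centre, t) for t in val_targets)
    min_bad = min(dist(centre, o) for o in val_outliers)
    return min(max_ok, min_bad * 0.999)

train = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
centre = fit_sphere(train)
radius = tighten_radius(centre, train, [(3.0, 3.0)])
inside = dist(centre, (0.6, 0.6)) <= radius  # classify one new pixel
print(centre, round(radius, 3), inside)
```

Any pixel within `radius` of the centre would be labelled as the target class; outliers close to the target class shrink the radius, which is the tightening effect described above.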
Economic communication model set
Zvereva, Olga M.; Berg, Dmitry B.
2017-06-01
This paper details findings from research targeted at the investigation of economic communications using agent-based models. An agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Every model, while based on the general concept, has its own peculiarities in algorithm and input data set, since each was engineered to solve a specific problem. Several data sets of different origins were used in the experiments: the theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and the real set was constructed from statistical data. During the simulation experiments, the communication process was observed in its dynamics and system macroparameters were estimated. This research confirmed that the combination of an agent-based and a mathematical model can produce a synergetic effect.
DEFF Research Database (Denmark)
Ibsen, Lars Bo; Liingaard, Morten
A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. The lumped-parameter model development has been reported by (Wolf 1991b; Wolf 1991a; Wolf and Paronesso 1991; Wolf and Paronesso 19...
A New Parameter Set for the Relativistic Mean Field Theory
Nerlo-Pomorska, B; Nerlo-Pomorska, Bozena; Sykut, Joanna
2004-01-01
Subtracting the Strutinsky shell corrections from the self-consistent energies obtained within the Relativistic Mean Field Theory (RMFT), we obtained estimates for the macroscopic part of the binding energies of 142 spherical even-even nuclei. By minimizing the root mean square deviations of these estimates from the values obtained with the Lublin-Strasbourg Drop (LSD) model with respect to the nine RMFT parameters, we found the optimal set (NL4). The new parameters also reproduce the radii of these nuclei with an accuracy comparable to that obtained with the NL1 and NL3 sets.
Photovoltaic module parameters acquisition model
Cibira, Gabriel; Koščová, Marcela
2014-09-01
This paper presents basic procedures for photovoltaic (PV) module parameter acquisition using MATLAB and Simulink modelling. In the first step, a theoretical MATLAB/Simulink model is set up to calculate I-V and P-V characteristics for a PV module based on an equivalent electrical circuit. Then, a limited I-V data string is obtained from the examined PV module using standard measurement equipment at standard irradiation and temperature conditions and stored in a MATLAB data matrix as a reference model. Next, the theoretical model is optimized to match the reference model and to learn its basic parameter relations over the sparse data matrix. Finally, PV module parameters can be acquired at different realistic irradiation and temperature conditions, as well as series resistance values. Besides the output power characteristics and efficiency calculation for a PV module or system, the proposed model validates its computations by the statistical deviation from the reference model.
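As a sketch of the equivalent-circuit calculation such a model rests on, the implicit single-diode equation can be solved for current by Newton iteration; the parameter values below are illustrative, not measured module data.

```python
import math

def pv_current(v, i_ph=5.0, i_0=1e-9, r_s=0.1, n_vt=0.7, iters=50):
    """Solve the implicit single-diode equation
        I = Iph - I0 * (exp((V + I*Rs) / (n*Vt)) - 1)
    for the module current I at voltage V, by Newton iteration.
    i_ph: photocurrent, i_0: saturation current, r_s: series resistance,
    n_vt: ideality factor times thermal voltage (all illustrative)."""
    i = i_ph  # start from the short-circuit guess
    for _ in range(iters):
        f = i_ph - i_0 * (math.exp((v + i * r_s) / n_vt) - 1) - i
        df = -i_0 * (r_s / n_vt) * math.exp((v + i * r_s) / n_vt) - 1
        i -= f / df
    return i

# A coarse I-V sweep, as used to build the characteristic curves.
curve = [(v / 10, pv_current(v / 10)) for v in range(0, 160, 20)]
i_sc = pv_current(0.0)
print(round(i_sc, 3))  # 5.0 — essentially the photocurrent at short circuit
```

Sweeping voltage and multiplying by the solved current gives the P-V characteristic; fitting `r_s` and the diode parameters to the measured I-V string is the optimization step the abstract describes.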
Response model parameter linking
Barrett, Michelle Derbenwick
2015-01-01
With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of equating observed scores on different test forms. This thesis argues, however, that the use of item response models does not require
Distributed Parameter Modelling Applications
DEFF Research Database (Denmark)
2011-01-01
Here the issue of distributed parameter models is addressed. Spatial variations as well as time are considered important. Several applications for both steady state and dynamic applications are given. These relate to the processing of oil shale, the granulation of industrial fertilizers and ... sands processing. The fertilizer granulation model considers the dynamics of MAP-DAP (mono- and diammonium phosphates) production within an industrial granulator, which involves complex crystallisation, chemical reaction and particle growth, captured through population balances. A final example considers...
Energy Technology Data Exchange (ETDEWEB)
Oubeidillah, Abdoul A [ORNL; Kao, Shih-Chieh [ORNL; Ashfaq, Moetasim [ORNL; Naz, Bibi S [ORNL; Tootle, Glenn [University of Alabama, Tuscaloosa
2014-01-01
To extend geographical coverage, refine spatial resolution, and improve modeling efficiency, a computation- and data-intensive effort was conducted to organize a comprehensive hydrologic dataset with post-calibrated model parameters for hydro-climate impact assessment. Several key inputs for hydrologic simulation, including meteorological forcings, soil, land class, vegetation, and elevation, were collected from multiple best-available data sources and organized for 2107 hydrologic subbasins (8-digit hydrologic units, HUC8s) in the conterminous United States at a refined 1/24° (~4 km) spatial resolution. Using high-performance computing for intensive model calibration, a high-resolution parameter dataset was prepared for the macro-scale Variable Infiltration Capacity (VIC) hydrologic model. The VIC simulation was driven by DAYMET daily meteorological forcing and was calibrated against USGS WaterWatch monthly runoff observations for each HUC8. The results showed that this new parameter dataset may help reasonably simulate runoff at most US HUC8 subbasins. Based on this exhaustive calibration effort, it is now possible to accurately estimate the resources required for further model improvement across the entire conterminous United States. We anticipate that through this hydrologic parameter dataset, the repeated effort of fundamental data processing can be lessened, so that research efforts can emphasize the more challenging task of assessing climate change impacts. The pre-organized model parameter dataset will be provided to interested parties to support further hydro-climate impact assessment.
Garcia, F.; Mesa, J.; Arruda-Neto, J. D. T.; Helene, O.; Vanin, V.; Milian, F.; Deppman, A.; Rodrigues, T. E.; Rodriguez, O.
2007-03-01
The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures: the derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters to the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.
Program summary:
Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 Mbytes of RAM and 100 MB of hard disk
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229,541
Distribution format: tar.gz
Nature of the physical problem: the investigation of transport mechanisms for
Parameter Symmetry of the Interacting Boson Model
Shirokov, A M; Smirnov, Yu F; Shirokov, Andrey M.; Smirnov, Yu. F.
1998-01-01
We discuss the symmetry of the parameter space of the interacting boson model (IBM). It is shown that for any set of the IBM Hamiltonian parameters (with the only exception of the U(5) dynamical symmetry limit) one can always find another set that generates the equivalent spectrum. We discuss the origin of the symmetry and its relevance for physical applications.
Zhang, Ningyi; Li, Gang; Yu, Shanxiang; An, Dongsheng; Sun, Qian; Luo, Weihong; Yin, Xinyou
2017-01-01
Accurately predicting photosynthesis in response to water and nitrogen stress is the first step toward predicting crop growth, yield and many quality traits under fluctuating environmental conditions. While mechanistic models are capable of predicting photosynthesis under fluctuating environmental conditions, simplifying the parameterization procedure is important toward a wide range of model applications. In this study, the biochemical photosynthesis model of Farquhar, von Caemmerer and Berry (the FvCB model) and the stomatal conductance model of Ball, Woodrow and Berry which was revised by Leuning and Yin (the BWB-Leuning-Yin model) were parameterized for Lilium (L. auratum × speciosum “Sorbonne”) grown under different water and nitrogen conditions. Linear relationships were found between biochemical parameters of the FvCB model and leaf nitrogen content per unit leaf area (Na), and between mesophyll conductance and Na under different water and nitrogen conditions. By incorporating these Na-dependent linear relationships, the FvCB model was able to predict the net photosynthetic rate (An) in response to all water and nitrogen conditions. In contrast, stomatal conductance (gs) can be accurately predicted if parameters in the BWB-Leuning-Yin model were adjusted specifically to water conditions; otherwise gs was underestimated by 9% under well-watered conditions and was overestimated by 13% under water-deficit conditions. However, the 13% overestimation of gs under water-deficit conditions led to only 9% overestimation of An by the coupled FvCB and BWB-Leuning-Yin model whereas the 9% underestimation of gs under well-watered conditions affected little the prediction of An. Our results indicate that to accurately predict An and gs under different water and nitrogen conditions, only a few parameters in the BWB-Leuning-Yin model need to be adjusted according to water conditions whereas all other parameters are either conservative or can be adjusted according to
On Relative Accessibility Depending on a Set of Parameters
Institute of Scientific and Technical Information of China (English)
张明义
1989-01-01
Ewa Orlowska [1,2] presented a class of relations, accessibility relations determined by a set of parameters. These parameters are a formal counterpart of properties or characteristics of states which are associated by means of an accessibility relation. The author has shown how properties of sets of parameters influence properties of accessibility relations and has given interesting results. In this paper, some further algebraic properties of relative accessibility are discussed. We also give a necessary and sufficient condition under which the set of accessibility relations {R(P)}, P ⊆ PAR, forms a lattice by means of the usual set operations.
PARAMETER ESTIMATION OF ENGINEERING TURBULENCE MODEL
Institute of Scientific and Technical Information of China (English)
钱炜祺; 蔡金狮
2001-01-01
A parameter estimation algorithm is introduced and used to determine the parameters in the standard k-ε two-equation turbulence model (SKE). It can be found from the estimation results that although the parameter estimation method is an effective way to determine model parameters, it is difficult to obtain a single set of parameters for SKE that suits all kinds of separated flow, and a modification of the turbulence model structure should be considered. So, a new nonlinear k-ε two-equation model (NNKE) is put forward in this paper and the corresponding parameter estimation technique is applied to determine the model parameters. By implementing the NNKE to solve some engineering turbulent flows, it is shown that NNKE is more accurate and versatile than SKE. Thus, the success of NNKE implies that the parameter estimation technique may have a bright prospect in engineering turbulence model research.
Robust estimation of hydrological model parameters
Directory of Open Access Journals (Sweden)
A. Bárdossy
2008-11-01
The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power several complex optimization algorithms have emerged, but none of the algorithms gives a unique and very best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
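The half-space depth idea can be sketched numerically. The following is a minimal approximation using random projections over a hypothetical cloud of 2-D parameter vectors (invented data, not the HBV/Neckar calibration ensemble): the depth of a point is estimated as the smallest fraction of parameter vectors on either side of a random hyperplane through that point.

```python
import numpy as np

def halfspace_depth(x, points, n_dirs=500, seed=0):
    """Approximate Tukey's half-space depth of x in a finite point cloud:
    the smallest fraction of points contained in a closed half-space whose
    boundary hyperplane passes through x, minimised here over randomly
    drawn directions."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, len(x)))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = (points - x) @ dirs.T                  # shape (n_points, n_dirs)
    frac_pos = (proj >= 0).mean(axis=0)           # fraction on one side
    frac_neg = (proj <= 0).mean(axis=0)           # fraction on the other side
    return float(np.minimum(frac_pos, frac_neg).min())

# Hypothetical ensemble of 2-D parameter vectors
cloud = np.random.default_rng(1).normal(size=(200, 2))
d_center = halfspace_depth(np.zeros(2), cloud)             # deep: near the centre
d_outlier = halfspace_depth(np.array([3.0, 3.0]), cloud)   # shallow: extreme point
```

Deep points (high depth) correspond to parameter vectors surrounded by other well-performing vectors, which is the robustness notion used above.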
The structure of boundary parameter property satisfying sets
Whale, B E
2010-01-01
Precise definitions of singularities in General Relativity rely on a set of curves. Many boundary constructions force a particular set of curves by virtue of the construction. The abstract boundary, however, allows the set of curves to be chosen. This set therefore plays a very important role in the use of the abstract boundary, as the definition of a singularity or point at infinity depends on it. The sets of curves used in the abstract boundary must satisfy the boundary parameter property. This property obfuscates the construction of, and relationships between, these sets of curves. In this paper we lay the groundwork for an analysis of these sets of curves by showing that they are in one-to-one correspondence with certain sets of inextendible curves. As an application of this result we show how the usual set operations can be extended to boundary parameter property satisfying sets of curves, allowing for their comparison. These results provide an interpretation of what information boundary parameter proper...
Parameter Estimation for Thurstone Choice Models
Energy Technology Data Exchange (ETDEWEB)
Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-04-24
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality (i.e., when in expectation each comparison set of that cardinality occurs the same number of times), for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
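As a concrete instance of the pair-comparison special case, the Bradley-Terry strengths can be fitted with the classic MM (Zermelo) iteration. The win counts below are invented for illustration; this is the standard maximum-likelihood fit the abstract builds on, not the rank-breaking estimator it analyses.

```python
import numpy as np

def bradley_terry_mle(wins, n_iter=200):
    """Maximum-likelihood Bradley-Terry strengths via the MM (Zermelo) iteration.

    wins[i][j] = number of times item i beat item j.
    Returns strengths normalised to sum to 1."""
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    games = wins + wins.T                  # comparisons played per pair
    w = np.ones(n)
    for _ in range(n_iter):
        new_w = np.empty(n)
        for i in range(n):
            denom = sum(games[i, j] / (w[i] + w[j]) for j in range(n) if j != i)
            new_w[i] = wins[i].sum() / denom
        w = new_w / new_w.sum()            # fix the scale invariance
    return w

# Invented win counts among three items, for illustration only
strengths = bradley_terry_mle([[0, 8, 7], [2, 0, 6], [3, 4, 0]])
```

Each update divides an item's total wins by its expected number of wins under the current strengths, a monotone ascent on the likelihood.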
TRAPEZOIDAL PLATE BENDING ELEMENT WITH DOUBLE SET PARAMETERS
Institute of Scientific and Technical Information of China (English)
Shao-chun Chen; Dong-yang Shi; I chiro Hagiwara
2003-01-01
Using the double set parameter method, a 12-parameter trapezoidal plate bending element is presented. The first set of degrees of freedom, which make the element convergent, are the values at the four vertices and the midpoints of the four sides, together with the mean values of the outer normal derivatives along the four sides. The second set of degrees of freedom, which make the number of unknowns in the resulting discrete system small and the computation convenient, are the values and the first derivatives at the four vertices of the element. The convergence of the element is proved.
An Improved Attention Parameter Setting Algorithm Based on Award Learning Mechanism
Institute of Scientific and Technical Information of China (English)
Fang Xiuduan; Liu Binhan; Wang Weizhi
2002-01-01
The setting of attention parameters plays a role in the performance of the synergetic neural network based on the PFAP model. This paper first analyzes the attention parameter setting algorithm based on the award-penalty learning mechanism. Then, it presents an improved algorithm to overcome its drawbacks. The experimental results demonstrate that the novel algorithm is better than the original one under the same circumstances.
Analytical one parameter method for PID motion controller settings
Dijk, van J.; Aarts, R.G.K.M.
2012-01-01
In this paper analytical expressions for PID-controllers settings for electromechanical motion systems are presented. It will be shown that by an adequate frequency domain oriented parametrization, the parameters of a PID-controller are analytically dependent on one variable only, the cross-over fre
Mode choice model parameters estimation
Strnad, Irena
2010-01-01
The present work focuses on parameter estimation of two mode choice models: the multinomial logit and the EVA 2 model, where four different modes and five different trip purposes are taken into account. A mode choice model describes the behavioral aspect of mode choice and enables its application within a traffic model. It covers the trip factors that affect the choice of each mode and their relative importance to the choice made. When trip factor values are known, it...
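The multinomial logit choice probabilities themselves are straightforward to compute as a softmax over systematic utilities; a minimal sketch with hypothetical utilities for four modes (the coefficients in a real application come from the estimation described above):

```python
import math

def mode_probabilities(utilities):
    """Multinomial logit choice probabilities from systematic utilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical utilities for car, bus, rail, walk on one trip purpose
probs = mode_probabilities([-0.5, -1.2, -0.9, -2.0])
```

The probabilities always sum to one, and the mode with the highest utility receives the largest share.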
Directed Subset Feedback Vertex Set is Fixed-Parameter Tractable
Chitnis, Rajesh; Hajiaghayi, MohammadTaghi; Marx, Dániel
2012-01-01
Given a graph $G$ and an integer $k$, the \\textsc{Feedback Vertex Set} (\\textsc{FVS}) problem asks if there is a vertex set $T$ of size at most $k$ that hits all cycles in the graph. Bodlaender (WG '91) gave the first fixed-parameter algorithm for \\textsc{FVS} in undirected graphs. The fixed-parameter tractability status of \\textsc{FVS} in directed graphs was a long-standing open problem until Chen et al. (STOC '08) showed that it is fixed-parameter tractable by giving an $4^{k}k!n^{O(1)}$ algorithm. In the subset versions of this problems, we are given an additional subset $S$ of vertices (resp. edges) and we want to hit all cycles passing through a vertex of $S$ (resp. an edge of $S$). Indeed both the edge and vertex versions are known to be equivalent in the parameterized sense. Recently the \\textsc{Subset Feedback Vertex Set} in undirected graphs was shown to be FPT by Cygan et al. (ICALP '11) and Kakimura et al. (SODA '12). We generalize the result of Chen et al. (STOC '08) by showing that \\textsc{Subset...
The Study of the Optimal Parameter Settings in a Hospital Supply Chain System in Taiwan
Directory of Open Access Journals (Sweden)
Hung-Chang Liao
2014-01-01
This study proposed the optimal parameter settings for the hospital supply chain system (HSCS) when either the total system cost (TSC) or patient safety level (PSL), or both simultaneously, was considered as the measure of the HSCS’s performance. Four parameters were considered in the HSCS: safety stock, maximum inventory level, transportation capacity, and the reliability of the HSCS. A full-factor experimental design was used to simulate an HSCS for the purpose of collecting data. The response surface method (RSM) was used to construct the regression model, and a genetic algorithm (GA) was applied to obtain the optimal parameter settings for the HSCS. The results show that the best method of obtaining the optimal parameter settings for the HSCS is the simultaneous consideration of both the TSC and the PSL to measure performance. Also, the results of sensitivity analysis based on the optimal parameter settings were used to derive adjustable strategies for the decision-makers.
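The RSM-plus-GA pipeline can be illustrated end to end on a toy stand-in for the HSCS simulation (a noisy quadratic with a known optimum; all numbers are invented): run a designed set of experiments, fit a second-order response surface by least squares, then let a simple genetic algorithm search the fitted surface.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_cost(x):
    """Toy stand-in for one simulated HSCS run: noisy cost with optimum (2, -1)."""
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2 + rng.normal(0, 0.05)

# 1) Full-factorial design and quadratic response surface
grid = np.array([[a, b] for a in np.linspace(0, 4, 5) for b in np.linspace(-3, 1, 5)])
y = np.array([simulate_cost(p) for p in grid])
X = np.column_stack([np.ones(len(grid)), grid[:, 0], grid[:, 1],
                     grid[:, 0] ** 2, grid[:, 1] ** 2, grid[:, 0] * grid[:, 1]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def surface(p):
    return beta @ np.array([1.0, p[0], p[1], p[0] ** 2, p[1] ** 2, p[0] * p[1]])

# 2) Simple genetic algorithm (truncation selection + Gaussian mutation)
pop = rng.uniform([0, -3], [4, 1], size=(40, 2))
for _ in range(60):
    fitness = np.array([surface(p) for p in pop])
    parents = pop[np.argsort(fitness)[:10]]
    children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.1, (40, 2))
    pop = np.clip(children, [0, -3], [4, 1])
best = pop[np.argmin([surface(p) for p in pop])]
```

Searching the cheap fitted surface instead of the expensive simulator is the point of the RSM step; crossover is omitted here for brevity.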
Baker Syed; Poskar C; Junker Björn
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. Wh...
Just, Susann; Sievert, Frank; Thommes, Markus; Breitkreutz, Jörg
2013-11-01
Hot-melt extrusion is gaining importance for the production of amorphous solid solutions; in parallel, predictive tools for estimating drug solubility in polymers are increasingly demanded. The Hansen solubility parameter (SP) approach is well acknowledged for its predictive power of the miscibility of liquids as well as the solubility of some amorphous solids in liquid solvents. By solely using the molecular structure, group contribution (GC) methods allow the calculation of Hansen SPs. The GC parameter sets available were derived from liquids and polymers which conflicts with the object of prediction, the solubility of solid drugs. The present study takes a step from the liquid based SPs toward their application to solid solutes. On the basis of published experimental Hansen SPs of solid drugs and excipients only, a new GC parameter set was developed. In comparison with established parameter sets by van Krevelen/Hoftyzer, Beerbower/Hansen, Breitkreutz and Stefanis/Panayiotou, the new GC parameter set provides the highest overall predictive power for solubility experiments (correlation coefficient r = -0.87 to -0.91) as well as for literature data on melt extrudates and casted films (r = -0.78 to -0.96).
A market model: uncertainty and reachable sets
Directory of Open Access Journals (Sweden)
Raczynski Stanislaw
2015-01-01
Uncertain parameters are always present in models that include the human factor. In marketing, uncertain consumer behavior makes it difficult to predict future events and elaborate good marketing strategies. Sometimes uncertainty is modeled using stochastic variables. Our approach is quite different: the dynamic market with uncertain parameters is treated using differential inclusions, which makes it possible to determine the corresponding reachable sets. This is not a statistical analysis; we are looking for solutions to the differential inclusions. The purpose of the research is to find a way to obtain and visualise the reachable sets, in order to know the limits for the important marketing variables. The modeling method consists of defining the differential inclusion and finding its solution, using the differential inclusion solver developed by the author. As the result we obtain images of the reachable sets where the main control parameter is the share of investment, being a part of the revenue. As an additional result we can also define the optimal investment strategy. The conclusion is that the differential inclusion solver can be a useful tool in market model analysis.
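A sampling-based sketch of the differential-inclusion idea (not the author's solver): integrate many trajectories of a toy one-dimensional inclusion, each with a randomly sampled admissible selection, and bound the endpoints. The dynamics and bounds below are invented; note that sampling interior selections tends to under-approximate the true reachable set, which extremal (bang-bang) selections would widen.

```python
import numpy as np

rng = np.random.default_rng(0)

def reachable_endpoints(x0, t_end, dt=0.01, n_traj=300, u_bounds=(-0.5, 0.5)):
    """Sample trajectories of the inclusion  x' in -0.3*x + [u_min, u_max],
    a toy stand-in for a market variable driven by an uncertain parameter.
    Returns the min and max endpoint over the sampled trajectories."""
    steps = int(t_end / dt)
    finals = []
    for _ in range(n_traj):
        x = x0
        for _ in range(steps):
            u = rng.uniform(*u_bounds)      # one admissible selection, sampled
            x += dt * (-0.3 * x + u)        # explicit Euler step
        finals.append(x)
    return min(finals), max(finals)

lo, hi = reachable_endpoints(1.0, t_end=5.0)
```

The interval [lo, hi] approximates the projection of the reachable set at t_end; plotting all endpoints over a grid of times would reproduce the "images of reachable sets" described above.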
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Directory of Open Access Journals (Sweden)
Hadiyanto Hadiyanto
2012-05-01
Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first, heat and mass transfer related parameters, then the parameters related to product transformations, and finally product quality parameters. There was a fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for the surface properties such as crispness and color. The model with adjusted parameters was applied in a quality driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well the behavior under dynamic convective operation and under combined convective and microwave operation. It is expected that the suitability between model and baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels.
Parallel axes gear set optimization in two-parameter space
Theberge, Y.; Cardou, A.; Cloutier, L.
1991-05-01
This paper presents a method for optimal spur and helical gear transmission design that may be used in a computer aided design (CAD) approach. The design objective is generally taken as obtaining the most compact set for a given power input and gear ratio. A mixed design procedure is employed which relies both on heuristic considerations and computer capabilities. Strength and kinematic constraints are considered in order to define the domain of feasible designs. Constraints allowed include: pinion tooth bending strength, gear tooth bending strength, surface stress (resistance to pitting), scoring resistance, pinion involute interference, gear involute interference, minimum pinion tooth thickness, minimum gear tooth thickness, and profile or transverse contact ratio. A computer program was developed which allows the user to input the problem parameters, to select the calculation procedure, to see constraint curves in graphic display, to have an objective function level curve drawn through the design space, to point at a feasible design point and to have constraint values calculated at that point. The user can also modify some of the parameters during the design process.
Institute of Scientific and Technical Information of China (English)
夏卫明; 骆桂林; 嵇宽斌
2013-01-01
Based on the ANSYS finite element software, and in view of the differences in finite element results between the plane stress axisymmetric model, the solid model, and the cross-section plane stress model of the shaft-hole interference fit contact problem, the parameter settings of the plane stress axisymmetric contact model were studied. In particular, the interactions among the geometric interference and gap of the shaft-hole fit model, the contact element real constant CNOF, and the contact element key option KEYOPT(9) were investigated. The results provide a reference for readers dealing with similar problems.
Systematic parameter inference in stochastic mesoscopic modeling
Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
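The compressive-sensing step can be sketched with a plain ISTA (proximal-gradient) solver for the l1-regularised least-squares problem. The basis matrix below is a random stand-in for sampled gPC basis evaluations, and the sparse coefficient vector is invented for illustration; the paper's actual basis and properties are not reproduced here.

```python
import numpy as np

def ista_lasso(A, b, lam=0.01, n_iter=2000):
    """ISTA: proximal gradient for  min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    the l1-regularised regression at the heart of compressive sensing."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - b)      # gradient step on the quadratic part
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 15)) / np.sqrt(40)   # stand-in for sampled basis values
c_true = np.zeros(15)
c_true[2], c_true[7] = 1.5, -2.0              # "sparse" dominant coefficients
b = A @ c_true                                # noiseless target-property samples
c_hat = ista_lasso(A, b)
```

Because only two coefficients are nonzero, far fewer samples than basis terms suffice, which is exactly the weaker sampling restriction claimed above.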
Roe, Byron
2013-01-01
The effect of correlations between model parameters and nuisance parameters is discussed, in the context of fitting model parameters to data. Modifications to the usual $\\chi^2$ method are required. Fake data studies, as used at present, will not be optimum. Problems will occur for applications of the Maltoni-Schwetz \\cite{ms} theorem. Neutrino oscillations are used as examples, but the problems discussed here are general ones, which are often not addressed.
Drusano, G L; Liu, W; Perkins, R; Madu, A; Madu, C; Mayers, M; Miller, M H
1995-08-01
Robust determination of the concentration-time profile of anti-infective agents in certain specialized compartments is often limited by the inability to obtain more than a single sample from such a site in any one subject. Vitreous humor and cerebrospinal fluid are obvious examples for which the determination of concentrations of anti-infective agents is limited. Advances in pharmacodynamics have pointed out the importance of understanding the profiles of drugs in the plasma and in specialized compartments in order to dose the drugs to obtain the best patient outcomes. Advances in population pharmacokinetic modeling hold the promise of allowing proper estimation of drug penetration into the vitreous (or other specialized compartment) with only a single vitreous sample, in conjunction with plasma sampling. We have developed a rabbit model which allows multiple samples of vitreous to be obtained without breaking down the blood-vitreous barrier. We have employed this model to test the hypothesis that robust estimates of vitreous penetration by the fluoroquinolone ciprofloxacin can be obtained from a traditional intensive plasma sampling set plus a single vitreous sample. We studied 33 rabbits which were receiving 40 mg of ciprofloxacin per kg of body weight intravenously as short infusions and from which multiple plasma and vitreous samples were obtained and assayed for ciprofloxacin content by high-performance liquid chromatography. Data were analyzed by the iterative two-stage population modeling technique (IT2S), employing the iterative two-stage program of Forrest et al. (Antimicrob. Agents Chemother. 37:1065-1072, 1993). Two data sets were analyzed: all plasma and vitreous samples versus all plasma samples and the initially obtained single vitreous sample. The pharmacokinetic parameter values identified were used to calculate the percent vitreous penetration as the ratio of the area under the concentration-time curve for the vitreous to that for the plasma. The
An Optimization Model of Tunnel Support Parameters
Directory of Open Access Journals (Sweden)
Su Lijuan
2015-05-01
An optimization model was developed to obtain the ideal values of the primary support parameters of tunnels, which are wide-ranging in high-speed railway design codes when the surrounding rocks are at the III, IV, and V levels. First, several sets of experiments were designed and simulated using the FLAC3D software under an orthogonal experimental design. Six factors, namely, level of surrounding rock, buried depth of tunnel, lateral pressure coefficient, anchor spacing, anchor length, and shotcrete thickness, were considered. Second, a regression equation was generated by conducting a multiple linear regression analysis following the analysis of the simulation results. Finally, the optimization model of support parameters was obtained by solving the regression equation using the least squares method. In practical projects, the optimized values of support parameters could be obtained by integrating known parameters into the proposed model. In this work, the proposed model was verified on the basis of the Liuyang River Tunnel Project. Results show that the optimization model significantly reduces related costs. The proposed model can also be used as a reliable reference for other high-speed railway tunnels.
Parameter estimation of hydrologic models using data assimilation
Kaheil, Y. H.
2005-12-01
The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper describes a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper presents a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined. The SVM model has three parameters. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.
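The narrowing-the-sampling-space idea can be caricatured in a few lines. This is a deliberately simplified sketch, not the LoBaRE algorithm itself: sample within bounds, keep the best-performing fraction, and shrink the bounds to the elite's extent. The toy "model" and observations are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def calibrate(loss, lo, hi, n_samples=200, n_rounds=8, keep=0.2):
    """Iteratively narrow parameter bounds around the best-performing samples
    (a simplified sketch of recursive bound refinement)."""
    lo, hi = np.array(lo, float), np.array(hi, float)
    for _ in range(n_rounds):
        pop = rng.uniform(lo, hi, size=(n_samples, len(lo)))
        scores = np.array([loss(p) for p in pop])
        elite = pop[np.argsort(scores)[: int(keep * n_samples)]]
        lo, hi = elite.min(axis=0), elite.max(axis=0)   # updated "parent" bounds
    return (lo + hi) / 2

# Toy model: streamflow ~ a * rain + b, with synthetic observations
rain = np.linspace(0, 10, 50)
obs = 0.7 * rain + 2.0
best = calibrate(lambda p: np.mean((p[0] * rain + p[1] - obs) ** 2),
                 lo=[0, 0], hi=[2, 5])
```

Keeping a region (the elite's bounding box) rather than a single best point is what distinguishes this style of search from plain optimization.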
Systematic parameter inference in stochastic mesoscopic modeling
Lei, Huan; Li, Zhen; Karniadakis, George
2016-01-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are sparse. The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples especially for systems with high dimensional parametric space....
Parameter optimization model in electrical discharge machining process
Institute of Scientific and Technical Information of China (English)
无
2008-01-01
The electrical discharge machining (EDM) process is, at present, still an experience-driven process, wherein selected parameters are often far from the optimum, while selecting optimized parameters is costly and time consuming. In this paper, an artificial neural network (ANN) and a genetic algorithm (GA) are used together to establish the parameter optimization model. An ANN model which adopts the Levenberg-Marquardt algorithm has been set up to represent the relationship between material removal rate (MRR) and input parameters, and the GA is used to optimize the parameters, so that optimization results are obtained. The model is shown to be effective, and MRR is improved using the optimized machining parameters.
Spatial occupancy models for large data sets
Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.
2013-01-01
Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km²) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.
Exploring the interdependencies between parameters in a material model.
Energy Technology Data Exchange (ETDEWEB)
Silling, Stewart Andrew; Fermen-Coker, Muge
2014-01-01
A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.
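One simple way to detect such interdependencies is to examine correlations across an ensemble of equally plausible fitted parameter sets. The ensemble below is synthetic, with one Johnson-Cook-like parameter pair deliberately tied together; it is not the iron/steel data used in the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of 500 equally well-fitting parameter sets for a
# Johnson-Cook-like model: columns A, B, n (n deliberately tied to B)
A = rng.normal(350.0, 10.0, 500)
B = rng.normal(275.0, 20.0, 500)
n = 0.1 + 0.001 * B + rng.normal(0.0, 0.002, 500)
params = np.column_stack([A, B, n])

corr = np.corrcoef(params, rowvar=False)   # pairwise interdependency matrix
```

A pairwise correlation near ±1 (here between B and n) flags a redundant parameter that could be expressed in terms of the other, reducing the parameter count as described above.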
Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…
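A minimal sketch of Bayesian estimation under the two-parameter logistic model, using random-walk Metropolis (a simpler MCMC cousin of the Gibbs sampler studied here) to sample one examinee's ability given known item parameters. Items, responses, and the standard normal prior are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical items and a simulated examinee with true ability theta = 1.0
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])   # discriminations
b = np.array([-1.0, 0.0, 0.5, 1.0, -0.5]) # difficulties
resp = (rng.uniform(size=5) < p_correct(1.0, a, b)).astype(int)

def log_post(theta):
    """Bernoulli log-likelihood of the responses plus a N(0, 1) prior."""
    p = p_correct(theta, a, b)
    return np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p)) - 0.5 * theta ** 2

# Random-walk Metropolis over theta
samples, theta = [], 0.0
for _ in range(5000):
    prop = theta + rng.normal(0, 0.5)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
post_mean = np.mean(samples[1000:])        # discard burn-in
```

A full Gibbs scheme would alternate analogous draws for the item parameters a and b as well; this sketch conditions on them for brevity.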
Transmission, Acquisition, Parameter-Setting, Reanalysis, and Language Change
Mufwene, Salikoko S.
2011-01-01
Jurgen Meisel's (JM) article is literally thought-provoking, especially for the issues that one can raise out of the central position that he develops, viz., "although bilingual acquisition in situations of language contact can be argued to be of significant importance for explanations of grammatical change, reanalysis affecting parameter settings…
Emergence and spread of antibiotic resistance: setting a parameter space.
Martínez, José Luis; Baquero, Fernando
2014-05-01
The emergence and spread of antibiotic resistance among human pathogens is a relevant problem for human health and one of the few evolution processes amenable to experimental studies. In the present review, we discuss some basic aspects of antibiotic resistance, including mechanisms of resistance, origin of resistance genes, and bottlenecks that modulate the acquisition and spread of antibiotic resistance among human pathogens. In addition, we analyse several parameters that modulate the evolution landscape of antibiotic resistance. Learning why some resistance mechanisms emerge but do not evolve after a first burst, whereas others can spread over the entire world very rapidly, mimicking a chain reaction, is important for predicting the evolution, and relevance for human health, of a given mechanism of resistance. Because of this, we propose that the emergence and spread of antibiotic resistance can only be understood in a multi-parameter space. Measuring the effect on antibiotic resistance of parameters such as contact rates, transfer rates, integration rates, replication rates, diversification rates, and selection rates, for different genes and organisms, growing under different conditions in distinct ecosystems, will allow for a better prediction of antibiotic resistance and possibilities of focused interventions.
Parameter redundancy in discrete state-space and integrated models.
Cole, Diana J; McCrea, Rachel S
2016-09-01
Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
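The redundancy check described above can be imitated numerically: a model is parameter redundant when the Jacobian of an exhaustive summary with respect to the parameters has rank below the number of parameters. The summary terms below are a made-up toy, not one of the paper's models.

```python
import numpy as np

def exhaustive_summary(p):
    """Hypothetical exhaustive summary kappa(p) = (p1*p2, p1*p2*p3, p3)."""
    p1, p2, p3 = p
    return np.array([p1 * p2, p1 * p2 * p3, p3])

def jacobian(f, p, eps=1e-6):
    """Forward-difference Jacobian of f at p."""
    f0 = f(p)
    J = np.zeros((f0.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += eps
        J[:, j] = (f(dp) - f0) / eps
    return J

p = np.array([0.7, 1.3, 0.4])
J = jacobian(exhaustive_summary, p)
# Explicit tolerance swamps the finite-difference error in J
rank = np.linalg.matrix_rank(J, tol=1e-4)
print(rank, p.size)  # rank 2 < 3 parameters -> parameter redundancy
```

Here p3 and the product p1*p2 are estimable, but p1 and p2 are not separately identifiable; the formal methods in the paper derive such estimable combinations symbolically rather than numerically.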
Parameter estimation and error analysis in environmental modeling and computation
Kalmaz, E. E.
1986-01-01
A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.
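The core least-squares step with an error estimate can be sketched in a few lines; the decay model, noise level, and starting values below are illustrative assumptions, not the modular program described in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def model(x, a, b):
    """Hypothetical first-order decay model y = a * exp(-b * x)."""
    return a * np.exp(-b * x)

# Synthetic observations with a known error structure
x = np.linspace(0, 5, 50)
y = model(x, 2.5, 0.8) + rng.normal(0.0, 0.05, x.size)

# Nonlinear least-squares fit; pcov approximates the parameter
# covariance matrix, from which standard errors follow.
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])
stderr = np.sqrt(np.diag(pcov))
print(np.round(popt, 2), np.round(stderr, 3))
```

Interactive refitting of different model forms to the same data, as the abstract describes, amounts to swapping out `model` while keeping the data and error analysis fixed.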
Parameter estimation, model reduction and quantum filtering
Chase, Bradley A.
This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The last is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and therefore must be filtered in order to extract information about the target system. This is analogous to the classical filtering problem in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter when given continuous measurements of the probe system to which it couples. For cases when the probe takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. The technique involves double-passing a probe laser field through the atomic system, giving
A SET OF 12-PARAMETER RECTANGULAR PLATE ELEMENT WITH HIGH ACCURACY
Institute of Scientific and Technical Information of China (English)
ChenShaochun; LuoLaixing
1999-01-01
Abstract. Using the method of undetermined functions, a set of 12-parameter rectangular plate elements with double set parameters and geometric symmetry is constructed. Their consistency error is O(h²), one order higher than the usual 12-parameter rectangular plate elements.
Parameter optimization in S-system models
Directory of Open Access Journals (Sweden)
Vasconcelos Ana
2008-04-01
Background The inverse problem of identifying the topology of biological networks from their time series responses is a cornerstone challenge in systems biology. We tackle this challenge here through the parameterization of S-system models. It was previously shown that parameter identification can be performed as an optimization based on the decoupling of the differential S-system equations, which results in a set of algebraic equations. Results A novel parameterization solution is proposed for the identification of S-system models from time series when no information about the network topology is known. The method is based on eigenvector optimization of a matrix formed from multiple regression equations of the linearized decoupled S-system. Furthermore, the algorithm is extended to the optimization of network topologies with constraints on metabolites and fluxes. These constraints rejoin the system in cases where it had been fragmented by decoupling. We demonstrate with synthetic time series why the algorithm can be expected to converge in most cases. Conclusion A procedure was developed that facilitates automated reverse engineering tasks for biological networks using S-systems. The proposed method of eigenvector optimization constitutes an advancement over S-system parameter identification from time series using a recent method called Alternating Regression. The proposed method overcomes convergence issues encountered in alternate regression by identifying nonlinear constraints that restrict the search space to computationally feasible solutions. Because the parameter identification is still performed for each metabolite separately, the modularity and linear time characteristics of the alternating regression method are preserved. Simulation studies illustrate how the proposed algorithm identifies the correct network topology out of a collection of models which all fit the dynamical time series essentially equally well.
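For readers unfamiliar with the model class, an S-system describes each state variable by a power-law production term minus a power-law degradation term. The sketch below simply simulates a hypothetical two-variable S-system by Euler stepping; all exponents and rate constants are invented for illustration and bear no relation to the paper's networks.

```python
import numpy as np

def s_system_step(X, alpha, g, beta, h, dt):
    """One Euler step of dX_i/dt = alpha_i * prod_j X_j^g_ij
                                   - beta_i * prod_j X_j^h_ij."""
    prod_g = np.prod(X ** g, axis=1)   # production kinetic orders
    prod_h = np.prod(X ** h, axis=1)   # degradation kinetic orders
    return X + dt * (alpha * prod_g - beta * prod_h)

# Hypothetical two-metabolite S-system
alpha = np.array([2.0, 1.5])
beta = np.array([1.0, 1.0])
g = np.array([[0.0, -0.5],    # X2 inhibits production of X1
              [0.5,  0.0]])   # X1 activates production of X2
h = np.array([[0.8, 0.0],
              [0.0, 0.6]])

X = np.array([1.0, 1.0])
traj = [X]
for _ in range(2000):
    X = s_system_step(X, alpha, g, beta, h, dt=0.005)
    traj.append(X)
traj = np.array(traj)
print(np.round(traj[-1], 3))  # approach to a steady state
```

Decoupling, as used in the paper, replaces the left-hand derivatives with slope estimates from the time series, turning each equation into an algebraic regression problem that can be solved per metabolite.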
Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set
Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.
2000-06-01
Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter Set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular Whissel suggested that emotions are points in a space, which seem to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissel's; instead they tend to form an approximately circular pattern, called 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be defined as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP point and the activation and angular parameters we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
Spin foam models as energetic causal sets
Cortês, Marina
2014-01-01
Energetic causal sets are causal sets endowed with a flow of energy-momentum between causally related events. These incorporate a novel mechanism for the emergence of space-time from causal relations. Here we construct a spin foam model which is also an energetic causal set model. This model is closely related to the model introduced by Wieland, and this construction makes use of results derived there. What makes a spin foam model also an energetic causal set is Wieland's identification of new momenta, conserved at events (or four-simplices), whose norms are not mass, but the volume of tetrahedra. This realizes the torsion constraints, which are missing in previous spin foam models, and are needed to relate the connection dynamics to those of the metric, as in general relativity. This identification makes it possible to apply the new mechanism for the emergence of space-time to a spin foam model.
Fault Detection of Wind Turbines with Uncertain Parameters: A Set-Membership Approach
Directory of Open Access Journals (Sweden)
Thomas Bak
2012-07-01
In this paper, a set-membership approach for fault detection of a benchmark wind turbine is proposed. The benchmark represents relevant fault scenarios in the control system, including sensor, actuator and system faults. In addition we also consider parameter uncertainties and uncertainties on the torque coefficient. High noise on the wind speed measurement, nonlinearities in the aerodynamic torque and uncertainties on the parameters make fault detection a challenging problem. We use an effective wind speed estimator to reduce the noise on the wind speed measurements. A set-membership approach is used to generate a set that contains all states consistent with the past measurements and the given model of the wind turbine including uncertainties and noise. This set represents all possible states the system can be in if not faulty. If the current measurement is not consistent with this set, a fault is detected. For representation of these sets we use zonotopes, and for modeling of uncertainties we use matrix zonotopes, which yields a computationally efficient algorithm. The method is applied to the wind turbine benchmark problem without and with uncertainties. The result demonstrates the effectiveness of the proposed method compared to other proposed methods applied to the same problem. An advantage of the proposed method is that there is no need for threshold design, and it does not produce false alarms. In the case where uncertainty on the torque lookup table is introduced, some faults are not detectable. Previous research has not addressed this uncertainty. The method proposed here requires equal or less detection time than previous results.
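A minimal sketch of the set-membership idea, using plain intervals rather than the zonotopes of the paper: propagate an uncertain one-state model one step ahead as a set, then flag a fault whenever the measurement falls outside the predicted set. The model, bounds, and numbers are all invented for illustration.

```python
def predict_interval(x_lo, x_hi, a_lo, a_hi, w):
    """Interval image of x_next = a*x + noise, with a in [a_lo, a_hi],
    x in [x_lo, x_hi] (both positive here) and |noise| <= w."""
    candidates = [a_lo * x_lo, a_lo * x_hi, a_hi * x_lo, a_hi * x_hi]
    return min(candidates) - w, max(candidates) + w

x_lo, x_hi = 0.9, 1.1        # current state set
a_lo, a_hi = 0.45, 0.55      # uncertain model parameter
w = 0.05                     # bounded noise

lo, hi = predict_interval(x_lo, x_hi, a_lo, a_hi, w)
print(lo <= 0.50 <= hi)  # consistent measurement: True (no fault)
print(lo <= 1.20 <= hi)  # inconsistent measurement: False -> fault flagged
```

Because the predicted set over-approximates every fault-free behaviour, an inconsistent measurement is a guaranteed fault indication, which is why no detection threshold needs to be tuned.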
HEMODOSE: A Set of Multi-parameter Biodosimetry Tools
Hu, Shaowen; Blakely, William F.; Cucinotta, Francis A.
2014-01-01
After the events of September 11, 2001 and recent events at the Fukushima reactors in Japan, there is an increasing concern of the occurrence of nuclear and radiological terrorism or accidents that may result in mass casualties in densely populated areas. To guide medical personnel in their clinical decisions for effective medical management and treatment of the exposed individuals, biological markers are usually applied to examine the radiation induced changes at different biological levels. Among these the peripheral blood cell counts are widely used to assess the extent of radiation induced injury. This is due to the fact that the hematopoietic system is the most vulnerable part of the human body to radiation damage. Particularly, the lymphocyte, granulocyte, and platelet cells are the most radiosensitive of the blood elements, and monitoring their changes after exposure is regarded as the most practical and best laboratory test to estimate radiation dose. The HEMODOSE web tools are built upon solid physiological and pathophysiological understanding of mammalian hematopoietic systems, and rigorous coarse-grained biomathematical modeling and validation. Using single or serial granulocyte, lymphocyte, leukocyte, or platelet counts after exposure, these tools can estimate absorbed doses of adult victims very rapidly and accurately. Some patient data in historical accidents are utilized as examples to demonstrate the capabilities of these tools as a rapid point-of-care diagnostic or centralized high-throughput assay system in a large scale radiological disaster scenario. Unlike previous dose prediction algorithms, the HEMODOSE web tools establish robust correlations between the absorbed doses and victim's various types of blood cell counts not only in the early time window (1 or 2 days), but also in very late phase (up to 4 weeks) after exposure.
Regionalization of SWAT Model Parameters for Use in Ungauged Watersheds
Directory of Open Access Journals (Sweden)
Indrajeet Chaubey
2010-11-01
There has been a steady shift towards modeling and model-based approaches as primary methods of assessing watershed response to hydrologic inputs and land management, and of quantifying watershed-wide best management practice (BMP) effectiveness. Watershed models often require some degree of calibration and validation to achieve adequate watershed and therefore BMP representation. This is, however, only possible for gauged watersheds. There are many watersheds for which there are very little or no monitoring data available, thus the question arises as to whether it would be possible to extend and/or generalize model parameters obtained through calibration of gauged watersheds to ungauged watersheds within the same region. This study explored the possibility of developing regionalized model parameter sets for use in ungauged watersheds. The study evaluated two regionalization methods, global averaging and regression-based parameters, on the SWAT model using data from priority watersheds in Arkansas. Resulting parameters were tested and model performance determined on three gauged watersheds. Nash-Sutcliffe efficiencies (NS) for stream flow obtained using regression-based parameters (0.53–0.83) compared well with corresponding values obtained through model calibration (0.45–0.90). Model performance obtained using global averaged parameter values was also generally acceptable (0.4 ≤ NS ≤ 0.75). Results from this study indicate that regionalized parameter sets for the SWAT model can be obtained and used for making satisfactory hydrologic response predictions in ungauged watersheds.
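The Nash-Sutcliffe efficiency used above has a one-line definition: one minus the ratio of the model's squared error to the variance of the observations about their mean. A small sketch with made-up flow values:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency. NS = 1 is a perfect fit; NS <= 0 means
    the model predicts no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [3.1, 4.0, 5.2, 4.8, 3.9]
ns_good = nash_sutcliffe(obs, [3.0, 4.1, 5.0, 4.9, 4.0])  # close tracking
ns_flat = nash_sutcliffe(obs, [4.2] * 5)                  # constant at the mean
print(round(ns_good, 2), round(ns_flat, 2))
```

The constant prediction scores essentially zero, which is why the 0.4 to 0.9 range reported in the study indicates genuinely informative parameter sets.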
Producing a Set of Models for the Iron Homeostasis Network
Directory of Open Access Journals (Sweden)
Nicolas Mobilia
2013-08-01
Full Text Available This paper presents a method for modeling biological systems which combines formal techniques on intervals, numerical simulations and satisfaction of Signal Temporal Logic (STL formulas. The main modeling challenge addressed by this approach is the large uncertainty in the values of the parameters due to the experimental difficulties of getting accurate biological data. This method considers intervals for each parameter and a formal description of the expected behavior of the model. In a first step, it produces reduced intervals of possible parameter values. Then by performing a systematic search in these intervals, it defines sets of parameter values used in the next step. This procedure aims at finding a sub-space where the model robustly behaves as expected. We apply this method to the modeling of the cellular iron homeostasis network in erythroid progenitors. The produced model describes explicitly the regulation mechanism which acts at the translational level.
Rough set models of Physarum machines
Pancerz, Krzysztof; Schumann, Andrew
2015-04-01
In this paper, we consider transition system models of behaviour of Physarum machines in terms of rough set theory. A Physarum machine, a biological computing device implemented in the plasmodium of Physarum polycephalum (true slime mould), is a natural transition system. In the behaviour of Physarum machines, one can notice some ambiguity in Physarum motions that influences exact anticipation of states of machines in time. To model this ambiguity, we propose to use rough set models created over transition systems. Rough sets are an appropriate tool to deal with rough (ambiguous, imprecise) concepts in the universe of discourse.
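The rough-set construction the authors apply can be stated compactly: given a partition of states into indiscernibility blocks, a target set is bracketed by a lower approximation (blocks entirely inside it) and an upper approximation (blocks merely touching it). A toy sketch with hypothetical machine states, not taken from the paper:

```python
def approximations(blocks, target):
    """Lower approximation: union of blocks fully contained in target.
    Upper approximation: union of blocks intersecting target."""
    lower, upper = set(), set()
    for block in blocks:
        if block <= target:
            lower |= block
        if block & target:
            upper |= block
    return lower, upper

# Hypothetical Physarum-machine states partitioned by observable behaviour
blocks = [{'s1', 's2'}, {'s3'}, {'s4', 's5'}]
target = {'s1', 's2', 's4'}   # states the plasmodium may reach

lower, upper = approximations(blocks, target)
print(sorted(lower))  # ['s1', 's2']
print(sorted(upper))  # ['s1', 's2', 's4', 's5']
```

The gap between the two approximations ({'s4', 's5'} here) is exactly the ambiguity in anticipating the machine's state that the rough-set model is meant to capture.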
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Hadiyanto Hadiyanto; AJB van Boxtel
2012-01-01
Bread product quality is highly dependent to the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave powers. The model parameters were estimated in a stepwise procedure i.e. first, heat and mass transfer related parameters, then the parameters related to product transformations and finally pro...
Parameter counting in models with global symmetries
Energy Technology Data Exchange (ETDEWEB)
Berger, Joshua [Institute for High Energy Phenomenology, Newman Laboratory of Elementary Particle Physics, Cornell University, Ithaca, NY 14853 (United States)], E-mail: jb454@cornell.edu; Grossman, Yuval [Institute for High Energy Phenomenology, Newman Laboratory of Elementary Particle Physics, Cornell University, Ithaca, NY 14853 (United States)], E-mail: yuvalg@lepp.cornell.edu
2009-05-18
We present rules for determining the number of physical parameters in models with exact flavor symmetries. In such models the total number of parameters (physical and unphysical) needed to describe a matrix is less than in a model without the symmetries. Several toy examples are studied in order to demonstrate the rules. The use of global symmetries in studying the minimally supersymmetric standard model (MSSM) is examined.
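The counting rule at work here is standard: the number of physical parameters equals the total number of parameters minus the number of broken symmetry generators. As a worked instance (my example, not one of the paper's toys), the quark Yukawa sector of the Standard Model:

```python
# Two complex 3x3 Yukawa matrices carry 2 * 2 * 9 = 36 real parameters.
total = 2 * (2 * 9)

# The kinetic terms have a U(3)_Q x U(3)_u x U(3)_d flavor symmetry,
# with 3 * 9 = 27 generators; only baryon number U(1)_B stays unbroken.
generators_flavor = 3 * 9
unbroken = 1
broken = generators_flavor - unbroken

physical = total - broken
print(physical)  # 10: six quark masses, three mixing angles, one CKM phase
```

The same arithmetic, with the symmetry group and unbroken subgroup swapped out, reproduces the parameter counts for the toy models and the MSSM cases the paper examines.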
On parameter estimation in deformable models
DEFF Research Database (Denmark)
Fisker, Rune; Carstensen, Jens Michael
1998-01-01
Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian...... method is based on a modified version of the EM algorithm. Experimental results for a deformable template used for textile inspection are presented...
Cosmological models with constant deceleration parameter
Energy Technology Data Exchange (ETDEWEB)
Berman, M.S.; de Mello Gomide, F.
1988-02-01
Berman presented elsewhere a law of variation for Hubble's parameter that yields constant deceleration parameter models of the universe. By analyzing Einstein, Pryce-Hoyle and Brans-Dicke cosmologies, we derive here the necessary relations in each model, considering a perfect fluid.
On linear models and parameter identifiability in experimental biological systems.
Lamberton, Timothy O; Condon, Nicholas D; Stow, Jennifer L; Hamilton, Nicholas A
2014-10-07
A key problem in the biological sciences is to be able to reliably estimate model parameters from experimental data. This is the well-known problem of parameter identifiability. Here, methods are developed for biologists and other modelers to design optimal experiments to ensure parameter identifiability at a structural level. The main results of the paper are to provide a general methodology for extracting parameters of linear models from an experimentally measured scalar function - the transfer function - and a framework for the identifiability analysis of complex model structures using linked models. Linked models are composed by letting the output of one model become the input to another model which is then experimentally measured. The linked model framework is shown to be applicable to designing experiments to identify the measured sub-model and recover the input from the unmeasured sub-model, even in cases where the unmeasured sub-model is not identifiable. Applications for a set of common model features are demonstrated, and the results combined in an example application to a real-world experimental system. These applications emphasize the insight into answering "where to measure" and "which experimental scheme" questions provided by both the parameter extraction methodology and the linked model framework. The aim is to demonstrate the tools' usefulness in guiding experimental design to maximize parameter information obtained, based on the model structure.
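The linking operation has a simple concrete form for linear systems: connecting two sub-models in series multiplies their transfer functions, so the measured overall transfer function is the product of the two. A sketch with two hypothetical first-order sub-models (coefficients invented for the demo):

```python
import numpy as np

# Sub-models G1(s) = k1 / (s + a1) and G2(s) = k2 / (s + a2), in series
k1, a1 = 2.0, 1.0
k2, a2 = 0.5, 3.0

# The linked transfer function G(s) = G1(s) * G2(s): multiply numerator
# and denominator polynomials (coefficients in descending powers of s).
num = np.polymul([k1], [k2])
den = np.polymul([1.0, a1], [1.0, a2])   # (s + a1)(s + a2)

# Evaluate the linked model at a test frequency s = 1j rad/s
s = 1j
G = np.polyval(num, s) / np.polyval(den, s)
dc_gain = (k1 / a1) * (k2 / a2)
print(abs(G) < dc_gain)  # True: the gain rolls off with frequency
```

Parameter extraction then asks which of k1, a1, k2, a2 can be recovered from the coefficients of `num` and `den` alone, which is exactly the structural identifiability question the paper formalizes.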
Object–Parameter Approaches to Predicting Unknown Data in an Incomplete Fuzzy Soft Set
Directory of Open Access Journals (Sweden)
Liu Yaya
2017-03-01
The research on incomplete fuzzy soft sets is an integral part of the research on fuzzy soft sets and has been initiated recently. In this work, we first point out that an existing approach to predicting unknown data in an incomplete fuzzy soft set suffers from some limitations and then we propose an improved method. The hidden information between both objects and parameters revealed in our approach is more comprehensive. Furthermore, based on the similarity measures of fuzzy sets, a new adjustable object-parameter approach is proposed to predict unknown data in incomplete fuzzy soft sets. Data predicting converts an incomplete fuzzy soft set into a complete one, which makes the fuzzy soft set applicable not only to decision making but also to other areas. Comparative results on rate exchange data sets illustrate that both our improved approach and the new adjustable object-parameter one outperform the existing method with respect to forecasting accuracy.
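One simple way to exploit object-side hidden information, in the spirit of (though not identical to) the approaches above, is a similarity-weighted average: objects whose known membership values resemble the incomplete object contribute more to the predicted entry. The matrix and weighting scheme below are illustrative assumptions.

```python
import numpy as np

def predict_missing(F, i, j):
    """Fill missing entry F[i, j] (NaN) of a fuzzy soft set using a
    similarity-weighted average over the other objects."""
    known = [c for c in range(F.shape[1]) if c != j and not np.isnan(F[i, c])]
    weights, values = [], []
    for r in range(F.shape[0]):
        if r == i or np.isnan(F[r, j]):
            continue
        # similarity = 1 - mean absolute difference on shared parameters
        sim = 1.0 - np.mean(np.abs(F[i, known] - F[r, known]))
        weights.append(sim)
        values.append(F[r, j])
    return float(np.average(values, weights=weights))

# Rows: objects; columns: fuzzy membership degree under each parameter
F = np.array([
    [0.8, 0.7, np.nan],
    [0.8, 0.6, 0.9],
    [0.1, 0.2, 0.3],
])
pred = predict_missing(F, 0, 2)
print(round(pred, 3))  # pulled toward 0.9 by the highly similar object
```

An adjustable variant, as in the paper, would additionally weight parameter-side similarity and expose a tuning knob balancing the object and parameter contributions.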
Trait Characteristics of Diffusion Model Parameters
Directory of Open Access Journals (Sweden)
Anna-Lena Schubert
2016-07-01
Cognitive modeling of response time distributions has seen a huge rise in popularity in individual differences research. In particular, several studies have shown that individual differences in the drift rate parameter of the diffusion model, which reflects the speed of information uptake, are substantially related to individual differences in intelligence. However, if diffusion model parameters are to reflect trait-like properties of cognitive processes, they have to qualify as trait-like variables themselves, i.e., they have to be stable across time and consistent over different situations. To assess their trait characteristics, we conducted a latent state-trait analysis of diffusion model parameters estimated from three response time tasks that 114 participants completed at two laboratory sessions eight months apart. Drift rate, boundary separation, and non-decision time parameters showed great temporal stability over a period of eight months. However, the coefficients of consistency and reliability were only low to moderate and highest for drift rate parameters. These results show that the consistent variance of diffusion model parameters across tasks can be regarded as temporally stable ability parameters. Moreover, they illustrate the need for using broader batteries of response time tasks in future studies on the relationship between diffusion model parameters and intelligence.
Parameter identification in the logistic STAR model
DEFF Research Database (Denmark)
Ekner, Line Elvstrøm; Nejstgaard, Emil
We propose a new and simple parametrization of the so-called speed of transition parameter of the logistic smooth transition autoregressive (LSTAR) model. The new parametrization highlights that a consequence of the well-known identification problem of the speed of transition parameter is that…
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei
2013-09-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
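To make the computational-load point concrete, the sketch below estimates a heat-equation diffusion coefficient by the naive strategy the article improves upon: repeatedly solving the PDE numerically over a grid of candidate parameter values. The solver, grid sizes, and noise level are arbitrary choices for the demo, not the article's methods.

```python
import numpy as np

def simulate_heat(D, nx=41, nt=200, dx=0.025, dt=0.0001):
    """Explicit finite-difference solution of u_t = D * u_xx on [0, 1]
    with u = 0 at the boundaries and a sine-hump initial condition."""
    x = np.linspace(0.0, 1.0, nx)
    u = np.sin(np.pi * x)
    for _ in range(nt):
        u[1:-1] = u[1:-1] + D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# Synthetic noisy measurement generated with a "true" coefficient D = 1.0
rng = np.random.default_rng(2)
u_obs = simulate_heat(1.0) + rng.normal(0.0, 0.002, 41)

# Brute-force least squares over candidate parameter values: one full
# PDE solve per candidate, which is why this scales so poorly.
candidates = np.linspace(0.5, 1.5, 101)
sse = [np.sum((simulate_heat(D) - u_obs) ** 2) for D in candidates]
D_hat = candidates[int(np.argmin(sse))]
print(round(D_hat, 2))  # recovers a value close to the true 1.0
```

The parameter cascading and Bayesian approaches in the article avoid this loop by representing the PDE solution with basis functions, so candidate parameters are scored without a fresh numerical solve each time.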
Application of lumped-parameter models
Energy Technology Data Exchange (ETDEWEB)
Ibsen, Lars Bo; Liingaard, M.
2006-12-15
This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded in a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil. Subsequently, the assembly of the dynamic stiffness matrix for the foundation is considered, and the solution for obtaining the steady state response, when using lumped-parameter models is given. (au)
Level Set Modeling of Transient Electromigration Grooving
Khenner, M.; Averbuch, A.; Israeli, M.; Nathan, M; Glickman, E.
2000-01-01
A numerical investigation of grain-boundary (GB) grooving by means of the Level Set (LS) method is carried out. GB grooving is emerging as a key element of electromigration drift in polycrystalline microelectronic interconnects, as evidenced by a number of recent studies. The purpose of the present study is to provide an efficient numerical simulation, allowing a parametric study of the effect of key physical parameters (GB and surface diffusivities, grain size, current density, etc) on the e...
A Set Theoretical Approach to Maturity Models
DEFF Research Database (Denmark)
Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models ch...
Statefinder parameters in two dark energy models
Panotopoulos, Grigoris
2007-01-01
The statefinder parameters ($r,s$) in two dark energy models are studied. In the first, we discuss in four-dimensional General Relativity a two fluid model, in which dark energy and dark matter are allowed to interact with each other. In the second model, we consider the DGP brane model generalized by taking a possible energy exchange between the brane and the bulk into account. We determine the values of the statefinder parameters that correspond to the unique attractor of the system at hand. Furthermore, we produce plots in which we show $s,r$ as functions of red-shift, and the ($s-r$) plane for each model.
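For reference, the statefinder pair discussed above is, in the standard convention of Sahni and collaborators, built from the scale factor $a$, the Hubble parameter $H$, and the deceleration parameter $q$:

```latex
r \equiv \frac{\dddot{a}}{a H^{3}}, \qquad
s \equiv \frac{r - 1}{3\left(q - \tfrac{1}{2}\right)}, \qquad
q \equiv -\frac{\ddot{a}}{a H^{2}}
```

For the spatially flat $\Lambda$CDM model these reduce to the fixed point $(r, s) = (1, 0)$, which is why trajectories in the $(s, r)$ plane serve to discriminate dark energy models.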
Wind Farm Decentralized Dynamic Modeling With Parameters
DEFF Research Database (Denmark)
Soltani, Mohsen; Shakeri, Sayyed Mojtaba; Grunnet, Jacob Deleuran
2010-01-01
Development of dynamic wind flow models for wind farms is part of the research in the European FP7 project AEOLUS. The objective of this report is to provide decentralized dynamic wind flow models with parameters. The report presents a structure for decentralized flow models with inputs from...
Delineating Parameter Unidentifiabilities in Complex Models
Raman, Dhruva V; Papachristodoulou, Antonis
2016-01-01
Scientists use mathematical modelling to understand and predict the properties of complex physical systems. In highly parameterised models there often exist relationships between parameters over which model predictions are identical, or nearly so. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, and the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast timescale subsystems, as well as the regimes in which such approximations are valid. We base our algorithm on a novel quantification of regional parametric sensitivity: multiscale sloppiness. Traditional...
Transformations among CE–CVM model parameters for multicomponent systems
Indian Academy of Sciences (India)
B Nageswara Sarma; Shrikant Lele
2005-06-01
In the development of thermodynamic databases for multicomponent systems using the cluster expansion-cluster variation methods, we need a consistent procedure for expressing the model parameters (CECs) of a higher order system in terms of those of the lower order subsystems and of an independent set of parameters which exclusively represent interactions of the higher order system. Such a procedure is presented in detail in this communication. Furthermore, the details of the transformations required to express the model parameters in one basis in terms of those defined in another basis for the same system are also presented.
Optimal parameters for the FFA-Beddoes dynamic stall model
Energy Technology Data Exchange (ETDEWEB)
Bjoerck, A.; Mert, M. [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H.A. [Risoe National Lab., Roskilde (Denmark)
1999-03-01
Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. Models incorporated in aero-elastic programs are of a semi-empirical nature, so the resulting aerodynamic forces depend on the values used for the semi-empirical parameters. In this paper a study of finding appropriate parameters to use with the Beddoes-Leishman model is discussed. Minimisation of the 'tracking error' between results from 2D wind tunnel tests and simulations with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)
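The tracking-error minimisation described above can be sketched in a few lines. This is a hedged illustration only: the first-order lag model, the step input and the grid search all stand in for the actual Beddoes-Leishman equations and wind tunnel data, which are not reproduced in the abstract.

```python
# Illustrative sketch: recover a semi-empirical model parameter by minimising
# the sum of squared "tracking errors" against reference data.  The model and
# all values are assumptions, not the Beddoes-Leishman formulation.

def simulate(tau, u, dt=0.01):
    """First-order lag x' = (u - x)/tau, explicit Euler integration."""
    x, out = 0.0, []
    for uk in u:
        x += dt * (uk - x) / tau
        out.append(x)
    return out

def tracking_error(tau, u, reference):
    return sum((m - r) ** 2 for m, r in zip(simulate(tau, u), reference))

u = [1.0] * 500                      # step input, stands in for tunnel data
reference = simulate(0.25, u)        # synthetic "measurements", true tau = 0.25

# crude grid search over the parameter
best = min((tracking_error(t / 100, u, reference), t / 100)
           for t in range(5, 100))
print(best[1])                       # recovers tau = 0.25
```

A real study would use 2-D aerofoil data and a gradient-based or global optimiser, but the structure (simulate, compare, minimise) is the same.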
Parameter Estimation, Model Reduction and Quantum Filtering
Chase, Bradley A
2009-01-01
This dissertation explores the topics of parameter estimation and model reduction in the context of quantum filtering. Chapters 2 and 3 provide a review of classical and quantum probability theory, stochastic calculus and filtering. Chapter 4 studies the problem of quantum parameter estimation and introduces the quantum particle filter as a practical computational method for parameter estimation via continuous measurement. Chapter 5 applies these techniques in magnetometry and studies the estimator's uncertainty scalings in a double-pass atomic magnetometer. Chapter 6 presents an efficient feedback controller for continuous-time quantum error correction. Chapter 7 presents an exact model of symmetric processes of collective qubit systems.
Modelling occupants’ heating set-point preferences
DEFF Research Database (Denmark)
Andersen, Rune Vinther; Olesen, Bjarne W.; Toftum, Jørn
2011-01-01
Discrepancies between simulated and actual occupant behaviour can offset the actual energy consumption by several orders of magnitude compared to simulation results. Thus, there is a need to set up guidelines to increase the reliability of forecasts of environmental conditions and energy consumption. Simultaneous measurement of the set-points of thermostatic radiator valves (TRVs) and of indoor and outdoor environment characteristics was carried out in 15 dwellings in Denmark in 2008. Linear regression was used to infer a model of occupants’ interactions with TRVs. This model could easily...
Numerical modeling of piezoelectric transducers using physical parameters.
Cappon, Hans; Keesman, Karel J
2012-05-01
Design of ultrasonic equipment is frequently facilitated with numerical models. These numerical models, however, need a calibration step, because usually not all characteristics of the materials used are known. Characterization of material properties combined with numerical simulations and experimental data can be used to acquire valid estimates of the material parameters. In our design application, a finite element (FE) model of an ultrasonic particle separator, driven by an ultrasonic transducer in thickness mode, is required. A limited set of material parameters for the piezoelectric transducer were obtained from the manufacturer, thus preserving prior physical knowledge to a large extent. The remaining unknown parameters were estimated from impedance analysis with a simple experimental setup combined with a numerical optimization routine using 2-D and 3-D FE models. Thus, a full set of physically interpretable material parameters was obtained for our specific purpose. The approach provides adequate accuracy of the estimates of the material parameters, near 1%. These parameter estimates will subsequently be applied in future design simulations, without the need to go through an entire series of characterization experiments. Finally, a sensitivity study showed that small variations of 1% in the main parameters caused changes near 1% in the eigenfrequency, but changes up to 7% in the admittance peak, thus influencing the efficiency of the system. Temperature will already cause these small variations in response; thus, a frequency control unit is required when actually manufacturing an efficient ultrasonic separation system.
Institute of Scientific and Technical Information of China (English)
Youlong XIA; Zong-Liang YANG; Paul L. STOFFA; Mrinal K. SEN
2005-01-01
Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges, as well as model complexities, on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by applying Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes' theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted and 50,000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. Use of parameter ranges and model complexities has significant impacts on the frequency distributions of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.
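The annealing component of a BSI-style search can be illustrated compactly. The sketch below is a plain Metropolis-style simulated annealing loop, not the "very fast simulated annealing" variant the study uses, and the objective and cooling schedule are assumptions chosen for readability.

```python
import math
import random

# Hedged sketch of simulated annealing: accept all improvements, accept
# deteriorations with a probability that shrinks as the temperature cools.
# Objective, schedule and step size are illustrative assumptions.

def anneal(f, x0, t0=1.0, cooling=0.995, steps=2000, seed=1):
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)     # local perturbation
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc                  # Metropolis acceptance
        t *= cooling                          # geometric cooling
    return x, fx

x, fx = anneal(lambda v: (v - 3.0) ** 2, x0=-5.0)
print(round(x, 2))   # should land near the minimum at 3.0
```

In the BSI setting the objective would be a misfit between simulated and observed surface fluxes, and the accepted samples (weighted by importance sampling) would feed the posterior density estimates.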
Delineating parameter unidentifiabilities in complex models
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
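The local, Fisher-information view of identifiability that this abstract argues is insufficient on its own can still be demonstrated in a few lines. The toy model below is an assumption for illustration: y(t) = a·b·t depends only on the product a·b, so the 2×2 Fisher matrix is singular and its near-zero eigenvalue flags the structural unidentifiability.

```python
# Hedged sketch: finite-difference sensitivities and the Fisher information
# matrix F = S^T S (unit noise) for a deliberately unidentifiable toy model.

def sensitivities(a, b, times, h=1e-6):
    """Central-difference derivatives of y(t) = a*b*t w.r.t. each parameter."""
    rows = []
    for t in times:
        da = ((a + h) * b * t - (a - h) * b * t) / (2 * h)
        db = (a * (b + h) * t - a * (b - h) * t) / (2 * h)
        rows.append((da, db))
    return rows

def fisher(rows):
    f11 = sum(r[0] * r[0] for r in rows)
    f12 = sum(r[0] * r[1] for r in rows)
    f22 = sum(r[1] * r[1] for r in rows)
    return f11, f12, f22

f11, f12, f22 = fisher(sensitivities(a=2.0, b=3.0, times=[1, 2, 3]))
tr, det = f11 + f22, f11 * f22 - f12 * f12   # eigenvalues of a symmetric 2x2
lam_min = (tr - (tr * tr - 4 * det) ** 0.5) / 2
print(abs(lam_min) < 1e-3)   # near-zero eigenvalue: only a*b is identifiable
```

Multiscale sloppiness, as the abstract notes, extends this local picture to finite perturbations, which a single Fisher matrix cannot capture.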
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
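The model-averaging step described above can be sketched with information-criterion weights. This is a hedged illustration: the report derives posterior model probabilities from calibration, while the variogram names, IC values and predictions below are invented solely to show the mechanics.

```python
import math

# Hedged sketch: turn information-criterion scores (lower is better) into
# normalised model weights, drop negligible models, and average predictions.
# All numbers here are illustrative assumptions, not the report's results.

ic = {"exponential": 104.2, "spherical": 103.1, "power": 119.8}

best = min(ic.values())
raw = {m: math.exp(-0.5 * (v - best)) for m, v in ic.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}

# a model with negligible weight ("power" here) contributes almost nothing
predictions = {"exponential": -13.2, "spherical": -13.6, "power": -12.1}
averaged = sum(weights[m] * predictions[m] for m in ic)
print({m: round(w, 3) for m, w in weights.items()})
```

The averaged prediction lies between the individual models' values, weighted toward the better-supported models rather than committing to a single "best" one.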
Application of lumped-parameter models
DEFF Research Database (Denmark)
Ibsen, Lars Bo; Liingaard, Morten
This technical report concerns the lumped-parameter models for a suction caisson with a ratio of skirt length to foundation diameter of 1/2, embedded in a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil (section 1.1). Subse...
Models and parameters for environmental radiological assessments
Energy Technology Data Exchange (ETDEWEB)
Miller, C.W. (ed.)
1984-01-01
This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)
Global-scale regionalization of hydrologic model parameters
Beck, Hylke E.; van Dijk, Albert I. J. M.; de Roo, Ad; Miralles, Diego G.; McVicar, Tim R.; Schellekens, Jaap; Bruijnzeel, L. Adrian
2016-05-01
Current state-of-the-art models typically applied at continental to global scales (hereafter called macroscale) tend to use a priori parameters, resulting in suboptimal streamflow (Q) simulation. For the first time, a scheme for regionalization of model parameters at the global scale was developed. We used data from a diverse set of 1787 small-to-medium sized catchments (10-10,000 km2) and the simple conceptual HBV model to set up and test the scheme. Each catchment was calibrated against observed daily Q, after which 674 catchments with high calibration and validation scores, and thus presumably good-quality observed Q and forcing data, were selected to serve as donor catchments. The calibrated parameter sets for the donors were subsequently transferred to 0.5° grid cells with similar climatic and physiographic characteristics, resulting in parameter maps for HBV with global coverage. For each grid cell, we used the 10 most similar donor catchments, rather than the single most similar donor, and averaged the resulting simulated Q, which enhanced model performance. The 1113 catchments not used as donors were used to independently evaluate the scheme. The regionalized parameters outperformed spatially uniform (i.e., averaged calibrated) parameters for 79% of the evaluation catchments. Substantial improvements were evident for all major Köppen-Geiger climate types and even for evaluation catchments > 5000 km distant from the donors. The median improvement was about half of the performance increase achieved through calibration. HBV with regionalized parameters outperformed nine state-of-the-art macroscale models, suggesting these might also benefit from the new regionalization scheme. The produced HBV parameter maps including ancillary data are available via www.gloh2o.org.
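The donor-selection step of the regionalization scheme can be sketched as a nearest-neighbour search in descriptor space. The descriptors, donor values and k = 2 below are assumptions for illustration; the paper uses the 10 most similar of 674 donors and, importantly, averages the simulated streamflow from each donor's parameter set rather than averaging the parameters themselves.

```python
# Hedged sketch: rank donor catchments by similarity of climatic/physiographic
# descriptors to a target grid cell and keep the k most similar.  Data are
# illustrative assumptions.

donors = {                      # name: (aridity index, forest fraction)
    "A": (0.4, 0.7),
    "B": (1.8, 0.1),
    "C": (0.5, 0.62),
    "D": (2.2, 0.2),
}
cell = (0.45, 0.65)             # descriptors of the ungauged target grid cell

def distance(name):
    return sum((a - b) ** 2 for a, b in zip(donors[name], cell)) ** 0.5

k_best = sorted(donors, key=distance)[:2]
print(k_best)                   # the two climatically closest donors: C, then A
```

Each selected donor's calibrated HBV parameters would then drive one model run for the cell, and the simulated discharges would be averaged.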
Nonparametric Comparison of Two Dynamic Parameter Setting Methods in a Meta-Heuristic Approach
Directory of Open Access Journals (Sweden)
Seyhun HEPDOGAN
2007-10-01
Meta-heuristics are commonly used to solve combinatorial problems in practice. Many approaches provide very good quality solutions in a short amount of computational time; however, most meta-heuristics use parameters to tune the performance of the meta-heuristic for particular problems, and the selection of these parameters before solving the problem can require much time. This paper investigates the problem of setting parameters using a typical meta-heuristic called Meta-RaPS (Metaheuristic for Randomized Priority Search). Meta-RaPS is a promising meta-heuristic optimization method that has been applied to different types of combinatorial optimization problems and has achieved very good performance compared to other meta-heuristic techniques. To solve a combinatorial problem, Meta-RaPS uses two well-defined stages at each iteration: construction and local search. After a number of iterations, the best solution is reported. Meta-RaPS performance depends on the fine tuning of two main parameters, priority percentage and restriction percentage, which are used during the construction stage. This paper presents two different dynamic parameter setting methods for Meta-RaPS. These dynamic parameter setting approaches tune the parameters while a solution is being found. To compare the two approaches, nonparametric statistical approaches are utilized since the solutions are not normally distributed. Results from both dynamic parameter setting methods are reported.
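The role of the two Meta-RaPS control parameters in the construction stage can be sketched as follows. The toy "jobs with priorities" problem, the percentage values and the selection rule details are assumptions; only the general mechanism (take the best-priority element with probability `priority`, otherwise pick randomly from a `restriction`-defined candidate list) follows the usual Meta-RaPS description.

```python
import random

# Hedged sketch of a Meta-RaPS-style construction stage for a toy ordering
# problem.  All data and parameter values are illustrative assumptions.

def construct(priorities, priority=0.6, restriction=0.2, seed=7):
    rng = random.Random(seed)
    remaining = dict(priorities)
    order = []
    while remaining:
        best = max(remaining.values())
        if rng.random() < priority:
            pick = max(remaining, key=remaining.get)   # greedy choice
        else:
            # candidate list: items within `restriction` of the best priority
            pool = [j for j, p in remaining.items()
                    if p >= best * (1 - restriction)]
            pick = rng.choice(pool)
        order.append(pick)
        del remaining[pick]
    return order

print(construct({"j1": 9.0, "j2": 7.8, "j3": 3.0, "j4": 8.5}))
```

A dynamic parameter-setting method, as studied in the paper, would adjust `priority` and `restriction` between iterations based on solution quality instead of fixing them up front.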
Sensitivity Analysis of the Optimal Parameter Settings of an LTE Packet Scheduler
Fernandez Diaz, I.; Litjens, R.; Berg, J.L. van den; Dimitrova, D.C.; Spaey, K.
2010-01-01
Advanced packet scheduling schemes in 3G/3G+ mobile networks provide one or more parameters to optimise the trade-off between QoS and resource efficiency. In this paper we study the sensitivity of the optimal parameter setting for packet scheduling in LTE radio networks with respect to various traff
Directory of Open Access Journals (Sweden)
Baker Syed
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method, which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison.
Baker, Syed Murtuza; Poskar, C Hart; Junker, Björn H
2011-10-11
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison.
Parameter estimation in stochastic rainfall-runoff models
DEFF Research Database (Denmark)
Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur
2006-01-01
The parameter estimation method is a maximum likelihood method (ML) where the likelihood function is evaluated using a Kalman filter technique. The ML method estimates the parameters, including the noise terms, in a prediction error setting, i.e. the sum of squared prediction errors is minimized. For comparison, the parameters are also estimated by an output error method, where the sum of squared simulation errors is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulations. Hence, depending on the purpose, it is possible to select whether the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series...
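The prediction-error likelihood evaluation at the heart of such a scheme can be sketched for a scalar state. This is a hedged, minimal illustration: a random-walk state with observation noise, filtered by a scalar Kalman filter whose innovations and innovation variances yield the Gaussian log-likelihood that an ML routine would maximise. It is not the paper's rainfall-runoff model.

```python
import math

# Hedged sketch: one-step prediction errors (innovations) from a scalar
# Kalman filter, accumulated into a Gaussian log-likelihood.  Model and
# data are illustrative assumptions.

def log_likelihood(y, q, r, x0=0.0, p0=1.0):
    x, p, ll = x0, p0, 0.0
    for yk in y:
        p = p + q                      # time update (random-walk state)
        s = p + r                      # innovation variance
        e = yk - x                     # one-step prediction error
        ll += -0.5 * (math.log(2 * math.pi * s) + e * e / s)
        k = p / s                      # Kalman gain
        x = x + k * e                  # measurement update
        p = (1 - k) * p
    return ll

y = [0.1, 0.3, 0.2, 0.5, 0.4]
# a crude 1-D search over the observation-noise variance r
best_r = max((log_likelihood(y, q=0.01, r=r / 100), r / 100)
             for r in range(1, 50))[1]
print(best_r > 0)
```

In the paper's setting the state equation would be the stochastic rainfall-runoff dynamics and the search would run over all model and noise parameters simultaneously.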
Response analysis based on smallest interval-set of parameters for structures with uncertainty
Institute of Scientific and Technical Information of China (English)
Xiao-jun WANG; Lei WANG; Zhi-ping QIU
2012-01-01
An integral analytic process from quantification to propagation based on limited uncertain parameters is investigated to deal with practical engineering problems. A new method, using the smallest interval-set/hyper-rectangle containing all experimental data, is proposed to quantify the parameter uncertainties. With the smallest parameter interval-set, the uncertainty propagation evaluation of the most favorable response and the least favorable response of the structures is studied based on interval analysis. The relationship between the proposed interval analysis method (IAM) and the classical IAM is discussed. Two numerical examples are presented to demonstrate the feasibility and validity of the proposed method.
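The quantification-then-propagation idea can be sketched in a few lines. The sample data, the response function and its monotonicity are assumptions for illustration; the paper's method handles general structural responses, not just this monotone case.

```python
# Hedged sketch: (1) the smallest interval containing all experimental
# samples of each uncertain parameter, (2) interval propagation through a
# response that is monotone in each parameter, so the extreme (most/least
# favourable) responses sit at interval endpoints.  Data are illustrative.

samples_E = [198.0, 205.0, 201.5, 203.2]   # e.g. a stiffness parameter
samples_F = [9.8, 10.4, 10.1]              # e.g. a load parameter

E = (min(samples_E), max(samples_E))       # smallest enclosing intervals
F = (min(samples_F), max(samples_F))

# toy response u = c*F/E: increasing in F, decreasing in E
c = 100.0
u_min = c * F[0] / E[1]                    # least favourable combination
u_max = c * F[1] / E[0]                    # most favourable combination
print(round(u_min, 3), round(u_max, 3))
```

For non-monotone responses the endpoint rule fails, which is where the interval analysis machinery the paper develops becomes necessary.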
Determining Relative Importance and Effective Settings for Genetic Algorithm Control Parameters.
Mills, K L; Filliben, J J; Haines, A L
2015-01-01
Setting the control parameters of a genetic algorithm to obtain good results is a long-standing problem. We define an experiment design and analysis method to determine relative importance and effective settings for control parameters of any evolutionary algorithm, and we apply this method to a classic binary-encoded genetic algorithm (GA). Subsequently, as reported elsewhere, we applied the GA, with the control parameter settings determined here, to steer a population of cloud-computing simulators toward behaviors that reveal degraded performance and system collapse. GA-steered simulators could serve as a design tool, empowering system engineers to identify and mitigate low-probability, costly failure scenarios. In the existing GA literature, we uncovered conflicting opinions and evidence regarding key GA control parameters and effective settings to adopt. Consequently, we designed and executed an experiment to determine relative importance and effective settings for seven GA control parameters, when applied across a set of numerical optimization problems drawn from the literature. This paper describes our experiment design, analysis, and results. We found that crossover most significantly influenced GA success, followed by mutation rate and population size and then by rerandomization point and elite selection. Selection method and the precision used within the chromosome to represent numerical values had least influence. Our findings are robust over 60 numerical optimization problems.
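The control parameters the study ranks (crossover, mutation rate, population size, elite selection) appear explicitly as arguments in even a minimal binary-encoded GA. The sketch below is an assumption-laden illustration: the OneMax objective, truncation selection and all default values are chosen for brevity, not taken from the paper's experiment design.

```python
import random

# Hedged sketch of a binary-encoded GA exposing the control parameters
# discussed above.  Objective and settings are illustrative assumptions.

def ga(n_bits=20, pop_size=30, p_cross=0.9, p_mut=0.02, elites=2,
       generations=60, seed=3):
    rng = random.Random(seed)
    fitness = sum                                   # OneMax: count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = [row[:] for row in pop[:elites]]      # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # truncation selection
            if rng.random() < p_cross:              # one-point crossover
                cut = rng.randrange(1, n_bits)
                a = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in a]  # mutation
            nxt.append(child)
        pop = nxt
    return max(fitness(ind) for ind in pop)

print(ga())   # best OneMax score found (maximum possible is n_bits = 20)
```

An experiment design like the paper's would sweep these arguments over levels and attribute the variation in final fitness to each parameter.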
Estimation of Model Parameters for Steerable Needles
Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.
2010-01-01
Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%. PMID:21643451
Estimation of Model Parameters for Steerable Needles.
Park, Wooram; Reed, Kyle B; Okamura, Allison M; Chirikjian, Gregory S
2010-01-01
Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%.
Iterative integral parameter identification of a respiratory mechanics model
Directory of Open Access Journals (Sweden)
Schranz Christoph
2012-07-01
Background: Patient-specific respiratory mechanics models can support the evaluation of optimal lung-protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values be identified with information available at the bedside. Multiple linear regression and gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates; thus, they are difficult to apply at the bedside to support therapeutic decisions. Methods: An iterative integral parameter identification method is applied to a second-order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. Results: The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulated data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimates and converged successfully in each case tested. Conclusion: These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
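The core integral idea behind such identification methods can be shown on a one-compartment example: integrating the model equation turns noisy derivative fitting into regression on smooth integrals. This is a hedged sketch only; the paper's method is iterative and targets a second-order respiratory mechanics model, neither of which is reproduced here.

```python
# Hedged sketch: identify (a, b) in x' = -a*x + b*u by integrating to
# x(t) - x(0) = -a*∫x + b*∫u and solving the 2x2 least-squares normal
# equations.  The system and values are illustrative assumptions.

def cumtrapz(y, dt):
    out, acc = [0.0], 0.0
    for i in range(1, len(y)):
        acc += 0.5 * (y[i] + y[i - 1]) * dt
        out.append(acc)
    return out

dt, a_true, b_true = 0.01, 2.0, 5.0
u = [1.0] * 400
x = [0.0]
for k in range(399):                       # simulate the "measurements"
    x.append(x[-1] + dt * (-a_true * x[-1] + b_true * u[k]))

ix, iu = cumtrapz(x, dt), cumtrapz(u, dt)
s11 = sum(v * v for v in ix)
s12 = sum(v * w for v, w in zip(ix, iu))
s22 = sum(w * w for w in iu)
r1 = sum(v * (xk - x[0]) for v, xk in zip(ix, x))
r2 = sum(w * (xk - x[0]) for w, xk in zip(iu, x))
det = s11 * s22 - s12 * s12
minus_a = (r1 * s22 - s12 * r2) / det      # Cramer's rule, coefficient of ∫x
b_est = (s11 * r2 - s12 * r1) / det        # coefficient of ∫u
print(round(-minus_a, 1), round(b_est, 1))  # close to the true (2.0, 5.0)
```

Because integration attenuates measurement noise while differentiation amplifies it, this formulation is the source of the robustness the abstract reports.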
Analysis of Modeling Parameters on Threaded Screws.
Energy Technology Data Exchange (ETDEWEB)
Vigil, Miquela S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brake, Matthew Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vangoethem, Douglas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-06-01
Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to model these bolted joints appropriately. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper will explore different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influences of these parameters and will aid in finding the optimal method to model bolted connections.
Parameter Estimation in Stochastic Grey-Box Models
DEFF Research Database (Denmark)
Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay
2004-01-01
An efficient and flexible parameter estimation scheme for grey-box models, in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise, is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool...
SPOTting model parameters using a ready-made Python package
Houska, Tobias; Kraft, Philipp; Breuer, Lutz
2015-04-01
The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, like the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables users to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules, to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable to analyze a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for
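One of the samplers listed above, Latin Hypercube Sampling, can be sketched from scratch. To be clear, this is not the SPOT package API; it is a minimal stdlib illustration of the stratified-sampling idea, with assumed parameter bounds.

```python
import random

# Hedged sketch of Latin Hypercube Sampling over uniform prior ranges:
# one stratum per sample in each dimension, jittered within the stratum,
# then shuffled so strata are paired randomly across dimensions.

def latin_hypercube(bounds, n, seed=42):
    rng = random.Random(seed)
    dims = []
    for lo, hi in bounds:
        col = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        dims.append(col)
    return list(zip(*dims))

samples = latin_hypercube([(0.0, 1.0), (10.0, 20.0)], n=5)
print(len(samples))   # 5 parameter sets, one per stratum in each dimension
```

Compared to plain Monte Carlo, every marginal range is guaranteed to be covered evenly, which is why LHS is a common default for parameter screening.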
The Lund Model at Nonzero Impact Parameter
Janik, R A; Janik, Romuald A.; Peschanski, Robi
2003-01-01
We extend the formulation of the longitudinal 1+1 dimensional Lund model to nonzero impact parameter using the minimal area assumption. Complete formulae for the string breaking probability and the momenta of the produced mesons are derived using the string worldsheet Minkowskian helicoid geometry. For strings stretched into the transverse dimension, we find a probability distribution whose slope is linear in m_T, similar to the statistical models but without any thermalization assumptions.
IMPROVEMENT OF FLUID PIPE LUMPED PARAMETER MODEL
Institute of Scientific and Technical Information of China (English)
Kong Xiaowu; Wei Jianhua; Qiu Minxiu; Wu Genmao
2004-01-01
The traditional lumped parameter model of a fluid pipe is introduced and its drawbacks are pointed out. Furthermore, two suggestions are put forward to remove these drawbacks. Firstly, the structure of the equivalent circuit is modified, and then the evaluation of the equivalent fluid resistance is changed to take the frequency-dependent friction into account. Both simulation and experiment prove that this model precisely characterizes the dynamic behavior of fluid in a pipe.
SIM parameter-based security for mobile e-commerce settings
Directory of Open Access Journals (Sweden)
Francisco Orlando Martínez Pabón
2010-04-01
Security requirements are more demanding in the e-commerce domain. However, mobile e-commerce settings not only insist on security requirements, they also require a balance between security levels and the hardware and usability constraints of the device. These features call for models with a simple authentication and authorisation scheme that also ensures information integrity for each e-transaction. The Mobile and Wireless Applications’ Development Interest Group W@Pcolombia thus developed the P3SIM platform so that mobile applications might include SIM parameter-based security features. The P3SIM platform’s framework and compilation and simulation settings combine the advantages of identification provided by the SIM module with the security features provided by the SATSA and Java Card APIs for Java ME environments, one of the most-used platforms for mobile application development. Developing an m-commerce-based prototype not only shows the platform’s ability to operate in secure environments, it also shows its ability to comply with environmental security requirements.
SPOTting Model Parameters Using a Ready-Made Python Package.
Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz
2015-01-01
The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterizing the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
Condition Parameter Modeling for Anomaly Detection in Wind Turbines
Directory of Open Access Journals (Sweden)
Yonglong Yan
2014-05-01
Data collected from the supervisory control and data acquisition (SCADA) system, used widely in wind farms to obtain operational and condition information about wind turbines (WTs), is of great significance for anomaly detection in wind turbines. The paper presents a novel model for wind turbine anomaly detection based mainly on SCADA data and a back-propagation neural network (BPNN) for automatic selection of the condition parameters. The SCADA data sets are determined through analysis of the cumulative probability distribution of wind speed and the relationship between output power and wind speed. The automatic BPNN-based parameter selection reduces redundant parameters for anomaly detection in wind turbines. Through investigation of cases of WT faults, the validity of the automatic parameter selection-based model for WT anomaly detection is verified.
Consistent Stochastic Modelling of Meteocean Design Parameters
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Sterndorff, M. J.
2000-01-01
Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current velocity, and water level is presented. The stochastic model includes statistical uncertainty and dependency between the four stochastic variables. Further, a new stochastic model for annual maximum directional significant wave heights is presented. The model includes dependency between the maximum wave height from neighboring directional sectors. Numerical examples are presented where the models are calibrated using the Maximum Likelihood method to data from the central part of the North Sea. The calibration of the directional distributions is made such that the stochastic model for the omnidirectional...
Oby, Emily R.; Perel, Sagi; Sadtler, Patrick T.; Ruff, Douglas A.; Mischel, Jessica L.; Montez, David F.; Cohen, Marlene R.; Batista, Aaron P.; Chase, Steven M.
2016-06-01
Objective. A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach. We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results. The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance. How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent
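The core of the procedure, sweeping the voltage detection threshold and counting the resulting crossing events, can be illustrated on synthetic data. The trace, injected spike depth and thresholds below are invented for illustration and do not reproduce the authors' recordings:

```python
import random

def count_crossings(trace, threshold):
    """Count downward threshold-crossing events in a voltage trace."""
    events = 0
    below = False
    for v in trace:
        if v < threshold and not below:
            events += 1
            below = True
        elif v >= threshold:
            below = False
    return events

# synthetic extracellular trace: unit Gaussian noise plus negative spikes
rng = random.Random(0)
trace = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
for start in range(100, 10_000, 500):   # inject 20 spikes, 8 units deep
    trace[start] -= 8.0

# sweep the detection threshold: permissive thresholds admit noise events,
# conservative ones admit (almost) only the injected spikes
for thr in (-3.0, -4.0, -5.0, -6.0):
    print(thr, count_crossings(trace, thr))
```

In an information-extraction setting one would replace the raw event counts with a measure of how well the events decode the parameter of interest at each threshold.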
A mesoscopic network model for permanent set in crosslinked elastomers
Energy Technology Data Exchange (ETDEWEB)
Weisgraber, T H; Gee, R H; Maiti, A; Clague, D S; Chinn, S; Maxwell, R S
2009-01-29
A mesoscopic computational model for polymer networks and composites is developed as a coarse-grained representation of the composite microstructure. Unlike more complex molecular dynamics simulations, the model only considers the effects of crosslinks on mechanical behavior. The elastic modulus, which depends only on the crosslink density and parameters in the bond potential, is consistent with rubber elasticity theory, and the network response satisfies the independent network hypothesis of Tobolsky. The model, when applied to a commercial filled silicone elastomer, quantitatively reproduces the experimental permanent set and stress-strain response due to changes in the crosslinked network from irradiation.
Order Parameters of the Dilute A Models
Warnaar, S O; Seaton, K A; Nienhuis, B
1993-01-01
The free energy and local height probabilities of the dilute A models with broken $\mathbb{Z}_2$ symmetry are calculated analytically using inversion and corner transfer matrix methods. These models possess four critical branches. The first two branches provide new realisations of the unitary minimal series and the other two branches give a direct product of this series with an Ising model. We identify the integrable perturbations which move the dilute A models away from the critical limit. Generalised order parameters are defined and their critical exponents extracted. The associated conformal weights are found to occur on the diagonal of the relevant Kac table. In an appropriate regime the dilute A$_3$ model lies in the universality class of the Ising model in a magnetic field. In this case we obtain the magnetic exponent $\delta=15$ directly, without the use of scaling relations.
Particle filters for random set models
Ristic, Branko
2013-01-01
“Particle Filters for Random Set Models” presents coverage of state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based on the Monte Carlo statistical method. The resulting algorithms, known as particle filters, in the last decade have become one of the essential tools for stochastic filtering, with applications ranging from navigation and autonomous vehicles to bio-informatics and finance. While particle filters have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. These recent developments have dramatically widened the scope of applications, from single to multiple appearing/disappearing objects, from precise to imprecise measurements and measurement models. This book...
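A minimal bootstrap (SIR) particle filter of the kind the book builds on can be sketched for a scalar random walk observed in Gaussian noise; all model constants here are illustrative assumptions, and the random-set extensions the book covers are not reproduced:

```python
import math
import random

def bootstrap_filter(measurements, n_particles, rng,
                     process_sd=1.0, meas_sd=1.0):
    """Bootstrap (SIR) particle filter for a 1-D random walk observed in
    Gaussian noise; returns the posterior-mean estimate at each step."""
    particles = [0.0] * n_particles
    estimates = []
    for z in measurements:
        # propagate every particle through the random-walk process model
        particles = [p + rng.gauss(0.0, process_sd) for p in particles]
        # weight by the Gaussian measurement likelihood
        weights = [math.exp(-0.5 * ((z - p) / meas_sd) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # multinomial resampling to avoid weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

rng = random.Random(1)
truth, measurements, x = [], [], 0.0
for _ in range(50):
    x += rng.gauss(0.0, 1.0)
    truth.append(x)
    measurements.append(x + rng.gauss(0.0, 1.0))
est = bootstrap_filter(measurements, 500, rng)
rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth))
print(round(rmse, 2))
```

The filtered estimate should track the hidden walk noticeably better than the raw measurements, whose error standard deviation is 1 by construction.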
Testing Linear Models for Ability Parameters in Item Response Models
Glas, Cees A.W.; Hendrawan, Irene
2005-01-01
Methods for testing hypotheses concerning the regression parameters in linear models for the latent person parameters in item response models are presented. Three tests are outlined: a likelihood ratio test, a Lagrange multiplier test and a Wald test. The tests are derived in a marginal maximum likelihood framework.
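Of the three tests, the likelihood ratio test is the simplest to sketch. The Gaussian-mean example below is a toy stand-in for the item response setting: it compares a full model (free mean) with a reduced model (mean fixed at zero), using the 5% chi-square(1) critical value 3.841; the data and seed are invented:

```python
import math
import random

def gauss_loglik(data, mu, sigma=1.0):
    """Log-likelihood of i.i.d. N(mu, sigma^2) data."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - 0.5 * ((x - mu) / sigma) ** 2 for x in data)

rng = random.Random(7)
data = [rng.gauss(0.4, 1.0) for _ in range(200)]   # true mean is 0.4

mu_hat = sum(data) / len(data)                     # MLE under the full model
# LR statistic: twice the log-likelihood gap between full and reduced model
lr = 2.0 * (gauss_loglik(data, mu_hat) - gauss_loglik(data, 0.0))

CHI2_1_95 = 3.841   # chi-square(1) critical value at the 5% level
print(round(lr, 2), lr > CHI2_1_95)
```

For this model the statistic reduces analytically to n times the squared sample mean, which makes the computation easy to verify by hand.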
Quantitative evaluation of ozone and selected climate parameters in a set of EMAC simulations
Directory of Open Access Journals (Sweden)
M. Righi
2015-03-01
Four simulations with the ECHAM/MESSy Atmospheric Chemistry (EMAC) model have been evaluated with the Earth System Model Validation Tool (ESMValTool) to identify differences in simulated ozone and selected climate parameters that resulted from (i) different setups of the EMAC model (nudged vs. free-running) and (ii) different boundary conditions (emissions, sea surface temperatures (SSTs) and sea ice concentrations (SICs)). To assess the relative performance of the simulations, quantitative performance metrics are calculated consistently for the climate parameters and ozone. This is important for the interpretation of the evaluation results, since biases in climate can impact biases in chemistry and vice versa. The observational data sets used for the evaluation include ozonesonde and aircraft data, meteorological reanalyses and satellite measurements. The results from a previous EMAC evaluation of a model simulation with nudging towards realistic meteorology in the troposphere have been compared to new simulations with different model setups and updated emission data sets in free-running time slice and nudged quasi chemistry-transport model (QCTM) mode. The latter two configurations are particularly important for chemistry-climate projections and for the quantification of individual sources (e.g., the transport sector) that lead to small chemical perturbations of the climate system, respectively. With the exception of some specific features which are detailed in this study, no large differences that could be related to the different setups (nudged vs. free-running) of the EMAC simulations were found, which offers the possibility to evaluate and improve the overall model with the help of shorter nudged simulations. The main difference between the two setups is a better representation of the tropospheric and stratospheric temperature in the nudged simulations, which also better reproduce stratospheric water vapor concentrations, due to the improved
Identification of slow molecular order parameters for Markov model construction
Perez-Hernandez, Guillermo; Giorgino, Toni; de Fabritiis, Gianni; Noé, Frank
2013-01-01
A goal in the kinetic characterization of a macromolecular system is the description of its slow relaxation processes, involving (i) identification of the structural changes involved in these processes, and (ii) estimation of the rates or timescales at which these slow processes occur. Most of the approaches to this task, including Markov models, Master-equation models, and kinetic network models, start by discretizing the high-dimensional state space and then characterize relaxation processes in terms of the eigenvectors and eigenvalues of a discrete transition matrix. The practical success of such an approach depends very much on the ability to finely discretize the slow order parameters. How can this task be achieved in a high-dimensional configuration space without relying on subjective guesses of the slow order parameters? In this paper, we use the variational principle of conformation dynamics to derive an optimal way of identifying the "slow subspace" of a large set of prior order parameters - either g...
Abul Kashem, Saad Bin; Ektesabi, Mehran; Nagarajah, Romesh
2012-07-01
This study examines the uncertainties in modelling a quarter car suspension system caused by the effect of different sets of suspension parameters of a corresponding mathematical model. To overcome this problem, 11 sets of identified parameters of a suspension system have been compared, taken from the most recent published work. From this investigation, a set of parameters were chosen which showed a better performance than others in respect of peak amplitude and settling time. These chosen parameters were then used to investigate the performance of a new modified continuous skyhook control strategy with adaptive gain that dictates the vehicle's semi-active suspension system. The proposed system first captures the road profile input over a certain period. Then it calculates the best possible value of the skyhook gain (SG) for the subsequent process. Meanwhile the system is controlled according to the new modified skyhook control law using an initial or previous value of the SG. In this study, the proposed suspension system is compared with passive and other recently reported skyhook controlled semi-active suspension systems. Its performances have been evaluated in terms of ride comfort and road handling performance. The model has been validated in accordance with the international standards of admissible acceleration levels ISO2631 and human vibration perception.
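The classical continuous skyhook law that the modified controller builds on can be stated in a few lines. The gains below are placeholders, and the paper's adaptive skyhook-gain update from captured road profiles is not reproduced here:

```python
def skyhook_damper_force(body_vel, rel_vel, c_sky, c_min=50.0):
    """Continuous skyhook law: apply full skyhook damping when the body
    velocity and the damper (relative) velocity act in the same sense,
    otherwise fall back to the damper's minimal setting.
    Gains are illustrative, not from the paper."""
    if body_vel * rel_vel > 0.0:
        return c_sky * body_vel      # emulate a damper anchored to the 'sky'
    return c_min * rel_vel           # a semi-active damper cannot push

# body moving up while the suspension extends -> full skyhook force
print(skyhook_damper_force(0.5, 0.2, c_sky=1000.0))   # 500.0
# body and damper velocities oppose -> minimal damping only
print(skyhook_damper_force(0.5, -0.2, c_sky=1000.0))  # -10.0
```

The adaptive variant described in the abstract would periodically recompute `c_sky` from the captured road profile instead of keeping it fixed.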
Modelling spin Hamiltonian parameters of molecular nanomagnets.
Gupta, Tulika; Rajaraman, Gopalan
2016-07-12
Molecular nanomagnets encompass a wide range of coordination complexes possessing several potential applications. A formidable challenge in realizing these potential applications lies in controlling the magnetic properties of these clusters. Microscopic spin Hamiltonian (SH) parameters describe the magnetic properties of these clusters, and viable ways to control these SH parameters are highly desirable. Computational tools play a proactive role in this area, where SH parameters such as the isotropic exchange interaction (J), anisotropic exchange interaction (Jx, Jy, Jz), double exchange interaction (B), zero-field splitting parameters (D, E) and g-tensors can be computed reliably using X-ray structures. In this feature article, we have attempted to provide a holistic view of the modelling of these SH parameters of molecular magnets. The determination of J covers various classes of molecules, from di- and polynuclear Mn complexes to the {3d-Gd}, {Gd-Gd} and {Gd-2p} classes of complexes. The estimation of anisotropic exchange coupling includes the exchange between an isotropic metal ion and an orbitally degenerate 3d/4d/5d metal ion. The double-exchange section contains some illustrative examples of mixed-valence systems, and the section on the estimation of zfs parameters covers some mononuclear transition metal complexes possessing very large axial zfs parameters. The section on the computation of g-anisotropy exclusively covers studies on mononuclear Dy(III) and Er(III) single-ion magnets. The examples depicted in this article clearly illustrate that computational tools not only aid in interpreting and rationalizing the observed magnetic properties but also possess the potential to predict new-generation MNMs.
Estimating qualitative parameters for assessment of body balance in a simulated ambulatory setting
Meulen, van Fokke B.; Reenalda, Jasper; Veltink, Peter H.
2013-01-01
Continuous daily-life monitoring of balance control of stroke survivors in an ambulatory setting is essential for optimal guidance of rehabilitation. The purpose of this study is to demonstrate the relation between qualitative parameters of body balance measured in stroke patients in a simulated ambulatory setting.
Modelling tourists arrival using time varying parameter
Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.
2017-06-01
The importance of tourism and its related sectors for economic development and poverty reduction in many countries has increased researchers’ attention to studying and modelling tourist arrivals. This work aims to demonstrate the time varying parameter (TVP) technique by modelling the arrival of Korean tourists to Bali. The number of Korean tourists visiting Bali in the period January 2010 to December 2015 was used as the dependent variable (KOR). The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Observing that tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP and its parameters were approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave a mean absolute percentage error (MAPE) of 11.24 percent and 12.86 percent, respectively.
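A scalar Kalman-filtered TVP regression of the kind described can be sketched as follows; the drifting coefficient, noise levels and variance settings are synthetic stand-ins for the KOR/WON data, not estimates from the paper:

```python
import random

def tvp_kalman(y, x, q=0.01, r=1.0):
    """Scalar time-varying-parameter regression y_t = b_t * x_t + e_t,
    where the coefficient b_t follows a random walk with variance q and
    the observation noise has variance r; returns filtered b_t."""
    b, p = 0.0, 10.0                  # diffuse prior on the coefficient
    estimates = []
    for yt, xt in zip(y, x):
        p += q                        # predict: random-walk variance grows
        s = xt * p * xt + r           # innovation variance
        k = p * xt / s                # Kalman gain
        b += k * (yt - xt * b)        # update with the forecast error
        p *= (1.0 - k * xt)
        estimates.append(b)
    return estimates

rng = random.Random(3)
n = 300
x = [rng.uniform(0.5, 2.0) for _ in range(n)]
true_b = [1.0 + 0.005 * t for t in range(n)]      # slowly drifting coefficient
y = [b * xi + rng.gauss(0.0, 0.3) for b, xi in zip(true_b, x)]
est = tvp_kalman(y, x, q=0.001, r=0.09)
print(round(est[-1], 2), round(true_b[-1], 2))
```

An ordinary (fixed-coefficient) regression would average the drift away; the filter instead tracks the coefficient's path through time.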
Bayesian parameter estimation for nonlinear modelling of biological pathways
Directory of Open Access Journals (Sweden)
Ghasemi Omid
2011-12-01
Background: The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capability for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of this high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. Results: We used the Runge-Kutta method to transform differential equations to difference equations, assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly
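The MCMC step can be illustrated with a random-walk Metropolis sampler estimating the half-saturation constant of a Hill equation from synthetic data. This sketch skips the Runge-Kutta discretization of the full pathway model, and all constants (true parameters, noise level, chain length) are invented for illustration:

```python
import math
import random

def hill(s, vmax, k, n):
    """Hill equation reaction rate."""
    return vmax * s ** n / (k ** n + s ** n)

rng = random.Random(5)
VMAX, K_TRUE, N = 1.0, 2.0, 2.0
substrate = [0.25 * i for i in range(1, 41)]
data = [hill(s, VMAX, K_TRUE, N) + rng.gauss(0.0, 0.02) for s in substrate]

def log_post(k, sigma=0.02):
    """Gaussian log-likelihood with a flat prior on k > 0."""
    if k <= 0.0:
        return -math.inf
    return sum(-0.5 * ((d - hill(s, VMAX, k, N)) / sigma) ** 2
               for s, d in zip(substrate, data))

# random-walk Metropolis over the half-saturation constant k
k, lp = 1.0, log_post(1.0)
chain = []
for _ in range(5000):
    prop = k + rng.gauss(0.0, 0.1)
    lp_prop = log_post(prop)
    if math.log(rng.random()) < lp_prop - lp:   # accept/reject step
        k, lp = prop, lp_prop
    chain.append(k)
post = chain[1000:]                # discard burn-in
k_mean = sum(post) / len(post)
print(round(k_mean, 2))
```

The posterior mean should recover the true half-saturation constant; in the paper's setting the likelihood would instead come from the discretized ODE predictions at each time step.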
Enhancing debris flow modeling parameters integrating Bayesian networks
Graf, C.; Stoffel, M.; Grêt-Regamey, A.
2009-04-01
Applied debris-flow modeling requires suitably constrained input parameter sets. Depending on the model used, there is a series of parameters to define before running the model. Normally, the database describing the event, the initiation conditions, the flow behavior, the deposition process and, above all, the potential range of possible debris-flow events in a certain torrent is limited. There are only a few places in the world where we can fortunately find valuable data sets describing the event history of debris-flow channels, delivering information on the spatial and temporal distribution of former flow paths and deposition zones. Tree-ring records in combination with detailed geomorphic mapping, for instance, provide such data sets over a long time span. Considering the significant loss potential associated with debris-flow disasters, it is crucial that decisions made in regard to hazard mitigation are based on a consistent assessment of the risks. This in turn necessitates a proper assessment of the uncertainties involved in the modeling of debris-flow frequencies and intensities, the possible run-out extent, as well as the estimation of the damage potential. In this study, we link a Bayesian network to a Geographic Information System in order to assess debris-flow risk. We identify the major sources of uncertainty and show the potential of Bayesian inference techniques to improve the debris-flow model. We model the flow paths and deposition zones of a highly active debris-flow channel in the Swiss Alps using the numerical 2-D model RAMMS. Because uncertainties in run-out areas cause large changes in risk estimations, we use the flow path and deposition zone information of reconstructed debris-flow events derived from dendrogeomorphological analysis covering more than 400 years to update the input parameters of the RAMMS model. The probabilistic model, which consistently incorporates this available information, can serve as a basis for spatial risk
Bates, P. D.; Neal, J. C.; Fewtrell, T. J.
2012-12-01
In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single code/multiple physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
Localization of compact invariant sets of the Lorenz' 1984 model
Starkov, K. E.
In 1984 E. Lorenz published a paper [1] in which he proposed "the simplest possible general circulation model": $\dot{x} = -y^2 - z^2 - ax + aF$, $\dot{y} = xy - bxz - y + G$, $\dot{z} = bxy + xz - z$, which is referred to as the Lorenz 1984 model. The existence of chaos was shown in [1, 2] for different values of the parameters. Dynamical studies of this system were carried out in papers [1, 2], [3], [4]. This paper is devoted to the study of the localization problem for compact invariant sets of the Lorenz 1984 model with the help of an approach elaborated in the papers of Krishchenko and Starkov, see e.g. [5]. This problem is an important topic in the study of the dynamics of a chaotic system because of the interest in the long-time behavior of the system. In this work we establish that all compact invariant sets of the Lorenz 1984 model are contained in the set $\{x \le F;\ x^2 + y^2 + z^2 \le \eta^2 = [2(a + 2)F^2 + 3G^2 + 2G\sqrt{aF^2 + G^2}]/4\}$. Further, we improve this localization by refining the bound $\eta$ with the help of additional localization sets. By applying cylindrical coordinates to the Lorenz 1984 model we derive yet another localization set of the form $\{y^2 + z^2 \le G^2(1 + b^{-2})\exp(4\pi b^{-1})\}$. Finally, we discuss how to improve the final localization set and consider one example.
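The first localization bound can be checked numerically by integrating the system with a fourth-order Runge-Kutta scheme. The parameter values a = 0.25, b = 4, F = 8, G = 1 are commonly used Lorenz-84 settings, assumed here for illustration; this is a numerical sanity check, not a proof:

```python
import math

def lorenz84(state, a=0.25, b=4.0, F=8.0, G=1.0):
    """Right-hand side of the Lorenz 1984 system."""
    x, y, z = state
    return (-y * y - z * z - a * x + a * F,
            x * y - b * x * z - y + G,
            b * x * y + x * z - z)

def rk4_step(state, h=0.01):
    """Classical fourth-order Runge-Kutta step for the Lorenz-84 flow."""
    k1 = lorenz84(state)
    k2 = lorenz84(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = lorenz84(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = lorenz84(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h / 6.0 * (p + 2 * q + 2 * u + v)
                 for s, p, q, u, v in zip(state, k1, k2, k3, k4))

a, F, G = 0.25, 8.0, 1.0
# the bound eta^2 = [2(a+2)F^2 + 3G^2 + 2G*sqrt(aF^2 + G^2)] / 4
eta_sq = (2 * (a + 2) * F ** 2 + 3 * G ** 2
          + 2 * G * math.sqrt(a * F ** 2 + G ** 2)) / 4.0

state = (1.0, 1.0, 1.0)
for i in range(20_000):                 # integrate well past the transient
    state = rk4_step(state)
    if i > 2000:                        # check the localization bound
        x, y, z = state
        assert x <= F and x * x + y * y + z * z <= eta_sq
print("bound holds; eta^2 =", round(eta_sq, 1))
```

On the attractor the trajectory stays far inside the ellipsoidal bound, consistent with the statement that the bound can be further refined.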
Lu, Zhi John; Turner, Douglas H; Mathews, David H
2006-01-01
A complete set of nearest neighbor parameters to predict the enthalpy change of RNA secondary structure formation was derived. These parameters can be used with available free energy nearest neighbor parameters to extend the secondary structure prediction of RNA sequences to temperatures other than 37 degrees C. The parameters were tested by predicting the secondary structures of sequences with known secondary structure that are from organisms with known optimal growth temperatures. Compared with the previous set of enthalpy nearest neighbor parameters, the sensitivity of base pair prediction improved from 65.2 to 68.9% at optimal growth temperatures ranging from 10 to 60 degrees C. Base pair probabilities were predicted with a partition function and the positive predictive value of structure prediction is 90.4% when considering the base pairs in the lowest free energy structure with pairing probability of 0.99 or above. Moreover, a strong correlation is found between the predicted melting temperatures of RNA sequences and the optimal growth temperatures of the host organism. This indicates that organisms that live at higher temperatures have evolved RNA sequences with higher melting temperatures.
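The temperature extrapolation underlying this approach is the standard dG(T) = dH - T*dS identity, with dS recovered from the tabulated dH and the dG at 37 C (310.15 K). The stack values below are invented for illustration, not entries from the nearest neighbor tables:

```python
def delta_g_at(temp_c, dh, dg37):
    """Extrapolate a nearest-neighbor free energy from 37 C to another
    temperature, assuming dH and dS are temperature independent:
    dG(T) = dH - T * dS,  with  dS = (dH - dG37) / 310.15 K."""
    t = temp_c + 273.15
    ds = (dh - dg37) / 310.15          # kcal/(mol*K)
    return dh - t * ds

# illustrative (not tabulated) values for one base-pair stack, in kcal/mol
DH, DG37 = -10.5, -2.1
print(round(delta_g_at(37.0, DH, DG37), 2))   # recovers dG37: -2.1
print(round(delta_g_at(55.0, DH, DG37), 2))   # less stable (less negative) at 55 C
```

Applying this per nearest-neighbor term is what lets free energy minimization run at temperatures other than 37 C, which in turn yields the predicted melting temperatures the abstract correlates with optimal growth temperature.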
Directory of Open Access Journals (Sweden)
Aušra ADOMAITIENĖ
2011-11-01
Full Text Available During the manufacture of fabrics from different raw materials it was noticed that, after removing the fabric from the weaving loom and after stabilization of the fabric structure, the changes in the parameters of the fabric structure are not regular. This investigation analysed how the weaving loom technological parameters (heald cross moment and initial warp tension) should be chosen, and how to predict the changes of fabric structure parameters and mechanical properties. The dependencies of the changes of half-wool fabric structure parameters (weft setting, fabric thickness and projections of the fabric cross-section) and mechanical properties (breaking force, elongation at break, static friction force and static friction coefficient) on the weaving loom setting parameters (heald cross moment and initial warp tension) were analysed. The orthogonal Box plan of two factors was used, 3-D dependencies were drawn, and empirical equations of these dependencies were established. http://dx.doi.org/10.5755/j01.ms.17.4.780
Institute of Scientific and Technical Information of China (English)
Yu Dong-Chuan; Wu Ai-Guo
2006-01-01
A novel LaSalle invariant set theory (LSIST) based adaptive asymptotic synchronization (LSISAAS) method is proposed to asymptotically synchronize a Duffing system with unknown parameters, which are also treated as system states. The LSISAAS strategy depends on only one piece of information, i.e. one state of the master system. According to the LSIST, the LSISAAS method can asymptotically synchronize the full state of the master system as well as the unknown system parameters. Simulation results also validate that the LSISAAS approach can achieve asymptotic synchronization.
Directory of Open Access Journals (Sweden)
Ayla Sayli
2016-08-01
Full Text Available Data science for engineers is a recent research area which suggests analysing large data sets in order to obtain data analytics and use them for better design and modelling. Ship design practice reveals that conceptual ship design is critically important for a successful basic design. Conceptual ship design needs to identify the true set of design variables influencing vessel performance and costs, so as to define the best possible basic design by means of a performance prediction model constructed by design engineers. The main idea of this paper is to determine a relational classification of a set of small vessels using their hull form parameters and performance characteristics, defined by the transfer functions of heave and pitch motions and of absolute vertical acceleration, with our in-house software application based on the K-Means algorithm from data mining. The application is implemented in the C# programming language on a Microsoft SQL Server database. We also use the Elbow method to estimate the true number of clusters for the K-Means algorithm. The computational results show that the considered set of small vessels can be clustered into three categories according to the functional relations of their hull form parameters and transfer functions, considering all cases of three loading conditions, seven ship speeds as non-dimensional Froude numbers (Fn) and nine wave-length to ship-length ratios (λ/L).
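The K-Means-plus-Elbow workflow the paper describes can be sketched compactly. The data, feature space and cluster count below are synthetic stand-ins, not the vessels from the study (which used a C# implementation on SQL Server):

```python
# Minimal K-Means with an elbow-style inertia curve, illustrating the
# clustering approach on synthetic 2-D "hull parameter" data.
import numpy as np

def kmeans(X, k, iters=100):
    # deterministic farthest-point initialisation, then Lloyd iterations
    centers = [X[0]]
    for _ in range(k - 1):
        dists = np.min([np.square(X - c).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(dists))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    inertia = float(((X - centers[labels]) ** 2).sum())
    return labels, centers, inertia

# three synthetic groups of "vessels" in a two-dimensional parameter space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, size=(30, 2)) for loc in [(0, 0), (4, 0), (2, 4)]])

# elbow curve: within-cluster sum of squares for k = 1..5
inertias = [kmeans(X, k)[2] for k in range(1, 6)]
```

The elbow shows up as a sharp drop in inertia up to the true number of clusters (three here), after which the curve flattens.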
Modeling of Parameters of Subcritical Assembly SAD
Petrochenkov, S; Puzynin, I
2005-01-01
The accepted conceptual design of the experimental Subcritical Assembly in Dubna (SAD) is based on a MOX core with a nominal unit capacity of 25 kW (thermal). This corresponds to a multiplication coefficient $k_{\rm eff} = 0.95$ and an accelerator beam power of 1 kW. A subcritical assembly driven by the existing 660 MeV proton accelerator at the Joint Institute for Nuclear Research has been modelled in order to choose the optimal parameters for future experiments. The Monte Carlo method was used to simulate neutron spectra, energy deposition and dose calculations. Some of the calculation results are presented in the paper.
Environmental Transport Input Parameters for the Biosphere Model
Energy Technology Data Exchange (ETDEWEB)
M. A. Wasiolek
2003-06-27
], Section 6.2). Parameter values developed in this report, and the related FEPs, are listed in Table 1-1. The relationship between the parameters and FEPs was based on a comparison of the parameter definition and the FEP descriptions as presented in BSC (2003 [160699], Section 6.2). The parameter values developed in this report support the biosphere model and are reflected in the TSPA through the biosphere dose conversion factors (BDCFs). Biosphere modeling focuses on radionuclides screened for the TSPA-LA (BSC 2002 [160059]). The same list of radionuclides is used in this analysis (Section 6.1.4). The analysis considers two human exposure scenarios (groundwater and volcanic ash) and climate change (Section 6.1.5). This analysis combines and revises two previous reports, "Transfer Coefficient Analysis" (CRWMS M&O 2000 [152435]) and "Environmental Transport Parameter Analysis" (CRWMS M&O 2001 [152434]), because the new ERMYN biosphere model requires a redefined set of input parameters. The scope of this analysis includes providing a technical basis for the selection of radionuclide- and element-specific biosphere parameters (except for Kd) that are important for calculating BDCFs based on the available radionuclide inventory abstraction data. The environmental transport parameter values were developed specifically for use in the biosphere model and may not be appropriate for other applications.
Probabilistic Constraint Programming for Parameters Optimisation of Generative Models
Zanin, Massimiliano; Sousa, Pedro A C; Cruz, Jorge
2015-01-01
Complex networks theory has commonly been used for modelling and understanding the interactions taking place between the elements composing complex systems. More recently, the use of generative models has gained momentum, as they allow identifying which forces and mechanisms are responsible for the appearance of given structural properties. In spite of this interest, several problems remain open, one of the most important being the design of robust mechanisms for finding the optimal parameters of a generative model, given a set of real networks. In this contribution, we address this problem by means of Probabilistic Constraint Programming. By using as an example the reconstruction of networks representing brain dynamics, we show how this approach is superior to other solutions, in that it allows a better characterisation of the parameters space, while requiring a significantly lower computational cost.
DEFF Research Database (Denmark)
Suárez, Carlos Gómez; Reigosa, Paula Diaz; Iannuzzo, Francesco;
2016-01-01
An original tool for parameter extraction of PSpice models has been released, enabling a simple parameter identification. A physics-based IGBT model is used to demonstrate that the optimization tool is capable of generating a set of parameters which predicts the steady-state and switching behavio...
Tanabe, Shuichi; Nakagawa, Hiroshi; Watanabe, Tomoyuki; Minami, Hidemi; Kano, Manabu; Urbanetz, Nora A
2016-09-10
Designing efficient, robust process parameters in drug product manufacturing is important to assure a drug's critical quality attributes. In this research, an efficient, novel procedure for setting coating process parameters was developed, which establishes a prediction model for suitable input process parameters by utilizing prior manufacturing knowledge for partial least squares regression (PLSR). In the proposed procedure, target values or ranges of the output parameters are first determined, including tablet moisture content, spray mist condition, and mechanical stress on tablets. Following the preparation of predictive models relating input process parameters to the corresponding output parameters, optimal input process parameters are determined using these models so that the output parameters stay within the target ranges. In predicting the exhaust air temperature output parameter, which reflects the tablets' moisture content, PLSR was employed based on prior measured data (such as batch records of other products rather than designed experiments), minimizing the need for new experiments. The PLSR model proved more accurate at predicting the exhaust air temperature than a conventional semi-empirical thermodynamic model. A commercial-scale verification demonstrated that the proposed process parameter setting procedure enabled assurance of the quality of tablet appearance without any trial-and-error experiments.
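The PLSR step can be sketched with a minimal PLS1 (NIPALS) implementation. The data below are synthetic stand-ins for batch-record inputs and an exhaust-air-temperature-like output; the paper's actual variables and data are not reproduced:

```python
# Minimal PLS1 (NIPALS) sketch: predict one output (e.g. an exhaust-air-
# temperature-like response) from several input process parameters.
import numpy as np

def pls1_fit(X, y, n_components):
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    Xr, yr = Xc.copy(), yc.copy()
    for _ in range(n_components):
        w = Xr.T @ yr                 # weight vector from X-y covariance
        w /= np.linalg.norm(w)
        t = Xr @ w                    # scores
        tt = t @ t
        p = Xr.T @ t / tt             # X loadings
        q = (yr @ t) / tt             # y loading
        Xr = Xr - np.outer(t, p)      # deflate
        yr = yr - t * q
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)   # regression coefficients
    return B, X.mean(0), y.mean()

def pls1_predict(model, X):
    B, xm, ym = model
    return (X - xm) @ B + ym

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))              # 5 synthetic input process parameters
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.05 * rng.normal(size=50)

model = pls1_fit(X, y, n_components=2)
r2 = 1 - ((y - pls1_predict(model, X)) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

PLSR is attractive here precisely because historical batch records are collinear and unplanned, a setting where latent-variable regression is more stable than ordinary least squares.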
Moose models with vanishing $S$ parameter
Casalbuoni, R; Dominici, Daniele
2004-01-01
In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the $S$ parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on $K$ SU(2) gauge groups, $K+1$ chiral fields and electroweak groups $SU(2)_L$ and $U(1)_Y$ at the ends of the chain of the moose. $S$ vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical non local field connecting the two ends of the moose. Then the model acquires an additional custodial symmetry which protects this result. We also examine the possibility of a strong suppression of $S$ through an exponential behavior of the link couplings, as suggested by the Randall-Sundrum metric.
Model parameters for simulation of physiological lipids
McGlinchey, Nicholas
2016-01-01
Coarse grain simulation of proteins in their physiological membrane environment can offer insight across timescales, but requires a comprehensive force field. Parameters are explored for multicomponent bilayers composed of unsaturated lipids DOPC and DOPE, mixed‐chain saturation POPC and POPE, and anionic lipids found in bacteria: POPG and cardiolipin. A nonbond representation obtained from multiscale force matching is adapted for these lipids and combined with an improved bonding description of cholesterol. Equilibrating the area per lipid yields robust bilayer simulations and properties for common lipid mixtures with the exception of pure DOPE, which has a known tendency to form nonlamellar phase. The models maintain consistency with an existing lipid–protein interaction model, making the force field of general utility for studying membrane proteins in physiologically representative bilayers. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26864972
Izadi, Nosrat; Rashidi, Ali Morad; Horri, Bahman Amini; Mosoudi, Mohamad Reza; Bozorgzadeh, Hamid Reza; Zeraatkar, Ahmad
2011-06-01
In this work, methane was decomposed into hydrogen and carbon to determine its kinetic behavior during reaction over a Co-Mo-MgO supported catalyst using the CVD (chemical vapor deposition) technique. Decomposition of methane molecules was performed in a continuous fixed-bed reactor to obtain data for simulating methane decomposition in a gas-phase heterogeneous medium. The products and reactants of the reaction were analyzed on a molecular sieve column followed by GC analysis of the fractions to determine the amount of product formed or reactant consumed. The synthesis of single-walled carbon nanotubes was performed at atmospheric pressure and at different temperatures and reactant concentrations. The experimental data, analyzed to suggest a formula for calculating the initial specific reaction rate of carbon nanotube synthesis, were fitted by several mathematical models derived from different mechanisms based on Langmuir-Hinshelwood expressions. The suggested mechanism, based on dissociative adsorption of methane, appears to explain the catalytic performance in the range of operating conditions studied. The apparent activation energy for the growth of SWNTs was estimated from the Arrhenius equation. The as-grown SWNT products were characterized by SEM, TEM and Raman spectroscopy after purification. The catalyst deactivation was found to depend on time, reaction temperature and partial pressure of methane, indicating that the deactivation reaction can be modelled by a simple apparent second-order reaction.
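The Arrhenius estimate of an apparent activation energy comes from the slope of ln k versus 1/T. A minimal sketch with synthetic rates (not the paper's measured values):

```python
# Estimating an apparent activation energy from rates at several temperatures
# via the Arrhenius equation, ln k = ln A - Ea/(R*T).
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_fit(temps_K, rates):
    """Least-squares slope of ln k vs 1/T gives -Ea/R."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -slope * R  # activation energy in J/mol

# synthetic rates generated with Ea = 90 kJ/mol and pre-exponential A = 1e6
Ea_true, A = 90e3, 1e6
temps = [923.0, 973.0, 1023.0, 1073.0]   # illustrative CVD temperatures, K
rates = [A * math.exp(-Ea_true / (R * T)) for T in temps]
Ea_est = arrhenius_fit(temps, rates)
```

In practice the initial specific rates extracted from the kinetic model at each temperature take the place of the synthetic rates here.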
Reachable set modeling and engagement analysis of exoatmospheric interceptor
Institute of Scientific and Technical Information of China (English)
Chai Hua; Liang Yangang; Chen Lei; Tang Guojin
2014-01-01
A novel reachable set (RS) model is developed within a framework of exoatmospheric interceptor engagement analysis. The boost phase steering scheme and trajectory distortion mechanism of the interceptor are firstly explored. A mathematical model of the distorted RS is then formulated through a dimension-reduction analysis. By treating the outer boundary of the RS on the sphere surface as a spherical convex hull, two relevant theorems are proposed and the RS envelope is depicted by computational geometry theory. Based on the RS model, algorithms for intercept window analysis and launch parameter determination are proposed, and numerical simulations are carried out for interceptors with different energy or launch points. Results show that the proposed method can avoid intensive on-line computation and provide an accurate and effective approach for interceptor engagement analysis. The suggested RS model also serves as a ready reference for other related problems such as interceptor effectiveness evaluation and platform disposition.
Reachable set modeling and engagement analysis of exoatmospheric interceptor
Directory of Open Access Journals (Sweden)
Chai Hua
2014-12-01
Full Text Available A novel reachable set (RS) model is developed within a framework of exoatmospheric interceptor engagement analysis. The boost phase steering scheme and trajectory distortion mechanism of the interceptor are firstly explored. A mathematical model of the distorted RS is then formulated through a dimension-reduction analysis. By treating the outer boundary of the RS on the sphere surface as a spherical convex hull, two relevant theorems are proposed and the RS envelope is depicted by computational geometry theory. Based on the RS model, algorithms for intercept window analysis and launch parameter determination are proposed, and numerical simulations are carried out for interceptors with different energy or launch points. Results show that the proposed method can avoid intensive on-line computation and provide an accurate and effective approach for interceptor engagement analysis. The suggested RS model also serves as a ready reference for other related problems such as interceptor effectiveness evaluation and platform disposition.
Assessment of Governor Control Parameter Settings of a Submarine Diesel Engine
2013-03-01
Peter Hield and Michael Newman ... generators to provide power for propulsion and the hotel load. The governor, often a proportional-integral controller, attempts to maintain a constant
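The proportional-integral governor mentioned in the fragment above can be illustrated on a first-order engine model with a step load change. The engine constants and gains below are illustrative, not values from the report:

```python
# Sketch of a PI governor holding engine speed at a setpoint through a step
# load change; the first-order engine model and gains are illustrative only.

def simulate_pi(kp, ki, setpoint=600.0, load=50.0, dt=0.01, steps=5000):
    speed, integral = setpoint, 0.0
    inertia = 10.0           # lumped rotational inertia (illustrative)
    damping = 0.5            # speed-proportional friction/load torque
    for n in range(steps):
        err = setpoint - speed
        integral += err * dt
        fuel = kp * err + ki * integral          # governor torque demand
        disturbance = load if n > steps // 2 else 0.0   # step load halfway
        accel = (fuel - damping * speed - disturbance) / inertia
        speed += accel * dt
    return speed

final_pi = simulate_pi(20.0, 10.0)   # PI: returns to the setpoint after the load step
final_p = simulate_pi(20.0, 0.0)     # P-only: settles with a steady-state droop
```

The integral term is what rejects the constant load disturbance; with proportional action alone the speed settles below the setpoint, the classic droop that governor tuning must trade off against transient response.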
Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.
Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W
2015-07-23
A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.
Information Theoretic Tools for Parameter Fitting in Coarse Grained Models
Kalligiannaki, Evangelia
2015-01-07
We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.
On retrial queueing model with fuzzy parameters
Ke, Jau-Chuan; Huang, Hsin-I.; Lin, Chuen-Horng
2007-01-01
This work constructs the membership functions of the system characteristics of a retrial queueing model with fuzzy customer arrival, retrial and service rates. The α-cut approach is used to transform a fuzzy retrial-queue into a family of conventional crisp retrial queues in this context. By means of the membership functions of the system characteristics, a set of parametric non-linear programs is developed to describe the family of crisp retrial queues. A numerical example is solved successfully to illustrate the validity of the proposed approach. Because the system characteristics are expressed and governed by the membership functions, more information is provided for use by management. By extending this model to the fuzzy environment, fuzzy retrial-queue is represented more accurately and analytic results are more useful for system designers and practitioners.
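The α-cut construction above can be illustrated on a simpler queue. The sketch below cuts triangular fuzzy arrival and service rates at several α levels and bounds a crisp performance measure on each cut, using the M/M/1 mean queue length L = ρ/(1-ρ) as a stand-in for the paper's retrial-queue characteristics:

```python
# Alpha-cut sketch of a fuzzy queue: triangular fuzzy rates are cut at level
# alpha and interval bounds of a crisp measure are computed on each cut.
# For simplicity this uses the M/M/1 mean queue length, not a retrial queue.

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (a, b, c) at membership level alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def mean_queue_length(lam, mu):
    rho = lam / mu
    return rho / (1.0 - rho)

fuzzy_lam = (3.0, 4.0, 5.0)     # fuzzy arrival rate (illustrative)
fuzzy_mu = (9.0, 10.0, 11.0)    # fuzzy service rate (illustrative)

cuts = {}
for alpha in (0.0, 0.5, 1.0):
    lam_lo, lam_hi = alpha_cut(fuzzy_lam, alpha)
    mu_lo, mu_hi = alpha_cut(fuzzy_mu, alpha)
    # L increases in lambda and decreases in mu, so the interval endpoints
    # come from opposite corners of the parameter box
    cuts[alpha] = (mean_queue_length(lam_lo, mu_hi),
                   mean_queue_length(lam_hi, mu_lo))
```

Stacking the nested intervals over all α levels reconstructs the membership function of the performance measure, which is the extra information the abstract says management gains from the fuzzy formulation.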
Prediction of interest rate using CKLS model with stochastic parameters
Energy Technology Data Exchange (ETDEWEB)
Ying, Khor Chia [Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Selangor (Malaysia); Hin, Pooi Ah [Sunway University Business School, No. 5, Jalan Universiti, Bandar Sunway, 47500 Subang Jaya, Selangor (Malaysia)
2014-06-19
The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated from the j-th window, defined as the set consisting of the observed interest rates at the j′-th time points with j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_{j+n} at the (j+n)-th time point via a four-dimensional conditional distribution derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_{j+n+1} of the interest rate at the next time point when the value r_{j+n} of the interest rate is given. From the same four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_{j+n+d} at the d-th (d ≥ 2) time point ahead. The prediction intervals based on the CKLS model with stochastic parameters are found to have better coverage of the observed future interest rates than those based on the model with fixed parameters.
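The underlying CKLS dynamics dr = (α + βr)dt + σr^γ dW can be simulated with an Euler-Maruyama scheme. The sketch below uses fixed illustrative parameters; the paper's windowed stochastic-parameter estimation is not reproduced:

```python
# Euler-Maruyama simulation of the CKLS short-rate model
# dr = (alpha + beta*r) dt + sigma * r**gamma dW, with fixed illustrative
# parameters (the paper treats these four parameters as stochastic).
import math, random

def simulate_ckls(r0, alpha, beta, sigma, gamma, dt, steps, seed=0):
    rng = random.Random(seed)
    r = r0
    path = [r]
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))
        r += (alpha + beta * r) * dt + sigma * r ** gamma * dW
        r = max(r, 1e-12)            # keep the rate positive for r**gamma
        path.append(r)
    return path

# mean-reverting toward -alpha/beta = 0.05 with illustrative parameters
path = simulate_ckls(r0=0.03, alpha=0.01, beta=-0.2, sigma=0.05, gamma=1.5,
                     dt=1/252, steps=2520)
```

With β < 0 the drift pulls the rate toward the long-run level -α/β, and γ controls how strongly volatility scales with the rate level (γ = 0.5 recovers CIR, γ = 0 Vasicek).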
Comparison of Parameter Estimation Methods for Transformer Weibull Lifetime Modelling
Institute of Scientific and Technical Information of China (English)
ZHOU Dan; LI Chengrong; WANG Zhongdong
2013-01-01
The two-parameter Weibull distribution is the most widely adopted lifetime model for power transformers. An appropriate parameter estimation method is essential to guarantee the accuracy of a derived Weibull lifetime model. Six popular parameter estimation methods (the maximum likelihood estimation method, two median rank regression methods, one regressing X on Y and the other regressing Y on X, the Kaplan-Meier method, the method based on the cumulative hazard plot, and Li's method) are reviewed and compared in order to find the optimal one for transformer Weibull lifetime modelling. The comparison took several different scenarios into consideration: 10,000 sets of lifetime data, each with a sampling size of 40-1,000 and a censoring rate of 90%, were obtained by Monte Carlo simulation for each scenario. The scale and shape parameters of the Weibull distribution estimated by the six methods, as well as their mean value, median value and 90% confidence bands, are obtained. Cross comparison of these results reveals that, among the six methods, the maximum likelihood method is the best one, since it provides the most accurate Weibull parameters, i.e. parameters having the smallest bias in both mean and median values, as well as the shortest 90% confidence band. The maximum likelihood method is therefore recommended over the other methods in transformer Weibull lifetime modelling.
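The maximum likelihood fit favoured by the comparison can be sketched for complete data: the MLE score for the shape k reduces to a one-dimensional root-finding problem, after which the scale follows in closed form. This is a simplified sketch; the paper additionally handles heavily censored samples, which this scheme does not:

```python
# Maximum-likelihood fit of a two-parameter Weibull distribution to complete
# (uncensored) lifetime data via bisection on the shape-parameter score.
import math, random

def weibull_mle(data):
    logs = [math.log(x) for x in data]
    mean_log = sum(logs) / len(data)

    def score(k):  # MLE score for the shape; strictly increasing in k
        num = sum(x ** k * math.log(x) for x in data)
        den = sum(x ** k for x in data)
        return num / den - 1.0 / k - mean_log

    lo, hi = 0.05, 50.0
    for _ in range(60):                # bisection for the shape parameter
        mid = (lo + hi) / 2
        if score(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = (lo + hi) / 2
    lam = (sum(x ** k for x in data) / len(data)) ** (1.0 / k)
    return k, lam                      # shape, scale

# synthetic lifetimes from a Weibull with shape 2.0 and scale 10.0
rng = random.Random(42)
data = [10.0 * (-math.log(1 - rng.random())) ** 0.5 for _ in range(2000)]
shape_hat, scale_hat = weibull_mle(data)
```

With censored data the likelihood gains survival-function terms for the censored units, which is where the MLE's advantage over the plotting-position methods in the comparison comes from.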
Uncertainty Quantification for Optical Model Parameters
Lovell, A E; Sarich, J; Wild, S M
2016-01-01
Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quantified. The purpose of this work is to investigate the uncertainties in nuclear reactions that result from fitting a given model to elastic-scattering data, as well as to study how these uncertainties propagate to the inelastic and transfer channels. We use statistical methods to determine a best fit and create corresponding 95% confidence bands. A simple model of the process is fit to elastic-scattering data and used to predict either inelastic or transfer cross sections. In this initial work, we assume that our model is correct, and the only uncertainties come from the variation of the fit parameters. We study a number of reactions involving neutron and deuteron p...
Numerical modeling of partial discharges parameters
Directory of Open Access Journals (Sweden)
Kartalović Nenad M.
2016-01-01
Full Text Available In recent years, the testing of partial discharges and their use for diagnosing the insulation condition of high-voltage generators, transformers, cables and other high-voltage equipment have developed rapidly. This is a result of developments in electronics as well as of growing knowledge about the processes underlying partial discharges. The aim of this paper is to contribute to a better understanding of the phenomenon of partial discharge by considering the relevant physical processes in insulating materials and insulation systems. Pre-breakdown is considered in terms of specific processes, their development at the local level and their impact on the particular insulating material. This approach to the phenomenon of partial discharge makes it possible to better take the relevant discharge parameters into account and to build a better numerical model of partial discharges.
Invariant sets techniques for Youla-Kučera parameter synthesis
Luca, Anamaria; Rodriguez-Ayerbe, Pedro; Dumur, Didier
2011-09-01
This article addresses an invariant sets approach for Youla-Kučera parameter synthesis using linear matrix inequality (LMI) techniques. Given a linear discrete-time observer-based system affected by bounded disturbances and constraints, the proposed technique furnishes the best Youla parameter in terms of finding an invariant ellipsoidal set satisfying the constraints and having the maximal ellipsoidal projection on the state space. Compared with the results obtained for an observer-based design, the synthesis of a Youla parameter provides a larger ellipsoidal projection and an improved sensitivity function. The price to pay for these achievements in terms of robustness is usually a slow closed-loop performance with degraded complementary sensitivity function. In order to obtain a compromise between robustness and performance two methods are proposed: the first method imposes a new bound on the Lyapunov function decreasing speed and the second refers to the pole placement concept. The aforementioned approaches are finally validated in simulation considering position control of an induction motor.
Multiscale Parameter Regionalization for consistent global water resources modelling
Wanders, Niko; Wood, Eric; Pan, Ming; Samaniego, Luis; Thober, Stephan; Kumar, Rohini; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc F. P.
2017-04-01
Due to an increasing demand for high- and hyper-resolution water resources information, it has become increasingly important to ensure consistency in model simulations across scales. This consistency can be ensured by scale independent parameterization of the land surface processes, even after calibration of the water resource model. Here, we use the Multiscale Parameter Regionalization technique (MPR, Samaniego et al. 2010, WRR) to allow for a novel, spatially consistent, scale independent parameterization of the global water resource model PCR-GLOBWB. The implementation of MPR in PCR-GLOBWB allows for calibration at coarse resolutions and subsequent parameter transfer to the hyper-resolution. In this study, the model was calibrated at 50 km resolution over Europe and validation carried out at resolutions of 50 km, 10 km and 1 km. MPR allows for a direct transfer of the calibrated transfer function parameters across scales and we find that we can maintain consistent land-atmosphere fluxes across scales. Here we focus on the 2003 European drought and show that the new parameterization allows for high-resolution calibrated simulations of water resources during the drought. For example, we find a reduction from 29% to 9.4% in the percentile difference in the annual evaporative flux across scales when compared against default simulations. Soil moisture errors are reduced from 25% to 6.9%, clearly indicating the benefits of the MPR implementation. This new parameterization allows us to show more spatial detail in water resources simulations that are consistent across scales and also allow validation of discharge for smaller catchments, even with calibrations at a coarse 50 km resolution. The implementation of MPR allows for novel high-resolution calibrated simulations of a global water resources model, providing calibrated high-resolution model simulations with transferred parameter sets from coarse resolutions. The applied methodology can be transferred to other
Benchmark data set for wheat growth models
DEFF Research Database (Denmark)
Asseng, S; Ewert, F.; Martre, P;
2015-01-01
The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, max...
Parameter Selection and Performance Analysis of Mobile Terminal Models Based on Unity3D
Institute of Scientific and Technical Information of China (English)
KONG Li-feng; ZHAO Hai-ying; XU Guang-mei
2014-01-01
The mobile platform is now widely seen as a promising multimedia service with a favorable user group and market prospects. To study the influence of mobile terminal models on the quality of scene roaming, a parameter setting platform for mobile terminal models is established in this paper to select model parameters and performance indices on different mobile platforms. The test platform is built on the model optimality principle, analyzing the performance curves of mobile terminals in different scene models and then deducing the external parameters for model establishment. Simulation results prove that the established test platform is able to produce the parameter and performance matching list of a mobile terminal model.
Parameter Optimisation for the Behaviour of Elastic Models over Time
DEFF Research Database (Denmark)
Mosegaard, Jesper
2004-01-01
Optimisation of parameters for elastic models is essential for comparison or finding equivalent behaviour of elastic models when parameters cannot simply be transferred or converted. This is the case with a large range of commonly used elastic models. In this paper we present a general method...... that will optimise parameters based on the behaviour of the elastic models over time....
Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning
Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron
2015-01-01
Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure-and patient-specific basis. In this paper, we present an algorithm that generates sets ...
On the Dimension of Sliced Measures and the Set of Exceptional Parameters
Orponen, Tuomas
2010-01-01
Let $(\\Omega,d)$ be a compact separable metric space, let $B \\subset \\Omega$ be a Borel set with $\\dim B > 1$, and let $\\pi_{\\lambda} \\colon \\Omega \\to \\R$, $\\lambda \\in J$, be a family of Lipschitz-mappings parametrised by an open interval $J \\subset \\R$. We consider some sufficient conditions for the mappings $\\pi_{\\lambda}$ under which, for almost all parameters $\\lambda \\in J$, it holds that $\\dim [B \\cap \\pi_{\\lambda}^{-1}\\{t\\}] = \\dim B - 1$ for $\\calL^{1}$ positively many points $t \\in \\R$. More precisely, we show that, under our hypotheses on the mappings $\\pi_{\\lambda}$, this happens for every $\\lambda \\in J \\setminus E$, where $E \\subset J$ is an exceptional set of dimension $\\dim E \\leq 2 - \\dim B$. We also indicate that this estimate cannot be improved.
Model Identification of Linear Parameter Varying Aircraft Systems
Fujimore, Atsushi; Ljung, Lennart
2007-01-01
This article presents a parameter estimation of continuous-time polytopic models for a linear parameter varying (LPV) system. The prediction error method of linear time invariant (LTI) models is modified for polytopic models. The modified prediction error method is applied to an LPV aircraft system whose varying parameter is the flight velocity and model parameters are the stability and control derivatives (SCDs). In an identification simulation, the polytopic model is more suitable for expre...
Empirically modelled Pc3 activity based on solar wind parameters
Directory of Open Access Journals (Sweden)
T. Raita
2010-09-01
Full Text Available It is known that under certain solar wind (SW)/interplanetary magnetic field (IMF) conditions (e.g. high SW speed, low cone angle) the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters, which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density related parameters, such as the dynamic pressure of the SW, or the standoff distance of the magnetopause work equally well in the model. The disappearance of Pc3s during low-density events can have at least four reasons according to the existing upstream wave theory: 1. Pausing the ion-cyclotron resonance that generates the upstream ultra low frequency waves in the absence of protons, 2. Weakening of the bow shock that implies less efficient reflection, 3. The SW becomes sub-Alfvénic and hence it is not able to sweep back the waves propagating upstream with the Alfvén-speed, and 4. The increase of the standoff distance of the magnetopause (and of the bow shock). Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid latitude Pc3 activity predominantly through
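The multiple-linear-regression branch of the modelling can be sketched with ordinary least squares on a design matrix of SW-based inputs. The coefficients and data below are synthetic and illustrative, not the OMNI/MM100 data from the study:

```python
# Sketch of an empirical multiple-linear-regression model of pulsation
# activity from solar wind inputs; all data and coefficients are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 500
speed = rng.uniform(300, 700, n)        # SW speed, km/s
density = rng.uniform(0.5, 20, n)       # SW proton density, cm^-3
cone = rng.uniform(0, 90, n)            # IMF cone angle, degrees

# synthetic "Pc3 activity": favoured by high speed, high density, low cone angle
activity = 0.01 * speed + 0.3 * np.log(density) - 0.02 * cone \
           + rng.normal(0, 0.2, n)

# design matrix with an intercept and a log-density regressor
A = np.column_stack([np.ones(n), speed, np.log(density), cone])
coef, *_ = np.linalg.lstsq(A, activity, rcond=None)
predicted = A @ coef
r2 = 1 - ((activity - predicted) ** 2).sum() / ((activity - activity.mean()) ** 2).sum()
```

Comparing the explained variance of input sets with and without the density regressor is a simple way to reproduce the paper's central point that density carries predictive information beyond speed and cone angle.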
Preference Mining Using Neighborhood Rough Set Model on Two Universes
Zeng, Kai
2016-01-01
Preference mining plays an important role in e-commerce and video websites for enhancing user satisfaction and loyalty. Some classical methods are not applicable to the cold-start problem, when the user or the item is new. In this paper, we propose a new model, called parametric neighborhood rough set on two universes (NRSTU), to describe the user and item data structures. Furthermore, the neighborhood lower approximation operator is used to define the preference rules. Then, we provide the means for recommending items to users by using these rules. Finally, we give an experimental example to show the details of NRSTU-based preference mining for the cold-start problem. The parameters of the model are also discussed. The experimental results show that the proposed method presents an effective solution for preference mining. In particular, NRSTU improves the recommendation accuracy by about 19% compared to the traditional method. PMID:28044074
Verification of the optimum tropospheric parameters setting for the kinematic PPP analysis
Hirata, Y.; Ohta, Y.
2015-12-01
Kinematic GNSS analysis is useful for extracting crustal deformation phenomena on time scales from seconds to one day, such as coseismic and postseismic deformation after a large earthquake. Kinematic GNSS analysis, however, has a fundamental difficulty in separating unknown parameters such as the site coordinates and tropospheric parameters, caused by the strong correlation between them. Thus, we focused on improving the separation precision between the coordinate time series of kinematic PPP and the wet zenith tropospheric delay (WZTD), based on a comprehensive search of the parameter space. We used the GIPSY-OASIS II Ver. 6.3 software for kinematic PPP processing of all GEONET sites on 10 March 2011. We applied 6-hourly nominal WZTD values as a priori information, based on the ECMWF global numerical climate model. For the coordinate time series and tropospheric parameters, we assumed white noise and random walk stochastic processes, respectively. These unknown parameters are very sensitive to the process noise assumed for each stochastic process. Thus, we searched for the optimum values of two parameters: the wet zenith tropospheric parameter (named TROP) and its gradient (named GRAD). We defined the optimum parameters as those which minimized the standard deviation of the coordinate time series. We first checked the spatial distribution of the optimum pairs of TROP and GRAD. Even though the optimum parameters fell within a certain range (TROP: 2×10-8 ~ 6×10-7 (horizontal), 5.5×10-9 ~ 2×10-8 (vertical); GRAD: 2×10-10 ~ 6×10-9 (horizontal), 2×10-10 ~ 1×10-8 (vertical) (unit: km·s-1/2)), they showed large diversity, which suggests strong heterogeneity of the atmospheric state. We also estimated temporal variations of the optimum TROP and GRAD at a specific site. We analyzed the data through 2010 at GEONET station 940098, located in the southernmost part of Kyushu, Japan. The obtained time series of optimum GRAD showed a clear annual variation, and the
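The comprehensive parameter-space search described above amounts to a grid search for the (TROP, GRAD) pair that minimizes the scatter of the coordinate time series. The toy sketch below illustrates that pattern; `run_ppp` is a mock stand-in for a GIPSY-OASIS II processing run, and the grids and numbers are assumptions, not the study's values.

```python
import numpy as np
from itertools import product

# Mock stand-in for a kinematic PPP run: returns a coordinate time series
# whose scatter depends on the assumed process-noise values (TROP, GRAD).
def run_ppp(trop, grad, rng):
    scatter = 1.0 + (np.log10(trop) + 8) ** 2 + (np.log10(grad) + 9) ** 2
    return rng.normal(0.0, scatter, size=288)  # e.g. 5-min epochs over a day

rng = np.random.default_rng(1)
trop_grid = np.logspace(-9, -6, 7)   # candidate TROP process noise
grad_grid = np.logspace(-10, -7, 7)  # candidate GRAD process noise

# Exhaustive search: the optimum pair minimizes the standard deviation
# of the resulting coordinate time series.
best = min(product(trop_grid, grad_grid),
           key=lambda tg: run_ppp(tg[0], tg[1], rng).std())
print("optimum (TROP, GRAD):", best)
```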
Integer Set Compression and Statistical Modeling
DEFF Research Database (Denmark)
Larsson, N. Jesper
2014-01-01
Compression of integer sets and sequences has been extensively studied for settings where elements follow a uniform probability distribution. In addition, methods exist that exploit clustering of elements in order to achieve higher compression performance. In this work, we address the case where...... enumeration of elements may be arbitrary or random, but where statistics is kept in order to estimate probabilities of elements. We present a recursive subset-size encoding method that is able to benefit from statistics, explore the effects of permuting the enumeration order based on element probabilities...
[Calculation of parameters in forest evapotranspiration model].
Wang, Anzhi; Pei, Tiefan
2003-12-01
Forest evapotranspiration is an important component not only of the water balance, but also of the energy balance. Accurate simulation of forest evapotranspiration is in great demand for the development of forest hydrology and forest meteorology, and is also a theoretical basis for the management and utilization of water resources and forest ecosystems. Taking the broadleaved Korean pine forest on Changbai Mountain as an example, this paper constructed a mechanistic model for estimating forest evapotranspiration, based on aerodynamic principles and the energy balance equation. Using data measured by the Routine Meteorological Measurement System and the Open-Path Eddy Covariance Measurement System mounted on a tower in the broadleaved Korean pine forest, the parameters displacement height d, stability function for momentum phi m, and stability function for heat phi h were determined. The displacement height of the study site was 17.8 m, close to the mean canopy height, and functions describing how phi m and phi h vary with the gradient Richardson number Ri were constructed.
EFFECT OF SETTING THE PARAMETERS OF FLAME WEEDER ON WEED CONTROL EFFECTIVENESS
Directory of Open Access Journals (Sweden)
Miroslav Mojžiš
2013-12-01
Unconventional ways of growing plants, in which we return to non-chemical methods of controlling weeds, require new weed control methods. One of the few physical methods that has found wider application in practice is flame weeding with heat burners fuelled by gas (LPG). The practical use of a flame weeder, however, involves a number of factors that positively or negatively affect the effectiveness of weed control. The precise setting of a flame weeder is influenced, for example, by weed species, weed growth stage, weather, and type of crop grown, but also by heat transmission and heat absorption by the plant. The many variables that enter into the process must be controlled to eliminate their negative impacts and achieve the best results in fighting weeds. In this paper, we have focused on identifying these parameters, on field trials that confirm the justification for a precise setting of the parameters, and on recommendations for practice to achieve a higher efficiency of thermal weed control.
Reconstructing parameters of spreading models from partial observations
Lokhov, Andrey Y
2016-01-01
Spreading processes are often modelled as stochastic dynamics occurring on top of a given network, with edge weights corresponding to the transmission probabilities. Knowledge of accurate transmission probabilities is essential for prediction, optimization, and control of diffusion dynamics. Unfortunately, in most cases the transmission rates are unknown and need to be reconstructed from the spreading data. Moreover, in realistic settings it is impossible to monitor the state of each node at every time, and thus the data are highly incomplete. We introduce an efficient dynamic message-passing algorithm, which is able to reconstruct parameters of the spreading model given only partial information on the activation times of nodes in the network. The method is generalizable to a large class of dynamic models, as well as to the case of temporal graphs.
Transfer function modeling of damping mechanisms in distributed parameter models
Slater, J. C.; Inman, D. J.
1994-01-01
This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.
Whale, B E
2012-01-01
The abstract boundary uses sets of curves with the bounded parameter property (b.p.p.) to classify the elements of the abstract boundary into regular points, singular points, points at infinity, and so on. Building on the material of Part One of this two-part series, we show how this classification changes when the set of curves satisfying the b.p.p. changes.
On the modeling of internal parameters in hyperelastic biological materials
Giantesio, Giulia
2016-01-01
This paper concerns the behavior of hyperelastic energies depending on an internal parameter. First, the situation in which the internal parameter is a function of the gradient of the deformation is presented. Second, two models where the parameter describes the activation of skeletal muscle tissue are analyzed. In those models, the activation parameter depends on the strain and it is important to consider the derivative of the parameter with respect to the strain in order to capture the proper behavior of the stress.
Determining extreme parameter correlation in ground water models
DEFF Research Database (Denmark)
Hill, Mary Cole; Østerby, Ole
2003-01-01
In ground water flow system models with hydraulic-head observations but without significant imposed or observed flows, extreme parameter correlation generally exists. As a result, hydraulic conductivity and recharge parameters cannot be uniquely estimated. In complicated problems, such correlation...... correlation coefficients, but it required sensitivities that were one to two significant digits less accurate than those that required using parameter correlation coefficients; and (3) both the SVD and parameter correlation coefficients identified extremely correlated parameters better when the parameters...
Model comparisons and genetic and environmental parameter ...
African Journals Online (AJOL)
arc
South African Journal of Animal Science 2005, 35 (1) ... Genetic and environmental parameters were estimated for pre- and post-weaning average daily gain ..... and BWT (and medium maternal genetic correlations) indicates that these traits ...
NEW DOCTORAL DEGREE Parameter estimation problem in the Weibull model
Marković, Darija
2009-01-01
In this dissertation we consider the problem of the existence of best parameters in the Weibull model, one of the most widely used statistical models in reliability theory and life data theory. Particular attention is given to a 3-parameter Weibull model. We have listed some of the many applications of this model. We have described some of the classical methods for estimating parameters of the Weibull model, two graphical methods (Weibull probability plot and hazard plot), and two analyt...
RELATIONS BETWEEN PARAMETERS IN THE LINEAR RESERVOIR AND KINEMATIC WAVE MODELS
Institute of Scientific and Technical Information of China (English)
[No author listed]
2006-01-01
The objective of this study is to explore the relation between the sets of parameters involved in the kinematic wave model and the linear reservoir model in runoff analyses. In the study, an approximate function for recession curves, obtained from the results of indoor rainfall-runoff experiments, is used to characterize the parameters involved in the two models and to explore the relations between the corresponding sets of parameters. Additionally, the observed hydrographs resulting from different rainfall durations and intensities on both bare and weed-covered hillsides, obtained from the rainfall experiments, are simulated with the two models. The simulation results indicate the applicability of the conversion relation between the two sets of parameters, which was determined based on the recession constants. This provides a better understanding of the link between the two models.
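The recession constant that links the two parameter sets can be illustrated with the linear reservoir alone: a reservoir with storage S = K·Q drains as Q(t) = Q0·exp(-t/K), so K is recoverable from the slope of ln(Q) on a recession limb. The sketch below uses an assumed K value, not data from the experiments.

```python
import numpy as np

# Synthetic recession limb of a hydrograph from a linear reservoir.
K_true = 3.5                      # h, storage coefficient (assumed value)
t = np.arange(0.0, 12.0, 0.5)     # h
Q = 10.0 * np.exp(-t / K_true)    # discharge during recession

# ln(Q) is linear in t with slope -1/K, so a straight-line fit
# recovers the recession constant.
slope, _ = np.polyfit(t, np.log(Q), 1)
K_est = -1.0 / slope
print(f"estimated recession constant K = {K_est:.2f} h")
```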
Gray Cerebrovascular Image Skeleton Extraction Algorithm Using Level Set Model
Directory of Open Access Journals (Sweden)
Jian Wu
2010-06-01
The ambiguity and complexity of medical cerebrovascular images make skeletons obtained by conventional skeleton algorithms discontinuous, sensitive at weak edges, poorly robust, and prone to burrs. This paper proposes a cerebrovascular image skeleton extraction algorithm based on the Level Set model, using a Euclidean distance field and improved gradient vector flow to obtain two different energy functions. The first energy function controls the acquisition of the topological nodes at which the skeleton curve begins. The second energy function controls the extraction of the skeleton surface. This algorithm avoids the locating and classifying of the skeleton connection points that guide the skeleton extraction. Because all of its parameters are obtained by analysis and reasoning, no manual intervention is needed.
Steam turbine governor modeling and parameters testing for power system simulation
Institute of Scientific and Technical Information of China (English)
Ying LI; Chufeng PENG; Zenghui YANG
2009-01-01
The theoretical modeling, parameter testing, and model correction for a steam turbine (ST) governor are discussed. A set of ST governor system models for power system simulation is created based on this research. A power system simulation of an actual power grid accident is conducted using this new model, and the comparison between the simulation and the actual data shows that the results are satisfactory.
PARAMETER SETS FOR 10 TEV AND 100 TEV MUON COLLIDERS, AND THEIR STUDY AT THE HEMC'99 WORKSHOP
Energy Technology Data Exchange (ETDEWEB)
KING,B.J.
2000-05-05
A focal point for the HEMC'99 workshop was the evaluation of straw-man parameter sets for the acceleration and collider rings of muon colliders at center of mass energies of 10 TeV and 100 TeV. These self-consistent parameter sets are presented and discussed. The methods and assumptions used in their generation are described and motivations are given for the specific choices of parameter values. The assessment of the parameter sets during the workshop is then reviewed and the implications for the feasibility of many-TeV muon colliders are evaluated. Finally, a preview is given of plans for iterating on the parameter sets and, more generally, for future feasibility studies on many-TeV muon colliders.
Sensitivity of a Shallow-Water Model to Parameters
Kazantsev, Eugene
2011-01-01
An adjoint-based technique is applied to a shallow water model in order to estimate the influence of the model's parameters on the solution. Among the parameters, the bottom topography, initial conditions, boundary conditions on rigid boundaries, viscosity coefficients, the Coriolis parameter, and the amplitude of the wind stress are considered. Their influence is analyzed from three points of view: 1. flexibility of the model with respect to a parameter, which is related to the lowest value of the cost function that can be obtained in the data assimilation experiment that controls this parameter; 2. possibility of improving the model by controlling the parameter, i.e. whether the solution with the optimal parameter remains close to observations after the end of control; 3. sensitivity of the model solution to the parameter in the classical sense. That implies the analysis of the sensitivity estimates and their comparison with each other and with the local Lyapunov exponents that characterize the sensitivity of the mode...
Fatigue reliability based on residual strength model with hybrid uncertain parameters
Institute of Scientific and Technical Information of China (English)
Jun Wang; Zhi-Ping Qiu
2012-01-01
The aim of this paper is to evaluate fatigue reliability with hybrid uncertain parameters based on a residual strength model. By solving the non-probabilistic set-based reliability problem and analyzing the reliability with randomness, the fatigue reliability with hybrid parameters can be obtained. The presented hybrid model can adequately consider all uncertainties affecting the fatigue reliability with hybrid uncertain parameters. A comparison among the presented hybrid model, the non-probabilistic set-theoretic model, and the conventional random model is made through two typical numerical examples. The results show that the presented hybrid model, which can ensure structural security, is effective and practical.
Probing neutrino parameters with a Two-Baseline Beta-beam set-up
Agarwalla, S K; Raychaudhuri, A
2008-01-01
We discuss the prospects of exploring the neutrino mass parameters with a CERN-based Beta-beam experiment using two different detectors at two different baselines. The proposed set-up consists of a 50 kton iron calorimeter (ICAL) at a baseline of around 7150 km, which is roughly the magic baseline, e.g., ICAL@INO, and a 50 kton Totally Active Scintillator Detector at a distance of 730 km, e.g., at Gran Sasso. We take 8B and 8Li source ions with a boost factor $\gamma$ of 650 for the magic baseline, while for the closer detector we consider 18Ne and 6He ions with a range of Lorentz boosts. We find that the locations of the two detectors complement each other, leading to exceptionally high sensitivity. With $\gamma=650$ for 8B/8Li and $\gamma=575$ for 18Ne/6He and total luminosity corresponding to $5\times (1.1\times 10^{19})$ and $5\times (2.9\times 10^{19})$ useful ion decays in neutrino and antineutrino modes respectively, we find that the two-detector set-up can probe maximal CP violation and establish the ne...
Estimation of shape model parameters for 3D surfaces
DEFF Research Database (Denmark)
Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen;
2008-01-01
Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D s...
Franz, K.; Hogue, T.; Barco, J.
2007-12-01
Identification of appropriate parameter sets for simulation of streamflow in ungauged basins has become a significant challenge for both operational and research hydrologists. This is especially difficult in the case of conceptual models, where model parameters typically must be "calibrated" or adjusted to match streamflow conditions in specific systems (i.e. some of the parameters are not directly observable). This paper addresses the performance and uncertainty associated with transferring conceptual rainfall-runoff model parameters between basins within large-scale ecoregions. We use the National Weather Service's (NWS) operational hydrologic model, the SACramento Soil Moisture Accounting (SAC-SMA) model. A Multi-Step Automatic Calibration Scheme (MACS), using the Shuffled Complex Evolution (SCE) algorithm, is used to optimize SAC-SMA parameters for a group of watersheds with extensive hydrologic records from the Model Parameter Estimation Experiment (MOPEX) database. We then explore "hydroclimatic" relationships between basins to facilitate regionalization of parameters for an established ecoregion in the southeastern United States. The impact of regionalized parameters is evaluated via standard model performance statistics as well as through generation of hindcasts and probabilistic verification procedures to evaluate streamflow forecast skill. Preliminary results show climatology ("climate neighbor") to be a better indicator of transferability than physical similarities or proximity ("nearest neighbor"). The mean and median of all the parameters within the ecoregion are the poorest choice for the ungauged basin. The choice of regionalized parameter set affected the skill of the ensemble streamflow hindcasts; however, all parameter sets show little skill in forecasts after five weeks (i.e. climatology is as good an indicator of future streamflows). In addition, the optimum parameter set changed seasonally, with the "nearest neighbor" showing the highest skill in the
Joe H. Scott; Robert E. Burgan
2005-01-01
This report describes a new set of standard fire behavior fuel models for use with Rothermel's surface fire spread model and the relationship of the new set to the original set of 13 fire behavior fuel models. To assist with transition to using the new fuel models, a fuel model selection guide, fuel model crosswalk, and set of fuel model photos are provided.
Compositional modelling of distributed-parameter systems
Maschke, Bernhard; Schaft, van der Arjan; Lamnabhi-Lagarrigue, F.; Loría, A.; Panteley, E.
2005-01-01
The Hamiltonian formulation of distributed-parameter systems has been a challenging research area for quite some time. (A nice introduction, especially with respect to systems stemming from fluid dynamics, can be found in [26], where a historical account is also provided.) The identification of the
Parameter sensitivity in satellite-gravity-constrained geothermal modelling
Pastorutti, Alberto; Braitenberg, Carla
2017-04-01
The use of satellite gravity data in thermal structure estimates requires identifying the factors that affect the gravity field and are related to the thermal characteristics of the lithosphere. We propose a set of forward-modelled synthetics, investigating the model response in terms of heat flow, temperature, and gravity effect at satellite altitude. The sensitivity analysis concerns the parameters involved, such as heat production, thermal conductivity, density, and their temperature dependence. We discuss the effect of the horizontal smoothing due to heat conduction, the superposition of the bulk thermal effect of near-surface processes (e.g. advection in ground water and permeable faults, paleoclimatic effects, blanketing by sediments), and the out-of-equilibrium conditions due to tectonic transients. All of these have the potential to distort gravity-derived estimates. We find that the temperature-conductivity relationship has a small effect, relative to other parameter uncertainties, on the modelled temperature depth variation, surface heat flow, and thermal lithosphere thickness. We conclude that global gravity is useful for geothermal studies.
Parameter Estimation and Experimental Design in Groundwater Modeling
Institute of Scientific and Technical Information of China (English)
SUN Ne-zheng
2004-01-01
This paper reviews the latest developments on parameter estimation and experimental design in the field of groundwater modeling. Special considerations are given when the structure of the identified parameter is complex and unknown. A new methodology for constructing useful groundwater models is described, which is based on the quantitative relationships among the complexity of model structure, the identifiability of parameter, the sufficiency of data, and the reliability of model application.
Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz
2016-04-01
Water for agriculture is strongly limited in arid and semi-arid regions and is often of low quality in terms of salinity. The application of saline waters for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. to wash out salts by additional irrigation. Dynamic simulation models are helpful tools for calculating the root zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on structural and parameter uncertainty in simulations of the water and salt balance under saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze the structural and parameter uncertainty using the Generalized Likelihood Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake), which results in a broad range of different model structures. The simulations were evaluated against soil water and salinity content observations. The posterior distribution of the GLUE analysis gives behavioral parameter sets and reveals uncertainty intervals for parameter uncertainty. Throughout all of the model sets, most parameters accounting for the soil water balance show low uncertainty; only one or two out of five to six parameters in each model set display high uncertainty (e.g. the pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for deep percolation fluxes between the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm), which are more than twice as high for the latter. The model sets also show a high variation in uncertainty intervals for deep percolation, with an interquartile range (IQR) of
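The GLUE procedure used in the study above can be sketched in miniature: Monte Carlo sampling of parameter sets, a likelihood measure against observations, a behavioral threshold, and percentile uncertainty bounds. The model below is a toy stand-in (the study uses AquaCrop, RZWQM, SWAP and Hydrus1D/UNSATCHEM), and all numbers are assumptions.

```python
import numpy as np

def model(theta, t):
    # Toy two-parameter model standing in for a soil-water simulation.
    return theta[0] * np.exp(-theta[1] * t)

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 50)
obs = model((0.4, 0.3), t) + rng.normal(0, 0.01, t.size)  # synthetic observations

# Monte Carlo sampling of parameter sets within assumed prior ranges.
samples = rng.uniform([0.1, 0.05], [0.8, 0.6], size=(5000, 2))
sims = np.array([model(th, t) for th in samples])

# Nash-Sutcliffe efficiency as the informal likelihood measure.
nse = 1 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()

behavioral = samples[nse > 0.9]                  # behavioral parameter sets
lo, hi = np.percentile(behavioral, [2.5, 97.5], axis=0)
print("95% parameter uncertainty intervals:",
      list(zip(lo.round(3), hi.round(3))))
```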
On Model Complexity and Parameter Regionalization for Continental Scale Hydrologic Simulations
Rakovec, O.; Mizukami, N.; Newman, A. J.; Thober, S.; Kumar, R.; Wood, A.; Clark, M. P.; Samaniego, L. E.
2016-12-01
Assessing hydrologic model complexity and performing continental-domain model simulations has become an important objective in contemporary hydrology. We present a large-sample hydrologic modeling study to better understand (1) the benefits of parameter regionalization schemes, (2) the effects of spatially distributed/lumped model structures, and (3) the importance of selected hydrological processes on model performance. Four hydrological/land surface models (mHM, SAC, VIC, Noah-MP) are set up for 500 small to medium-sized unimpaired basins over the contiguous United States for two spatial scales: lumped and 12km grid. We performed model calibration at individual basins with and without parameter regionalization. For parameter regionalization, we use the well-established Multiscale Parameter Regionalization (MPR) technique, with the specific goal of assessing the transferability of model parameters across different time periods (from calibration to validation period), spatial scales (lumped basin scale to distributed) and locations, for different models. Our results reveal that large inter-model differences are dominated by the choice of model specific hydrological processes (in particular snow and soil moisture) over the choice of spatial discretization and/or parameter regionalization schemes. Nevertheless, parameter regionalization is crucial for parameter transferability across scale and to un-gauged locations. Last but not least, we observe that calibration of model parameters cannot always compensate for the choice of model structure.
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
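The MLE-versus-Bayesian contrast in the abstract above can be illustrated on a deliberately simple stand-in (not the authors' decompression model): estimating a single risk parameter p from binary DCS outcomes, with a point estimate from the likelihood maximum and a credible interval from a grid posterior. The counts are hypothetical.

```python
import numpy as np

k, n = 7, 100                      # hypothetical: 7 DCS cases in 100 exposures
p = np.linspace(1e-4, 0.5, 2000)   # parameter grid

# Binomial log-likelihood up to a constant; its argmax is the MLE.
log_like = k * np.log(p) + (n - k) * np.log(1 - p)
p_mle = p[np.argmax(log_like)]

# Bayesian treatment: flat prior, normalized grid posterior, and a
# 95% credible interval for the parameter itself.
post = np.exp(log_like - log_like.max())
post /= post.sum()
cdf = np.cumsum(post)
lo = p[np.searchsorted(cdf, 0.025)]
hi = p[np.searchsorted(cdf, 0.975)]
print(f"MLE p = {p_mle:.3f}, 95% credible interval = [{lo:.3f}, {hi:.3f}]")
```

The credible interval directly answers "with what probability does p lie in this range," which is the kind of statement the abstract argues is more natural than repeated-trial repeatability claims.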
Modeling Multisource-heterogeneous Information Based on Random Set and Fuzzy Set Theory
Institute of Scientific and Technical Information of China (English)
WEN Cheng-lin; XU Xiao-bin
2006-01-01
This paper presents a new idea, termed modeling of multisource-heterogeneous information, to incorporate fuzzy logic methodologies into multisensor-multitarget systems under the framework of random set theory. First, based on strong random sets and weak random sets, a unified form to describe both data (unambiguous information) and fuzzy evidence (uncertain information) is introduced. Second, according to the signatures of the fuzzy evidence, two Bayesian-Markov nonlinear measurement models are proposed to effectively fuse data and fuzzy evidence. Third, by use of the models-based signature-matching scheme, operations on the statistics of fuzzy evidence defined as random sets can be translated into operations on the membership functions of the relative point state variables. These works are the basis for constructing qualitative measurement models and for fusing data and fuzzy evidence.
Modeling Consideration Sets and Brand Choice Using Artificial Neural Networks
B.L.K. Vroomen (Björn); Ph.H.B.F. Franses (Philip Hans); J.E.M. van Nierop
2001-01-01
The concept of consideration sets makes brand choice a two-step process. Households first construct a consideration set, which does not necessarily include all available brands, and conditional on this set they make a final choice. In this paper we put forward a parametric econometric model f
Parameter and Uncertainty Estimation in Groundwater Modelling
DEFF Research Database (Denmark)
Jensen, Jacob Birk
The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions and if these are to be made on solid grounds, the uncertainty attached to model results must...... be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models.Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study.The following two chapters concern calibration...... and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology...
Meta-analysis of choice set generation effects on route choice model estimates and predictions
DEFF Research Database (Denmark)
Prato, Carlo Giacomo
2012-01-01
Large scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user assignment equilibrium problem help modelers in enhancing the route choice behavior...... modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgments. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for an effective route choice set generation....... Initially, path generation techniques are implemented within a synthetic network to generate possible subjective choice sets considered by travelers. Next, ‘true model estimates’ and ‘postulated predicted routes’ are assumed from the simulation of a route choice model. Then, objective choice sets...
An automatic and effective parameter optimization method for model tuning
Directory of Open Access Journals (Sweden)
T. Zhang
2015-11-01
Simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
Ternary interaction parameters in calphad solution models
Energy Technology Data Exchange (ETDEWEB)
Eleno, Luiz T.F., E-mail: luizeleno@usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Instituto de Fisica; Schön, Claudio G., E-mail: schoen@usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Computational Materials Science Laboratory. Department of Metallurgical and Materials Engineering
2014-07-01
For random, diluted, multicomponent solutions, the excess chemical potentials can be expanded in power series of the composition, with coefficients that are pressure- and temperature-dependent. For a binary system, this approach is equivalent to using polynomial truncated expansions, such as the Redlich-Kister series for describing integral thermodynamic quantities. For ternary systems, an equivalent expansion of the excess chemical potentials clearly justifies the inclusion of ternary interaction parameters, which arise naturally in the form of correction terms in higher-order power expansions. To demonstrate this, we carry out truncated polynomial expansions of the excess chemical potential up to the sixth power of the composition variables. (author)
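The binary and ternary expansions described above can be written compactly; a standard CALPHAD form (symbols follow common usage and are not necessarily the authors' notation) is:

```latex
G^{\mathrm{ex}} = \sum_{i<j} x_i x_j \sum_{k=0}^{n} {}^{k}L_{ij}\,(x_i - x_j)^k
\;+\; x_A x_B x_C \, L_{ABC}
```

The ${}^{k}L_{ij}$ are the pressure- and temperature-dependent binary Redlich-Kister coefficients, and $L_{ABC}$ is the ternary interaction parameter that arises as a correction term in the higher-order power expansion.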
Exceptional sensitivity to neutrino parameters with a two-baseline Beta-beam set-up
Energy Technology Data Exchange (ETDEWEB)
Agarwalla, Sanjib Kumar [Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019 (India); Department of Physics, University of Calcutta, 92 Acharya Prafulla Chandra Road, Kolkata 700009 (India)], E-mail: sanjib@hri.res.in; Choubey, Sandhya [Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019 (India)], E-mail: sandhya@hri.res.in; Raychaudhuri, Amitava [Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019 (India); Department of Physics, University of Calcutta, 92 Acharya Prafulla Chandra Road, Kolkata 700009 (India)], E-mail: raychaud@hri.res.in
2008-12-11
We examine the reach of a Beta-beam experiment with two detectors at carefully chosen baselines for exploring neutrino mass parameters. Locating the source at CERN, the two detectors and baselines are: (a) a 50 kton iron calorimeter (ICAL) at a baseline of around 7150 km, which is roughly the magic baseline, e.g., ICAL-INO, and (b) a 50 kton Totally Active Scintillator Detector at a distance of 730 km, e.g., at Gran Sasso. We choose ⁸B and ⁸Li source ions with a boost factor γ of 650 for the magic baseline, while for the closer detector we consider ¹⁸Ne and ⁶He ions with a range of Lorentz boosts. We find that the locations of the two detectors complement each other, leading to an exceptionally high sensitivity. With γ = 650 for ⁸B/⁸Li and γ = 575 for ¹⁸Ne/⁶He and total luminosity corresponding to 5×(1.1×10¹⁸) and 5×(2.9×10¹⁸) useful ion decays in neutrino and antineutrino modes respectively, we find that the two-detector set-up can probe maximal CP violation and establish the neutrino mass ordering if sin²2θ₁₃ is 1.4×10⁻⁴ and 2.7×10⁻⁴, respectively, or more. The sensitivity reach for sin²2θ₁₃ itself is 5.5×10⁻⁴. With a factor of 10 higher luminosity, the corresponding sin²2θ₁₃ reach of this set-up would be 1.8×10⁻⁵, 4.6×10⁻⁵ and 5.3×10⁻⁵ respectively for the above three performance indicators. CP violation can be discovered for 64% of the possible δ_CP values for sin²2θ₁₃ ≥ 10⁻³ (≥ 8×10⁻⁵) for the standard luminosity (10 times enhanced luminosity). Comparable physics performance can be achieved in a set-up where data from CERN to INO-ICAL is combined with that from CERN to the Boulby mine in the United Kingdom, a baseline of 1050 km.
Directory of Open Access Journals (Sweden)
Filip Górski
2013-09-01
The paper presents the results of an experimental study, part of research on additive manufacturing using thermoplastics as a build material, namely Fused Deposition Modelling (FDM). The aim of the study was to identify the relation between a basic parameter of the FDM process – model orientation during manufacturing – and the dimensional accuracy and repeatability of the obtained products. A set of samples was prepared: they were manufactured with variable process parameters and measured using a 3D scanner. Significant differences in accuracy were observed between products of the same geometry manufactured with different sets of process parameters.
An evolutionary computing approach for parameter estimation investigation of a model for cholera.
Akman, Olcay; Schaefer, Elsa
2015-01-01
We consider the problem of using time-series data to inform a corresponding deterministic model and introduce the concept of genetic algorithms (GA) as a tool for parameter estimation, providing instructions for an implementation of the method that does not require access to special toolboxes or software. We give as an example a model for cholera, a disease for which there is much mechanistic uncertainty in the literature. We use GA to find parameter sets using available time-series data from the introduction of cholera in Haiti and we discuss the value of comparing multiple parameter sets with similar performances in describing the data.
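The GA-based fitting loop described above can be sketched without special toolboxes. The snippet below fits a logistic growth model to synthetic data as an illustrative stand-in for the cholera model; the population size, parameter bounds, and mutation rate are assumed values, not those of the paper:

```python
import random

# Synthetic "observed" time series from a logistic model with known parameters
# (r_true = 1.2, K_true = 20.0), integrated with a simple Euler scheme.
def simulate(r, K, y0=1.0, steps=50, dt=0.1):
    y, out = y0, []
    for _ in range(steps):
        y += dt * r * y * (1 - y / K)
        out.append(y)
    return out

def sse(params, data):
    # Sum of squared errors between simulation and data.
    r, K = params
    return sum((a - b) ** 2 for a, b in zip(simulate(r, K), data))

def genetic_search(data, pop_size=60, generations=80, seed=0):
    rng = random.Random(seed)
    # Initialize the population uniformly within plausible bounds.
    pop = [(rng.uniform(0.1, 3.0), rng.uniform(5.0, 50.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: sse(p, data))
        elite = pop[: pop_size // 4]              # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()                       # blend crossover
            child = [w * a[i] + (1 - w) * b[i] for i in range(2)]
            if rng.random() < 0.3:                 # multiplicative Gaussian mutation
                j = rng.randrange(2)
                child[j] *= 1 + rng.gauss(0, 0.1)
            children.append(tuple(child))
        pop = elite + children
    return min(pop, key=lambda p: sse(p, data))

data = simulate(1.2, 20.0)   # noise-free "observations" for clarity
best = genetic_search(data)
```

Because several distinct parameter sets may describe the data comparably well, it is worth inspecting the final population rather than only the single best individual, in line with the paper's point about comparing multiple well-performing parameter sets.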
Exceptional Sensitivity to Neutrino Parameters with a Two Baseline Beta-Beam Set-up
Agarwalla, Sanjib Kumar; Raychaudhuri, Amitava
2008-01-01
We examine the reach of a Beta-beam experiment with two detectors at carefully chosen baselines for exploring neutrino mass parameters. Locating the source at CERN, the two detectors and baselines are: (a) a 50 kton iron calorimeter (ICAL) at a baseline of around 7150 km which is roughly the magic baseline, e.g., ICAL@INO, and (b) a 50 kton Totally Active Scintillator Detector at a distance of 730 km, e.g., at Gran Sasso. We choose 8B/8Li source ions with a boost factor γ of 650 for the magic baseline while for the closer detector we consider 18Ne/6He ions with a range of Lorentz boosts. We find that the locations of the two detectors complement each other, leading to an exceptionally high sensitivity. With γ=650 for 8B/8Li and γ=575 for 18Ne/6He and total luminosity corresponding to 5×(1.1×10^18) and 5×(2.9×10^18) useful ion decays in neutrino and antineutrino modes respectively, we find that our two-detector set-up can probe maximal CP violation and establish the neu...
Hematologic parameters in raptor species in a rehabilitation setting before release.
Black, Peter A; McRuer, David L; Horne, Leigh-Ann
2011-09-01
To be considered for release, raptors undergoing rehabilitation must have recovered from their initial injury in addition to being clinically healthy. For that purpose, a good understanding of reference hematologic values is important in determining release criteria for raptors in a rehabilitation setting. In this study, retrospective data were tabulated from clinically normal birds within 10 days of release from a rehabilitation facility. Hematologic values were compiled from 71 red-tailed hawks (Buteo jamaicensis), 54 Eastern screech owls (Megascops asio), 31 Cooper's hawks (Accipiter cooperii), 30 great-horned owls (Bubo virginianus), 28 barred owls (Strix varia), 16 bald eagles (Haliaeetus leucocephalus), and 12 broad-winged hawks (Buteo platypterus). Parameters collected included a white blood cell count and differential, hematocrit, and total protein concentration. Comparisons were made among species and among previously published reports of reference hematologic values in free-ranging birds or permanently captive birds. This is the first published report of reference values for Eastern screech owls, barred owls, and broad-winged hawks; and the first prerelease reference values for all species undergoing rehabilitation. These data can be used as a reference when developing release criteria for rehabilitated raptors.
A Decomposition Model for HPLC-DAD Data Set and Its Solution by Particle Swarm Optimization
Directory of Open Access Journals (Sweden)
Lizhi Cui
2014-01-01
This paper proposes a separation method, based on the model of Generalized Reference Curve Measurement and the algorithm of Particle Swarm Optimization (GRCM-PSO), for High Performance Liquid Chromatography with Diode Array Detection (HPLC-DAD) data sets. Firstly, initial parameters are generated to construct reference curves for the chromatogram peaks of the compounds based on their physical principles. Then, a General Reference Curve Measurement (GRCM) model is designed to transform these parameters to scalar values, which indicate the fitness for all parameters. Thirdly, rough solutions are found by searching an individual target for every parameter, and reinitialization is executed only around these rough solutions. Then, the Particle Swarm Optimization (PSO) algorithm is adopted to obtain the optimal parameters by minimizing the fitness of these new parameters given by the GRCM model. Finally, spectra for the compounds are estimated based on the optimal parameters and the HPLC-DAD data set. Through simulations and experiments, the following conclusions are drawn: (1) the GRCM-PSO method can separate the chromatogram peaks and spectra from the HPLC-DAD data set without knowing the number of compounds in advance, even when severe overlap and white noise exist; (2) the GRCM-PSO method is able to handle real HPLC-DAD data sets.
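A minimal PSO of the kind used in the GRCM-PSO method can be sketched as follows. Here it recovers the center and width of a single synthetic Gaussian elution peak; the inertia and acceleration coefficients are assumed values, and the real GRCM reference curves are more elaborate than this single-Gaussian stand-in:

```python
import math, random

# Synthetic single-compound "chromatogram": a Gaussian elution peak.
def peak(t, center, width):
    return math.exp(-((t - center) ** 2) / (2 * width ** 2))

ts = [i * 0.1 for i in range(100)]
observed = [peak(t, 4.0, 0.8) for t in ts]   # true center 4.0, width 0.8

def fitness(params):
    # Sum of squared deviations from the observed curve (to be minimized).
    c, w = params
    return sum((peak(t, c, w) - y) ** 2 for t, y in zip(ts, observed))

def pso(n=30, iters=100, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(0.0, 10.0), rng.uniform(0.1, 3.0)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]                      # personal bests
    gbest = min(pbest, key=fitness)                  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

c, w = pso()
```

Since the peak depends on the width only through its square, the swarm may converge to a negative-width mirror solution; in practice the fit quality, not the sign, is what matters.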
Modeling and parameter estimation for hydraulic system of excavator's arm
Institute of Scientific and Technical Information of China (English)
HE Qing-hua; HAO Peng; ZHANG Da-qing
2008-01-01
A retrofitted electro-hydraulic proportional system for a hydraulic excavator is introduced first. According to the principle and characteristics of the load-independent flow distribution (LUDV) system, taking the boom hydraulic system as an example and ignoring the leakage of the hydraulic cylinder and the mass of oil in it, a force equilibrium equation and a continuity equation of the hydraulic cylinder were set up. Based on the flow equation of the electro-hydraulic proportional valve, the pressure passing through the valve and the pressure difference were tested and analyzed. The results show that the pressure difference does not change with load, and it approximates 2.0 MPa. Then, assuming the flow across the valve is directly proportional to spool displacement and is not influenced by load, a simplified model of the electro-hydraulic system is put forward. At the same time, by analyzing the structure and load-bearing of the boom instrument, and combining the moment equivalence equation of the manipulator with the law of rotation, estimation methods and equations for parameters such as the equivalent mass and bearing force of the hydraulic cylinder were set up. Finally, the step response of the flow of the boom cylinder was tested when the electro-hydraulic proportional valve was controlled by a step current. Based on the experimental curve, the flow gain coefficient of the valve is identified as 2.825×10⁻⁴ m³/(s·A) and the model is verified.
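Under the simplified proportional model, the final identification step reduces to a one-parameter least-squares fit of the flow gain. A sketch with synthetic stand-in measurements (the current levels and flow samples below are not the paper's data):

```python
# Identifying the valve flow gain K_q from steady-state flow measurements at
# several spool-current levels, assuming the simplified model Q = K_q * i
# (flow proportional to current, independent of load).
currents = [0.5, 1.0, 1.5, 2.0]                 # A, applied step currents
flows = [1.41e-4, 2.83e-4, 4.24e-4, 5.65e-4]    # m^3/s, measured steady-state flows

# Least-squares slope through the origin: K_q = sum(i*Q) / sum(i^2)
K_q = sum(i * q for i, q in zip(currents, flows)) / sum(i * i for i in currents)
```

Fitting through the origin reflects the model assumption that zero current gives zero flow; with the stand-in data the slope comes out close to the identified value of 2.825×10⁻⁴ m³/(s·A).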
Directory of Open Access Journals (Sweden)
Jonathan R Karr
2015-05-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Hierarchical set of models to estimate soil thermal diffusivity
Arkhangelskaya, Tatiana; Lukyashchenko, Ksenia
2016-04-01
Soil thermal properties significantly affect the land-atmosphere heat exchange rates. Intra-soil heat fluxes depend both on temperature gradients and soil thermal conductivity. Soil temperature changes due to energy fluxes are determined by soil specific heat. Thermal diffusivity is equal to thermal conductivity divided by volumetric specific heat and reflects both the soil's ability to transfer heat and its ability to change temperature when heat is supplied or withdrawn. The higher the soil thermal diffusivity, the thicker the soil/ground layer in which diurnal and seasonal temperature fluctuations are registered and the smaller the temperature fluctuations at the soil surface. Thermal diffusivity vs. moisture dependencies for loams, sands and clays of the East European Plain were obtained using the unsteady-state method. Thermal diffusivity of different soils differed greatly, and for a given soil it could vary by a factor of 2, 3 or even 5 depending on soil moisture. The shapes of the thermal diffusivity vs. moisture dependencies also differed: peak curves were typical for sandy soils, while sigmoid curves were typical for loamy and especially for compacted soils. The lowest thermal diffusivities and the smallest range of their variability with soil moisture were obtained for clays with high humus content. A hierarchical set of models will be presented, allowing an estimate of soil thermal diffusivity from available data on soil texture, moisture, bulk density and organic carbon. When developing these models, the first step was to parameterize the experimental thermal diffusivity vs. moisture dependencies with a 4-parameter function; the next step was to obtain regression formulas to estimate the function parameters from available data on basic soil properties; the last step was to evaluate the accuracy of the suggested models using independent data on soil thermal diffusivity. The simplest models were based on soil bulk density and organic carbon data and provided different...
GIS-Based Hydrogeological-Parameter Modeling
Institute of Scientific and Technical Information of China (English)
无
2000-01-01
A regression model is proposed to relate the variation of water well depth to topographic properties (area and slope), the variation of hydraulic conductivity, and the vertical decay factor. The implementation of this model in a GIS environment (ARC/INFO), based on known water data and a DEM, is used to estimate the variation of hydraulic conductivity and decay factor of different lithology units in a watershed context.
ESA White paper: Atmospheric modeling: Setting Biomarkers in context
Kaltenegger, L
2008-01-01
Motivation: ESA's goal to detect biomarkers in Earth-like exoplanets in the Habitable Zone requires theoretical groundwork to model the influence of different parameters on the detectable biomarkers. We need to model a wide parameter space (chemical composition, pressure, evolution, interior structure and outgassing, clouds) to generate a grid of models that informs our detection strategy and helps characterize the spectra of the small rocky planets detected.
A Generalized Rough Set Modeling Method for Welding Process
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
Modeling is essential, significant and difficult for the quality and shaping control of the arc welding process. A generalized rough set based modeling method is put forward, and a dynamic predictive model for pulsed gas tungsten arc welding (GTAW) was obtained with this method. The results show that this modeling method can effectively acquire knowledge of the welding process and satisfies real-life applications. In addition, comparisons with a classic rough set model and a back-propagation neural network model are also satisfactory.
Notas, George; Bariotakis, Michail; Kalogrias, Vaios; Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias
2015-01-01
Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the life of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital severe allergic reaction visits. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could probably also be used for the prediction of other environment-related diseases and conditions.
Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model
DEFF Research Database (Denmark)
Åberg, Andreas; Widd, Anders; Abildskov, Jens;
2016-01-01
A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the estimation of the kinetic parameters, which can be time-consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests, or p...
Mirror symmetry for two parameter models, 2
Candelas, Philip; Katz, S; Morrison, Douglas Robert Ogston; Philip Candelas; Anamaria Font; Sheldon Katz; David R Morrison
1994-01-01
We describe in detail the space of the two Kähler parameters of the Calabi-Yau manifold P_4^{(1,1,1,6,9)}[18] by exploiting mirror symmetry. The large complex structure limit of the mirror, which corresponds to the classical large radius limit, is found by studying the monodromy of the periods about the discriminant locus, the boundary of the moduli space corresponding to singular Calabi-Yau manifolds. A symplectic basis of periods is found and the action of the Sp(6,Z) generators of the modular group is determined. From the mirror map we compute the instanton expansion of the Yukawa couplings and the generalized N=2 index, arriving at the numbers of instantons of genus zero and genus one of each degree. We also investigate an SL(2,Z) symmetry that acts on a boundary of the moduli space.
[Determination of Virtual Surgery Mass Point Spring Model Parameters Based on Genetic Algorithms].
Chen, Ying; Hu, Xuyi; Zhu, Qiguang
2015-12-01
The mass point-spring model is one of the models commonly used in virtual surgery. However, its parameters have no clear physical meaning and are hard to set conveniently. We therefore proposed a method based on a genetic algorithm to determine the mass-spring model parameters. Computer-aided tomography (CAT) data were used to determine the mass value of each particle, and the stiffness and damping coefficients were obtained by the genetic algorithm. We used the difference between the reference deformation and the virtual deformation as the fitness function to get an approximately optimal solution for the model parameters. Experimental results showed that this method could obtain an approximately optimal solution of the spring parameters at lower cost and could accurately reproduce the behavior of the actual deformation model.
The IIASA set of energy models: Its design and application
Basile, P. S.; Agnew, M.; Holzl, A.; Kononov, Y.; Papin, A.; Rogner, H. H.; Schrattenholzer, L.
1980-12-01
The models studied include an accounting-framework-type energy demand model, a dynamic linear programming energy supply and conversion system model, an input-output model, a macroeconomic model, and an oil trade gaming model. They are incorporated in an integrated set for long-term, global analyses. This set makes use of a highly iterative process for energy scenario projections and analyses. Each model is quite simple and straightforward in structure; a great deal of human judgement is necessary in applying the set. The models are applied to study two alternative energy scenarios for a fifty-year period. Examples are presented revealing the wealth of information that can be obtained from multimodel techniques. Details are given for several models (equations employed, assumptions made, data used).
CHAMP: Changepoint Detection Using Approximate Model Parameters
2014-06-01
Changepoint positions are modeled as a Markov chain in which the transition probabilities are defined by the time since the last changepoint: p(τ_{i+1} = t | τ_i = s) = g(t − s). The results are experimentally verified using artificially generated data and are compared to those of Fearnhead and Liu. The algorithm takes as input a segment-length distribution parameter α and a maximum number of particles M, and outputs the Viterbi path of changepoint times and models.
Hydrological model parameter dimensionality is a weak measure of prediction uncertainty
Directory of Open Access Journals (Sweden)
S. Pande
2015-04-01
This paper shows that the instability of a hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.
WINKLER'S SINGLE-PARAMETER SUBGRADE MODEL FROM ...
African Journals Online (AJOL)
[3, 9]. However, mainly due to the simplicity of Winkler's model in practical applications and .... this case, the coefficient B takes the dimension of a ... In plane-strain problems, the assumption of ... loaded circular region; s is the radial coordinate.
A Regularized SNPOM for Stable Parameter Estimation of RBF-AR(X) Model.
Zeng, Xiaoyong; Peng, Hui; Zhou, Feng
2017-01-20
Recently, the radial basis function (RBF) network-style coefficient AutoRegressive (with exogenous inputs) [RBF-AR(X)] model identified by the structured nonlinear parameter optimization method (SNPOM) has attracted considerable interest because of its significant performance in nonlinear system modeling. However, this promising technique may occasionally confront the problem that the parameters diverge during the optimization process, a potential issue ignored by most researchers. In this paper, a regularized SNPOM, together with a regularization parameter detection technique, is presented to estimate the parameters of RBF-AR(X) models. This approach first separates the parameters of an RBF-AR(X) model into a linear parameter set and a nonlinear parameter set, and then combines a gradient-based nonlinear optimization algorithm for estimating the nonlinear parameters with the regularized least squares method for estimating the linear parameters. Several examples demonstrate that the proposed approach is effective in coping with potential instability in the parameter search process, and may also yield better or similar multistep forecasting accuracy and better robustness than the previous method.
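The linear/nonlinear split can be illustrated on a toy separable model: for each candidate value of the nonlinear parameter, the linear weights are solved in closed form by ridge-regularized least squares, and the nonlinear parameter is then chosen by an outer search. This mirrors the structure of the regularized SNPOM, not its actual optimizer; the basis centers, λ, and search grid are assumptions for the demo:

```python
import math

# Toy separable model: y(x) = sum_j a_j * exp(-beta * (x - c_j)^2).
# The a_j are linear parameters; beta is the nonlinear parameter.
centers = [0.0, 1.0, 2.0]

def design(xs, beta):
    # Design matrix of RBF basis functions evaluated at xs.
    return [[math.exp(-beta * (x - c) ** 2) for c in centers] for x in xs]

def ridge_fit(X, y, lam=1e-3):
    # Solve the regularized normal equations (X'X + lam*I) a = X'y
    # by Gaussian elimination (the system is small and SPD).
    n = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):                       # forward elimination
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    a = [0.0] * n                            # back substitution
    for i in range(n - 1, -1, -1):
        a[i] = (b[i] - sum(A[i][k] * a[k] for k in range(i + 1, n))) / A[i][i]
    return a

def sse(xs, y, beta):
    # Residual after the ridge-optimal linear weights are substituted in.
    X = design(xs, beta)
    a = ridge_fit(X, y)
    return sum((sum(ai * xk for ai, xk in zip(a, row)) - yk) ** 2
               for row, yk in zip(X, y))

xs = [i * 0.05 for i in range(60)]
y = [2.0 * math.exp(-1.5 * (x - 1.0) ** 2) for x in xs]   # true beta = 1.5
best_beta = min((b * 0.1 for b in range(1, 40)), key=lambda b: sse(xs, y, b))
```

The outer search here is a coarse grid for clarity; a gradient-based step, as in the SNPOM, would replace it in a serious implementation.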
Improved Methodology for Parameter Inference in Nonlinear, Hydrologic Regression Models
Bates, Bryson C.
1992-01-01
A new method is developed for the construction of reliable marginal confidence intervals and joint confidence regions for the parameters of nonlinear, hydrologic regression models. A parameter power transformation is combined with measures of the asymptotic bias and asymptotic skewness of maximum likelihood estimators to determine the transformation constants which cause the bias or skewness to vanish. These optimized constants are used to construct confidence intervals and regions for the transformed model parameters using linear regression theory. The resulting confidence intervals and regions can be easily mapped into the original parameter space to give close approximations to likelihood method confidence intervals and regions for the model parameters. Unlike many other approaches to parameter transformation, the procedure does not use a grid search to find the optimal transformation constants. An example involving the fitting of the Michaelis-Menten model to velocity-discharge data from an Australian gauging station is used to illustrate the usefulness of the methodology.
Sunbuloglu, Emin; Bozdag, Ergun; Toprak, Tuncer; Islak, Civan
2013-01-01
This study is aimed at establishing a method of experimental parameter estimation for a large-deformation nonlinear viscoelastic continuous fibre-reinforced composite material model. Specifically, arterial tissue was investigated during the experimental research and parameter estimation studies, due to the medical, scientific and socio-economic importance of soft tissue research. Using analytical formulations for specimens under combined inflation/extension/torsion of thick-walled cylindrical tubes, in vitro experiments were carried out with fresh sheep arterial segments, and parameter estimation procedures were carried out on the experimental data. Model restrictions were pointed out using outcomes from the parameter estimation. Needs for further studies are discussed.
A simulation of water pollution model parameter estimation
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
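The batch least-squares step can be illustrated with a simplified one-dimensional instantaneous-release advection-diffusion model. The release mass, sensor grid, and noise level below are assumed values, and an exhaustive grid search stands in for the batch processor:

```python
import math, random

# Instantaneous point release in 1-D advection-diffusion:
#   C(x, t) = M / sqrt(4*pi*D*t) * exp(-(x - u*t)^2 / (4*D*t))
# D (diffusivity) and u (advection speed) are the parameters to recover.
M, D_true, u_true = 10.0, 0.5, 1.0

def conc(x, t, D, u):
    return M / math.sqrt(4 * math.pi * D * t) * math.exp(-(x - u * t) ** 2 / (4 * D * t))

# Simulated remote-sensed data: model output plus Gaussian sensor noise,
# sampled on a grid of positions at three observation times.
rng = random.Random(42)
samples = [(x * 0.5, t, conc(x * 0.5, t, D_true, u_true) + rng.gauss(0, 0.01))
           for x in range(20) for t in (1.0, 2.0, 3.0)]

def sse(D, u):
    return sum((conc(x, t, D, u) - c) ** 2 for x, t, c in samples)

# Batch least squares by exhaustive search over a coarse parameter grid.
best = min(((d * 0.05, v * 0.05) for d in range(2, 40) for v in range(2, 40)),
           key=lambda p: sse(*p))
```

Varying the noise level, grid spacing, and number of samples in this sketch reproduces, in miniature, the paper's study of how resolution and sensor layout affect the accuracy of the parameter estimates.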
Enterprise Projects Set Risk Element Transmission Chaotic Genetic Model
Directory of Open Access Journals (Sweden)
Cunbin Li
2012-08-01
In order to study the risk transfer process of project sets and improve risk management efficiency in project management, chaos theory and genetic algorithms are combined to put forward an enterprise project-set risk element transmission chaotic genetic model. Using a mixture of logistic and Chebyshev chaotic maps, a hybrid chaotic mapping system is constructed. The steps of adopting the hybrid chaotic map for the genetic operations include project-set initialization, calculation of fitness, selection, crossover and mutation operators, fitness adjustment and condition judgment. The results show that the model simulates the enterprise project-set risk transmission process well, and it also provides a basis for enterprise managers to make decisions.
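A hybrid chaotic sequence of the kind described can be sketched by alternating the logistic and Chebyshev maps. The alternation rule and map orders here are assumptions, since the abstract does not specify the exact mixture:

```python
import math

def logistic(x, mu=4.0):
    # Logistic map on [0, 1]; mu = 4 is the fully chaotic regime.
    return mu * x * (1 - x)

def chebyshev(x, k=4):
    # Chebyshev map on [-1, 1]: x_{n+1} = cos(k * arccos(x_n)).
    return math.cos(k * math.acos(x))

def hybrid_sequence(x0=0.123, n=100):
    # Alternate the two maps, rescaling so the state stays in [0, 1];
    # such sequences can seed a GA's initial population.
    xs, x = [], x0
    for i in range(n):
        if i % 2 == 0:
            x = logistic(x)
        else:
            y = chebyshev(2 * x - 1)     # rescale state to [-1, 1]
            x = (y + 1) / 2              # and back to [0, 1]
        xs.append(x)
    return xs

seq = hybrid_sequence()
```

In the model's initialization step, each value of such a sequence would be scaled into the feasible range of one decision variable, giving a well-spread initial population.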
Solar parameters for modeling interplanetary background
Bzowski, M; Tokumaru, M; Fujiki, K; Quemerais, E; Lallement, R; Ferron, S; Bochsler, P; McComas, D J
2011-01-01
The goal of the Fully Online Datacenter of Ultraviolet Emissions (FONDUE) Working Team of the International Space Science Institute in Bern, Switzerland, was to establish a common calibration of various UV and EUV heliospheric observations, both spectroscopic and photometric. Realization of this goal required an up-to-date model of spatial distribution of neutral interstellar hydrogen in the heliosphere, and to that end, a credible model of the radiation pressure and ionization processes was needed. This chapter describes the solar factors shaping the distribution of neutral interstellar H in the heliosphere. Presented are the solar Lyman-alpha flux and the solar Lyman-alpha resonant radiation pressure force acting on neutral H atoms in the heliosphere, solar EUV radiation and the photoionization of heliospheric hydrogen, and their evolution in time and the still hypothetical variation with heliolatitude. Further, solar wind and its evolution with solar activity is presented in the context of the charge excha...
Linear Sigma Models With Strongly Coupled Phases -- One Parameter Models
Hori, Kentaro
2013-01-01
We systematically construct a class of two-dimensional $(2,2)$ supersymmetric gauged linear sigma models with phases in which a continuous subgroup of the gauge group is totally unbroken. We study some of their properties by employing a recently developed technique. The focus of the present work is on models with one K\\"ahler parameter. The models include those corresponding to Calabi-Yau threefolds, extending three examples found earlier by a few more, as well as Calabi-Yau manifolds of other dimensions and non-Calabi-Yau manifolds. The construction leads to predictions of equivalences of D-brane categories, systematically extending earlier examples. There is another type of surprise. Two distinct superconformal field theories corresponding to Calabi-Yau threefolds with different Hodge numbers, $h^{2,1}=23$ versus $h^{2,1}=59$, have exactly the same quantum K\\"ahler moduli space. The strong-weak duality plays a crucial r\\^ole in confirming this, and also is useful in the actual computation of the metric on t...
A termination criterion for parameter estimation in stochastic models in systems biology.
Zimmer, Christoph; Sahle, Sven
2015-11-01
Parameter estimation procedures are a central aspect of modeling approaches in systems biology. They are often computationally expensive, especially when the models take stochasticity into account. Typically parameter estimation involves the iterative optimization of an objective function that describes how well the model fits some measured data with a certain set of parameter values. In order to limit the computational expenses it is therefore important to apply an adequate stopping criterion for the optimization process, so that the optimization continues at least until a reasonable fit is obtained, but not much longer. In the case of stochastic modeling, at least some parameter estimation schemes involve an objective function that is itself a random variable. This means that plain convergence tests are not a priori suitable as stopping criteria. This article suggests a termination criterion suited to optimization problems in parameter estimation arising from stochastic models in systems biology. The termination criterion is developed for optimization algorithms that involve populations of parameter sets, such as particle swarm or evolutionary algorithms. It is based on comparing the variance of the objective function over the whole population of parameter sets with the variance of repeated evaluations of the objective function at the best parameter set. The performance is demonstrated for several different algorithms. To test the termination criterion we choose polynomial test functions as well as systems biology models such as an Immigration-Death model and a bistable genetic toggle switch. The genetic toggle switch is an especially challenging test case as it shows a stochastic switching between two steady states which is qualitatively different from the model behavior in a deterministic model.
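The criterion described above can be sketched as follows; the toy objective, population size, and the factor-of-two threshold are illustrative assumptions rather than the paper's actual settings:

```python
import random
import statistics

def noisy_objective(params, noise=0.5):
    """Toy stochastic objective: a quadratic plus Gaussian noise, standing
    in for a fit of a stochastic model to measured data (hypothetical)."""
    return sum(p * p for p in params) + random.gauss(0.0, noise)

def should_terminate(population, objective, n_repeats=20, factor=2.0):
    """Stop when the variance of the objective over the whole population is
    no larger than `factor` times the variance of repeated evaluations at
    the best parameter set, i.e. when the remaining spread is explained by
    the stochasticity of the objective itself."""
    values = [objective(p) for p in population]
    best = population[values.index(min(values))]
    repeats = [objective(best) for _ in range(n_repeats)]
    return statistics.variance(values) <= factor * statistics.variance(repeats)
```

For a population-based optimizer such as particle swarm or an evolutionary algorithm, this check would be run once per generation on the current population.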
Parameter identification in tidal models with uncertain boundaries
Bagchi, Arunabha; ten Brummelhuis, P.G.J.; ten Brummelhuis, Paul
1994-01-01
In this paper we consider a simultaneous state and parameter estimation procedure for tidal models with random inputs, which is formulated as a minimization problem. It is assumed that some model parameters are unknown and that the random noise inputs only act upon the open boundaries. The
An Alternative Three-Parameter Logistic Item Response Model.
Pashley, Peter J.
Birnbaum's three-parameter logistic function has become a common basis for item response theory modeling, especially within situations where significant guessing behavior is evident. This model is formed through a linear transformation of the two-parameter logistic function in order to facilitate a lower asymptote. This paper discusses an…
Parameter identification in tidal models with uncertain boundaries
Bagchi, Arunabha; ten Brummelhuis, Paul
1994-01-01
In this paper we consider a simultaneous state and parameter estimation procedure for tidal models with random inputs, which is formulated as a minimization problem. It is assumed that some model parameters are unknown and that the random noise inputs only act upon the open boundaries. The hyperboli
A compact cyclic plasticity model with parameter evolution
DEFF Research Database (Denmark)
Krenk, Steen; Tidemann, L.
2017-01-01
, and it is demonstrated that this simple formulation enables very accurate representation of experimental results. An extension of the theory to account for model parameter evolution effects, e.g. in the form of changing yield level, is included in the form of extended evolution equations for the model parameters...
Hänel, G
1994-10-20
Complete sets of optical parameters of dry particles sampled on a Nuclepore filter are derived through interpretation of photometric data with an improved inversion technique. The parameters are the volume-extinction and absorption coefficients, the single-scattering albedo, the asymmetry parameter of the volume scattering function, the apparent complex refractive index, and the apparent soot content. They may serve as input data for solar radiation-budget considerations. Results from preliminary measurements taken in Central Europe and Italy show an extreme variability of the optical parameters. Both large regional and temporal variabilities have been observed caused by the fluctuating midlatitude weather systems and human activities.
Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood
Directory of Open Access Journals (Sweden)
Jesser J. Marulanda-Durango
2012-12-01
Full Text Available In this paper, we present a methodology for estimating the parameters of a model for an electrical arc furnace, using maximum likelihood estimation. Maximum likelihood estimation is one of the most widely employed methods for parameter estimation in practical settings. The model for the electrical arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open source MATLAB® toolbox, to solve the set of non-linear algebraic equations that relate the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken during the furnace's most critical operating point. We show how the model for the electrical arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. The results show a maximum error of 5% for the current's root mean square value.
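As a minimal illustration of the estimation principle (not the arc furnace model itself, whose likelihood equations must be solved numerically), the maximum likelihood estimates for a plain Gaussian error model have a closed form:

```python
def gaussian_mle(samples):
    """Closed-form maximum likelihood estimates of a Gaussian mean and
    variance. Note that the MLE of the variance divides by n, not n - 1,
    so it is biased low for small samples."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((s - mu) ** 2 for s in samples) / n
    return mu, var
```

For models whose likelihood has no closed-form maximum, the same principle applies but the score equations are solved numerically, as done with the toolbox mentioned in the abstract.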
NWP model forecast skill optimization via closure parameter variations
Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.
2012-04-01
We present results of a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values and leads to an improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
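A heavily simplified, one-dimensional caricature of the EPPES idea (a Gaussian proposal distribution, likelihood-type weights, and no actual forecast model) might look like this; the quadratic "verification cost" and all numbers are made up for illustration:

```python
import math
import random

def eppes_step(mean, std, objective, ensemble_size=50):
    """One update of a 1-D proposal distribution: draw parameter values,
    weight them by a likelihood-like score exp(-objective), and move the
    proposal toward the well-performing draws. Illustrative only; the real
    EPPES operates inside an operational ensemble prediction system."""
    draws = [random.gauss(mean, std) for _ in range(ensemble_size)]
    weights = [math.exp(-objective(th)) for th in draws]
    total = sum(weights)
    new_mean = sum(w * th for w, th in zip(weights, draws)) / total
    new_var = sum(w * (th - new_mean) ** 2
                  for w, th in zip(weights, draws)) / total
    return new_mean, max(math.sqrt(new_var), 1e-3)

# hypothetical verification cost with best forecast skill at parameter 2.5
cost = lambda th: (th - 2.5) ** 2
random.seed(0)
mean, std = 0.0, 2.0
for _ in range(30):
    mean, std = eppes_step(mean, std, cost)
```

Over the iterations the proposal mean drifts toward the skill optimum while its spread contracts, mirroring how EPPES concentrates on well-verifying closure parameter values.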
Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters.
Liu, Fei; Heiner, Monika; Yang, Ming
2016-01-01
Stochastic Petri nets (SPNs) have been widely used to model randomness which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs that require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information.
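The fuzzy-parameter idea can be illustrated with a triangular fuzzy number and its alpha-cuts; the closed-form decay mean below stands in for a simulation-based estimate, and all numbers are hypothetical:

```python
import math

def alpha_cut(tri, alpha):
    """Alpha-cut of a triangular fuzzy number (a, b, c): the interval of
    values whose membership degree is at least alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def decay_mean(k, t=1.0, x0=100.0):
    """Mean population of a stochastic pure-death process with rate k at
    time t (known in closed form for this simple model)."""
    return x0 * math.exp(-k * t)

fuzzy_k = (0.5, 1.0, 1.5)          # hypothetical fuzzy death rate
bands = {}
for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(fuzzy_k, alpha)
    # the output is monotonically decreasing in k, so endpoints swap
    bands[alpha] = (decay_mean(hi), decay_mean(lo))
```

The output bands are nested: the alpha = 1 cut collapses to the crisp "most plausible" value, while lower alpha levels give progressively wider uncertainty bands on the model output.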
A bottleneck model of set-specific capture.
Moore, Katherine Sledge; Weissman, Daniel H
2014-01-01
Set-specific contingent attentional capture is a particularly strong form of capture that occurs when multiple attentional sets guide visual search (e.g., "search for green letters" and "search for orange letters"). In this type of capture, a potential target that matches one attentional set (e.g. a green stimulus) impairs the ability to identify a temporally proximal target that matches another attentional set (e.g. an orange stimulus). In the present study, we investigated whether set-specific capture stems from a bottleneck in working memory or from a depletion of limited resources that are distributed across multiple attentional sets. In each trial, participants searched a rapid serial visual presentation (RSVP) stream for up to three target letters (T1-T3) that could appear in any of three target colors (orange, green, or lavender). The most revealing findings came from trials in which T1 and T2 matched different attentional sets and were both identified. In these trials, T3 accuracy was lower when it did not match T1's set than when it did match, but only when participants failed to identify T2. These findings support a bottleneck model of set-specific capture in which a limited-capacity mechanism in working memory enhances only one attentional set at a time, rather than a resource model in which processing capacity is simultaneously distributed across multiple attentional sets.
Bayesian estimation of parameters in a regional hydrological model
Directory of Open Access Journals (Sweden)
K. Engeland
2002-01-01
Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
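The simple (non-autoregressive) likelihood case can be sketched with a random-walk Metropolis sampler. The Gaussian error model with unit variance, the flat prior, and the synthetic data are simplifying assumptions for illustration, not the Ecomag setup:

```python
import math
import random

def log_likelihood(theta, data):
    """I.i.d. Gaussian error model with unit variance (the 'simple'
    likelihood formulation, without the AR(1) error term)."""
    return -0.5 * sum((y - theta) ** 2 for y in data)

def metropolis(data, n_iter=5000, step=0.5, theta0=0.0):
    """Random-walk Metropolis sampling of the posterior of a single model
    parameter under a flat prior."""
    samples, theta = [], theta0
    ll = log_likelihood(theta, data)
    for _ in range(n_iter):
        prop = theta + random.gauss(0.0, step)
        ll_prop = log_likelihood(prop, data)
        # accept with probability min(1, exp(ll_prop - ll));
        # 1 - random() lies in (0, 1], so the log is always defined
        if math.log(1.0 - random.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        samples.append(theta)
    return samples

random.seed(42)
obs = [random.gauss(3.0, 1.0) for _ in range(50)]       # synthetic observations
chain = metropolis(obs)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

In a real regional calibration the scalar `theta` would be replaced by the vector of hydrological and statistical parameters, sampled jointly as described in the abstract.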
A multivariate random-parameters Tobit model for analyzing highway crash rates by injury severity.
Zeng, Qiang; Wen, Huiying; Huang, Helai; Pei, Xin; Wong, S C
2017-02-01
In this study, a multivariate random-parameters Tobit model is proposed for the analysis of crash rates by injury severity. In the model, both correlation across injury severity and unobserved heterogeneity across road-segment observations are accommodated. The proposed model is compared with a multivariate (fixed-parameters) Tobit model in the Bayesian context, by using a crash dataset collected from the Traffic Information System of Hong Kong. The dataset contains crash, road geometric and traffic information on 224 directional road segments for a five-year period (2002-2006). The multivariate random-parameters Tobit model provides a much better fit than its fixed-parameters counterpart, according to the deviance information criteria and Bayesian R², while it reveals a higher correlation between crash rates at different severity levels. The parameter estimates show that a few risk factors (bus stop, lane changing opportunity and lane width) have heterogeneous effects on crash-injury-severity rates. For the other factors, the variances of their random parameters are insignificant at the 95% credibility level, and the random parameters are then set to be fixed across observations. Nevertheless, most of these fixed coefficients are estimated with higher precisions (i.e., smaller variances) in the random-parameters model. Thus, the random-parameters Tobit model, which provides a more comprehensive understanding of the factors' effects on crash rates by injury severity, is superior to the multivariate Tobit model and should be considered a good alternative for traffic safety analysis.
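The building block of any Tobit model is the censored-Gaussian likelihood. A univariate, fixed-parameters version with left-censoring at zero (the multivariate random-parameters model in the abstract generalizes this) can be written as:

```python
import math

def tobit_loglik(beta, sigma, xs, ys):
    """Log-likelihood of a univariate Tobit model y = max(0, beta*x + e),
    e ~ N(0, sigma^2): a Gaussian density term for uncensored observations
    and a Gaussian tail probability for observations censored at zero."""
    ll = 0.0
    for x, y in zip(xs, ys):
        mu = beta * x
        if y > 0.0:
            ll += (-math.log(sigma) - 0.5 * math.log(2.0 * math.pi)
                   - 0.5 * ((y - mu) / sigma) ** 2)
        else:
            # P(beta*x + e <= 0) via the standard normal CDF
            ll += math.log(0.5 * (1.0 + math.erf(-mu / (sigma * math.sqrt(2.0)))))
    return ll
```

In the crash-rate setting, censoring at zero corresponds to segment-years with no observed crashes of a given severity; the multivariate model additionally correlates the error terms across severities and lets selected coefficients vary across segments.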
New determination of abundances and stellar parameters for a set of weak G-band stars
Palacios, A; Masseron, T; Thévenin, F; Itam-Pasquet, J; Parthasarathy, M
2015-01-01
Weak G-band (wGb) stars are very peculiar red giants almost devoid of carbon and often mildly enriched in lithium. Despite their very puzzling abundance patterns, very few detailed spectroscopic studies existed up to a few years ago, preventing any clear understanding of the wGb phenomenon. We recently proposed the first consistent analysis of published data for 28 wGb stars and identified them as descendants of early A-type to late B-type stars, without being able to conclude on their evolutionary status or the origin of their peculiar abundance pattern. We used newly obtained high-resolution and high SNR spectra for 19 wGb stars in the southern and northern hemispheres to homogeneously derive their fundamental parameters, metallicities, as well as the spectroscopic abundances for Li, C, N, O, Na, Sr, and Ba. We also computed dedicated stellar evolution models that we used to determine the masses and to investigate the evolutionary status and chemical history of the stars in our sample. We confirm that the ...
Exploring Factor Model Parameters across Continuous Variables with Local Structural Equation Models.
Hildebrandt, Andrea; Lüdtke, Oliver; Robitzsch, Alexander; Sommer, Christopher; Wilhelm, Oliver
2016-01-01
Using an empirical data set, we investigated variation in factor model parameters across a continuous moderator variable and demonstrated three modeling approaches: multiple-group mean and covariance structure (MGMCS) analyses, local structural equation modeling (LSEM), and moderated factor analysis (MFA). We focused on how to study variation in factor model parameters as a function of continuous variables such as age, socioeconomic status, ability levels, acculturation, and so forth. Specifically, we formalized the LSEM approach in detail as compared with previous work and investigated its statistical properties with an analytical derivation and a simulation study. We also provide code for the easy implementation of LSEM. The illustration of methods was based on cross-sectional cognitive ability data from individuals ranging in age from 4 to 23 years. Variations in factor loadings across age were examined with regard to the age differentiation hypothesis. LSEM and MFA converged with respect to the conclusions. When there was a broad age range within groups and varying relations between the indicator variables and the common factor across age, MGMCS produced distorted parameter estimates. We discuss the pros of LSEM compared with MFA and recommend using the two tools as complementary approaches for investigating moderation in factor model parameters.
Fate modelling of chemical compounds with incomplete data sets
DEFF Research Database (Denmark)
Birkved, Morten; Heijungs, Reinout
2011-01-01
, and to provide simplified proxies for the more complicated “real” model relationships. In the presented study two approaches for the reduction of the data demand associated with characterization of chemical emissions in USEtox™ are tested: The first approach yields a simplified set of mode-of-entry-specific meta-models with a data demand of approximately 63% (5/8) of the USEtox™ characterization model. The second yields a simplified set of mode-of-entry-specific meta-models with a data demand of 75% (6/8) of the original model. The results of the study indicate that it is possible to simplify characterization models and lower...... the data demand of these models applying the presented approach. The results further indicate that the second approach relying on 75% of the original data set provides the meta-model sets which best mimic the original model. An overall trend observed from the 75% data demand meta-model sets...
A new level set model for multimaterial flows
Energy Technology Data Exchange (ETDEWEB)
Starinshak, David P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Karni, Smadar [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Mathematics; Roe, Philip L. [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Aerospace Engineering
2014-01-08
We present a new level set model for representing multimaterial flows in multiple space dimensions. Instead of associating a level set function with a specific fluid material, the function is associated with a pair of materials and the interface that separates them. A voting algorithm collects sign information from all level sets and determines material designations. M(M−1)/2 level set functions might be needed to represent a general M-material configuration; problems of practical interest use far fewer functions, since not all pairs of materials share an interface. The new model is less prone to producing indeterminate material states, i.e. regions claimed by more than one material (overlaps) or no material at all (vacuums). It outperforms existing material-based level set models without the need for reinitialization schemes, thereby avoiding additional computational costs and preventing excessive numerical diffusion.
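The voting step can be sketched for a toy one-dimensional, three-material configuration; the interface positions and level set functions below are made up for illustration:

```python
def material_at(point, pairwise_levelsets, n_materials):
    """Voting algorithm: the level set phi for the material pair (i, j)
    casts a vote for material i where phi > 0 and for material j
    elsewhere; the material with the most votes claims the point."""
    votes = [0] * n_materials
    for (i, j), phi in pairwise_levelsets.items():
        if phi(point) > 0.0:
            votes[i] += 1
        else:
            votes[j] += 1
    return votes.index(max(votes))

# hypothetical 1-D setup: material 0 on x < -1, material 1 on -1 < x < 1,
# material 2 on x > 1; one signed function per material pair
phis = {
    (0, 1): lambda x: -1.0 - x,   # positive on the material-0 side
    (1, 2): lambda x: 1.0 - x,    # positive on the material-1 side
    (0, 2): lambda x: -x,         # pair whose materials never share an interface here
}
```

Because every pair contributes exactly one vote at every point, a clear winner usually emerges even where individual level sets disagree, which is what suppresses the overlap and vacuum states mentioned above.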
Some tests for parameter constancy in cointegrated VAR-models
DEFF Research Database (Denmark)
Hansen, Henrik; Johansen, Søren
1999-01-01
Some methods for the evaluation of parameter constancy in vector autoregressive (VAR) models are discussed. Two different ways of re-estimating the VAR model are proposed; one in which all parameters are estimated recursively based upon the likelihood function for the first observations, and anot...... be applied to test the constancy of the long-run parameters in the cointegrated VAR-model. All results are illustrated using a model for the term structure of interest rates on US Treasury securities. ...
Spatio-temporal modeling of nonlinear distributed parameter systems
Li, Han-Xiong
2011-01-01
The purpose of this volume is to provide a brief review of the previous work on model reduction and identification of distributed parameter systems (DPS), and develop new spatio-temporal models and their relevant identification approaches. In this book, a systematic overview and classification on the modeling of DPS is presented first, which includes model reduction, parameter estimation and system identification. Next, a class of block-oriented nonlinear systems in traditional lumped parameter systems (LPS) is extended to DPS, which results in the spatio-temporal Wiener and Hammerstein s
Weigand, M.; Kemna, A.
2016-06-01
Spectral induced polarization (SIP) data are commonly analysed using phenomenological models. Among these models the Cole-Cole (CC) model is the most popular choice to describe the strength and frequency dependence of distinct polarization peaks in the data. More flexibility regarding the shape of the spectrum is provided by decomposition schemes. Here the spectral response is decomposed into individual responses of a chosen elementary relaxation model, mathematically acting as kernel in the involved integral, based on a broad range of relaxation times. A frequently used kernel function is the Debye model, but also the CC model with some other a priori specified frequency dispersion (e.g. the Warburg model) has been proposed as a kernel in the decomposition. The different decomposition approaches in use, also including conductivity and resistivity formulations, raise the question of the degree to which the integral spectral parameters typically derived from the obtained relaxation time distribution are biased by the approach itself. Based on synthetic SIP data sampled from an ideal CC response, we here investigate how the two most important integral output parameters deviate from the corresponding CC input parameters. We find that the total chargeability may be underestimated by up to 80 per cent and the mean relaxation time may be off by up to three orders of magnitude relative to the original values, depending on the frequency dispersion of the analysed spectrum and the proximity of its peak to the frequency range limits considered in the decomposition. We conclude that a quantitative comparison of SIP parameters across different studies, or the adoption of parameter relationships from other studies, for example when transferring laboratory results to the field, is only possible on the basis of a consistent spectral analysis procedure. This is particularly important when comparing effective CC parameters with spectral parameters derived from decomposition results.
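Given a fitted relaxation-time distribution, the two integral parameters discussed above are commonly computed as the sum of the partial chargeabilities and a chargeability-weighted logarithmic mean of the relaxation times. A minimal sketch, with the formulas as commonly used in Debye decomposition and purely hypothetical values:

```python
import math

def integral_parameters(m, tau):
    """Total chargeability and mean relaxation time from a discrete
    relaxation-time distribution: chargeabilities m[k] at relaxation
    times tau[k]. The mean relaxation time is a chargeability-weighted
    geometric (log) mean."""
    m_total = sum(m)
    log_tau_mean = sum(mk * math.log(tk) for mk, tk in zip(m, tau)) / m_total
    return m_total, math.exp(log_tau_mean)
```

The bias discussed in the abstract enters through the fitted m[k]: mass pushed outside the considered relaxation-time range, or smeared by the choice of kernel, directly distorts both integral outputs.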
Identification of hydrological model parameter variation using ensemble Kalman filter
Deng, Chao; Liu, Pan; Guo, Shenglian; Li, Zejun; Wang, Dingbao
2016-12-01
Hydrological model parameters play an important role in a model's predictive ability. In a stationary context, parameters of hydrological models are treated as constants; however, model parameters may vary with time under climate change and anthropogenic activities. The ensemble Kalman filter (EnKF) technique is proposed to identify the temporal variation of parameters for a two-parameter monthly water balance model (TWBM) by assimilating the runoff observations. Through a synthetic experiment, the proposed method is evaluated with time-invariant (i.e., constant) parameters and different types of parameter variations, including trend, abrupt change and periodicity. Various levels of observation uncertainty are designed to examine the performance of the EnKF. The results show that the EnKF can successfully capture the temporal variations of the model parameters. The application to the Wudinghe basin shows that the water storage capacity (SC) of the TWBM model has an apparent increasing trend during the period from 1958 to 2000. The identified temporal variation of SC is explained by land use and land cover changes due to soil and water conservation measures. In contrast, the application to the Tongtianhe basin shows that the estimated SC has no significant variation during the simulation period of 1982-2013, corresponding to the relatively stationary catchment properties. The evapotranspiration parameter (C) shows temporal variations, though without an obvious change pattern. The proposed method provides an effective tool for quantifying the temporal variations of the model parameters, thereby improving the accuracy and reliability of model simulations and forecasts.
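For a scalar parameter and a scalar observation, one EnKF analysis step reduces to a few lines. The linear toy "runoff model" and the perturbed-observation variant used here are illustrative assumptions, not the TWBM:

```python
import random

def enkf_parameter_update(params, obs, model, obs_std=1.0):
    """One EnKF analysis step for a scalar parameter: each ensemble member
    predicts the observation; the parameter/prediction cross-covariance
    gives the Kalman gain used to nudge each member toward the data
    (perturbed-observation variant)."""
    preds = [model(p) for p in params]
    n = len(params)
    pm = sum(params) / n
    ym = sum(preds) / n
    cov_py = sum((p - pm) * (y - ym) for p, y in zip(params, preds)) / (n - 1)
    var_y = sum((y - ym) ** 2 for y in preds) / (n - 1)
    gain = cov_py / (var_y + obs_std ** 2)
    return [p + gain * (obs + random.gauss(0.0, obs_std) - y)
            for p, y in zip(params, preds)]

random.seed(7)
toy_model = lambda k: 2.0 * k      # hypothetical runoff model, linear in k
params = [random.gauss(0.0, 1.0) for _ in range(100)]
for _ in range(10):
    params = enkf_parameter_update(params, obs=6.0, model=toy_model)
mean_k = sum(params) / len(params)  # drifts toward obs / 2 = 3
```

Applied sequentially to a runoff time series, the same update lets the ensemble track a slowly drifting parameter such as the water storage capacity, which is how the temporal variation of SC is identified in the abstract.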
Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model
Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami
2017-06-01
A regression model represents the relationship between independent variables and a dependent variable. In logistic regression the dependent variable is categorical, and the model is used to calculate the odds of its categories. When the categories of the dependent variable are ordered, the model is an ordinal logistic regression model. The GWOLR model is an ordinal logistic regression model whose parameters are influenced by the geographical location of the observation site. Parameter estimation is needed to infer population values from a sample. The purpose of this research is to estimate the parameters of the GWOLR model using the R software. The parameter estimation uses data on the number of dengue fever patients in Semarang City, with the 144 villages of Semarang City as observation units. The results of the research are a local GWOLR model for each village and the probabilities of the dengue-fever patient-count categories.
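The core of any ordinal (cumulative-logit) model, including GWOLR, is turning a linear predictor eta and a set of increasing cutpoints into category probabilities; in GWOLR the coefficients behind eta additionally vary with the observation's location. A minimal sketch with hypothetical cutpoints:

```python
import math

def ordinal_probs(eta, cuts):
    """Category probabilities of a cumulative-logit ordinal model:
    P(Y <= k) = logistic(cut_k - eta), with cutpoints in increasing
    order; the last category takes the remaining probability mass."""
    cdf = [1.0 / (1.0 + math.exp(-(c - eta))) for c in cuts] + [1.0]
    return [cdf[0]] + [cdf[k] - cdf[k - 1] for k in range(1, len(cdf))]
```

With K categories there are K - 1 cutpoints; in the geographically weighted version, eta for each village is computed from locally estimated coefficients, giving the per-village category probabilities mentioned in the abstract.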
Fuzzy Partition Models for Fitting a Set of Partitions.
Gordon, A. D.; Vichi, M.
2001-01-01
Describes methods for fitting a fuzzy consensus partition to a set of partitions of the same set of objects. Describes and illustrates three models defining median partitions and compares these methods to an alternative approach to obtaining a consensus fuzzy partition. Discusses interesting differences in the results. (SLD)
Universally sloppy parameter sensitivities in systems biology models.
Directory of Open Access Journals (Sweden)
Ryan N Gutenkunst
2007-10-01
Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
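Sloppiness can be demonstrated on the textbook two-exponential model: the eigenvalues of the Fisher information matrix (taken in log-parameters, as is standard in this literature) span orders of magnitude. The decay rates, time points, and finite-difference step below are illustrative choices:

```python
import math

def two_exp_model(theta, t):
    """y(t) = exp(-k1*t) + exp(-k2*t), a classic sloppy model."""
    k1, k2 = theta
    return math.exp(-k1 * t) + math.exp(-k2 * t)

def fim_eigenvalues(theta, times, h=1e-6):
    """Eigenvalues of the 2x2 Fisher information matrix J^T J, where J
    holds derivatives of the model w.r.t. the log-parameters (central
    finite differences in log-space)."""
    J = []
    for t in times:
        row = []
        for j in range(2):
            up = list(theta); up[j] *= math.exp(h)
            dn = list(theta); dn[j] *= math.exp(-h)
            row.append((two_exp_model(up, t) - two_exp_model(dn, t)) / (2 * h))
        J.append(row)
    a = sum(r[0] * r[0] for r in J)
    b = sum(r[0] * r[1] for r in J)
    d = sum(r[1] * r[1] for r in J)
    disc = math.sqrt((a - d) ** 2 + 4 * b * b)
    return (a + d + disc) / 2.0, (a + d - disc) / 2.0

times = [0.1 * i for i in range(1, 31)]
lam_max, lam_min = fim_eigenvalues([1.0, 1.1], times)   # two similar rates
```

The stiff direction (roughly the sum of the log-rates) is well constrained by data, while the sloppy direction (their difference) is orders of magnitude softer, which is exactly why collective fits constrain predictions far better than they constrain individual parameters.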
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Directory of Open Access Journals (Sweden)
Guanqun Zhang
2011-11-01
Full Text Available A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications.
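One of the tube-load parameters, the pulse transit time, can be illustrated as a pure delay-estimation problem. Real tube-load fitting estimates the delay jointly with the load parameters, so this is a deliberately reduced sketch with synthetic waveforms:

```python
import math

def transit_time(proximal, distal, max_lag):
    """Estimate the wave travel time (in samples) as the delay that best
    maps the proximal waveform onto the distal one, in a least-squares
    sense over the overlapping part of the records."""
    def err(lag):
        return sum((distal[i] - proximal[i - lag]) ** 2
                   for i in range(max_lag, len(distal)))
    return min(range(max_lag + 1), key=err)

prox = [math.sin(0.1 * i) for i in range(200)]   # synthetic proximal pressure
dist = [0.0] * 5 + prox[:-5]                     # distal: delayed by 5 samples
```

With a known sampling rate, the recovered lag converts directly to a transit time in seconds; in the full model the distal waveform is also shaped by the reflected wave, which is why the load parameters must be fitted simultaneously.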
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna
2011-01-01
A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157
Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean
Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.
2011-12-01
Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM), varying parameters that affect climate sensitivity, vertical ocean mixing, and the effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting, and a Markov Chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling
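The Bayesian estimation loop described above can be sketched with a toy stand-in for the emulator. Everything below is invented for illustration: the `emulator` function, the observation value and its error, and the prior bounds (a real study would emulate UVic ESCM output with a Gaussian process over several parameters).

```python
import math
import random

random.seed(42)

# Toy "emulator": maps a climate-sensitivity-like parameter S to a
# pseudo-observable (warming). A real study would use a Gaussian-process
# emulator of model output; this linear stand-in is only illustrative.
def emulator(sensitivity):
    return 0.6 * sensitivity

obs, obs_sigma = 1.8, 0.2          # hypothetical observation and error

def log_posterior(s):
    if not 0.0 < s < 10.0:         # uniform prior on (0, 10)
        return -math.inf
    resid = obs - emulator(s)
    return -0.5 * (resid / obs_sigma) ** 2

# Metropolis random-walk sampler for the posterior pdf of S
samples, s = [], 3.0
for _ in range(20000):
    prop = s + random.gauss(0.0, 0.5)
    if math.log(random.random()) < log_posterior(prop) - log_posterior(s):
        s = prop
    samples.append(s)

burned = samples[5000:]            # discard burn-in
mean_s = sum(burned) / len(burned)
print(round(mean_s, 1))            # posterior mean near obs / 0.6 = 3.0
```

A one-parameter random-walk Metropolis chain is the simplest correct instance of the MCMC step; the emulator is what makes each likelihood evaluation cheap enough to run tens of thousands of iterations.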
Parameter estimation and investigation of a bolted joint model
Shiryayev, O. V.; Page, S. M.; Pettit, C. L.; Slater, J. C.
2007-11-01
Mechanical joints are a primary source of variability in the dynamics of built-up structures. The physical phenomena in the joint are quite complex and therefore impractical to model at the micro-scale. This motivates the development of lumped parameter joint models with discrete interfaces so that they can be easily implemented in finite element codes. Among the most important considerations in choosing a model for dynamically excited systems is its ability to model energy dissipation. This translates into the need for accurate and reliable methods to measure model parameters and estimate their inherent variability from experiments. The adjusted Iwan model was identified as a promising candidate for representing joint dynamics. Recent research focused on this model has exclusively employed impulse excitation in conjunction with neural networks to identify the model parameters. This paper presents an investigation of an alternative parameter estimation approach for the adjusted Iwan model, which employs data from oscillatory forcing. This approach is shown to produce parameter estimates with precision similar to the impulse excitation method for a range of model parameters.
Modeling and Parameter Estimation of a Small Wind Generation System
Directory of Open Access Journals (Sweden)
Carlos A. Ramírez Gómez
2013-11-01
The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct-current load. In order to estimate the parameters, wind speed data were registered at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed in PSIM software, and variables were registered from that simulation to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, yet it offers higher flexibility than the model programmed in PSIM software.
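As a hedged illustration of the first stage of such a system, the rotor power relation P = 0.5 * rho * A * Cp * v^3 can be evaluated directly. The density, radius and power coefficient below are assumptions, not the paper's values, and the generator and rectifier stages are omitted.

```python
import math

# Illustrative wind-turbine power model; all parameter values are assumed.
rho = 1.225          # air density, kg/m^3
radius = 1.5         # rotor radius, m (small turbine)
cp = 0.40            # power coefficient (the Betz limit is ~0.593)

def turbine_power(v):
    """Mechanical power (W) extracted at wind speed v (m/s)."""
    area = math.pi * radius ** 2
    return 0.5 * rho * area * cp * v ** 3

for v in (4.0, 8.0, 12.0):
    print(f"{v:4.1f} m/s -> {turbine_power(v):8.1f} W")
```

The cubic dependence on wind speed is why registered wind speed series, rather than a single mean speed, are needed to drive the reference model.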
Parameter estimation of hidden periodic model in random fields
Institute of Scientific and Technical Information of China (English)
何书元
1999-01-01
The two-dimensional hidden periodic model is an important model in random fields, used in two-dimensional signal processing, prediction and spectral analysis. A method of estimating the parameters of the model is designed, and the strong consistency of the estimators is proved.
Fernández-Guasti, M.
The quadratic iteration is mapped within a nondistributive imaginary scator algebra in 1 + 2 dimensions. The Mandelbrot set is identically reproduced at two perpendicular planes where only the scalar and one of the hypercomplex scator director components are present. However, the bound three-dimensional S set projections change dramatically even for very small departures from zero of the second hypercomplex plane. The S set exhibits a rich fractal-like boundary in three dimensions. Periodic points with period m are shown to be necessarily surrounded by points that produce a divergent magnitude after m iterations. The scator set comprises square nilpotent elements that ineluctably belong to the bound set. Points that are square nilpotent on the mth iteration have preperiod 1 and period m. Two-dimensional plots are presented to show some of the main features of the set. A three-dimensional rendering reveals the highly complex structure of its boundary.
Process Setting Models for the Minimization of Costs Defectives ...
African Journals Online (AJOL)
The economy of production controls all manufacturing activities. In the ...
The Mathematical Concept of Set and the 'Collection' Model.
Fischbein, Efraim; Baltsan, Madlen
1999-01-01
Hypothesizes that various misconceptions held by students with regard to the mathematical set concept may be explained by the initial collection model. Study findings confirm the hypothesis. (Author/ASK)
Identification of parameters of discrete-continuous models
Energy Technology Data Exchange (ETDEWEB)
Cekus, Dawid, E-mail: cekus@imipkm.pcz.pl; Warys, Pawel, E-mail: warys@imipkm.pcz.pl [Institute of Mechanics and Machine Design Foundations, Czestochowa University of Technology, Dabrowskiego 73, 42-201 Czestochowa (Poland)
2015-03-10
In the paper, the parameters of a discrete-continuous model have been identified on the basis of experimental investigations and the formulation of an optimization problem. The discrete-continuous model represents a cantilever stepped Timoshenko beam. The mathematical model has been formulated and solved according to the Lagrange multiplier formalism, and the optimization has been based on a genetic algorithm. The presented procedure makes it possible to identify any parameters of discrete-continuous systems.
Nonlinear model predictive control using parameter varying BP-ARX combination model
Yang, J.-F.; Xiao, L.-F.; Qian, J.-X.; Li, H.
2012-03-01
A novel back-propagation AutoRegressive with eXternal input (BP-ARX) combination model is constructed for model predictive control (MPC) of MIMO nonlinear systems, whose steady-state relation between inputs and outputs can be obtained. The BP neural network represents the steady-state relation, and the ARX model represents the linear dynamic relation between the inputs and outputs of the nonlinear systems. The BP-ARX model is a global model and is identified offline, while the parameters of the ARX model are rescaled online according to the BP neural network and operating data. Sequential quadratic programming is employed to solve the quadratic objective function online, and a shift coefficient is defined to constrain the effect time of the recursive least-squares algorithm. Thus, a parameter-varying nonlinear MPC (PVNMPC) algorithm that responds quickly to large changes in system set-points and shows good dynamic performance when system outputs approach set-points is proposed. Simulation results on a multivariable stirred tank and a multivariable pH neutralisation process illustrate the applicability of the proposed method, and comparisons of the control effect between PVNMPC and a multivariable recursive generalised predictive controller are also performed.
Estimating parameters for generalized mass action models with connectivity information
Directory of Open Access Journals (Sweden)
Voit Eberhard O
2009-05-01
Background: Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems. Results: In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters. Conclusion: The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out
Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations
Directory of Open Access Journals (Sweden)
H. Nakajima
2006-01-01
Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore water pressure were measured during the tests at multiple gravity levels. Based on the scaling laws of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. The parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the gravity levels examined in this study (up to 40 g), the optimized unsaturated parameters compared well when accurate pore water pressure measurements were included along with cumulative outflow as input data. With its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions and to shorten testing time, the centrifuge modeling technique is attractive as an alternative experimental method that provides more freedom in setting up inverse problem conditions for parameter estimation.
Fuzzy GML Modeling Based on Vague Soft Sets
Directory of Open Access Journals (Sweden)
Bo Wei
2017-01-01
The Open Geospatial Consortium (OGC) Geography Markup Language (GML) explicitly represents geographical spatial knowledge in text mode. All kinds of fuzzy problems will inevitably be encountered in spatial knowledge expression, and for expressions in text mode this fuzziness is even broader, so describing and representing fuzziness in GML is necessary. Three kinds of fuzziness can be found in GML: element fuzziness, chain fuzziness, and attribute fuzziness. Both element fuzziness and chain fuzziness reflect fuzziness between GML elements, and the representation of chain fuzziness can therefore be replaced by the representation of element fuzziness in GML. On the basis of vague soft set theory, two kinds of modeling, vague soft set GML Document Type Definition (DTD) modeling and vague soft set GML schema modeling, are proposed for fuzzy modeling in the GML DTD and GML schema, respectively. Five elements or pairs, associated with vague soft sets, are introduced. The DTDs and the schemas of the five elements are then correspondingly designed and presented according to their different chains and different fuzzy data types. While the introduction of the five elements or pairs is the basis of vague soft set GML modeling, the corresponding DTD and schema modifications are key to the implementation of the modeling. The establishment of vague soft set GML enables GML to represent fuzziness and solves the problem of the lack of fuzzy information expression in GML.
Knottnerus, J A; Swaen, G M; Slangen, J J; Volovics, A; Durinck, J
1988-01-01
It is still controversial whether moderately high haematocrit (Ht) and haemoglobin (Hb) values are risk factors for coronary heart disease. Using the computerized data system of the Periodical Medical Examination (PME) of Phillips' International Electrical Company, a case-control study was carried out. Cases were male workers (n = 104, from 50 to 60 years of age) who had suffered a first, non-fatal myocardial infarction, and who had had a PME prior (on average 16 months) to the occurrence of the infarction. For each case, two age-matched healthy controls were selected from the PME attendancy list (n = 208). For each subject, information was abstracted from the PME records about haematologic parameters and covariates (smoking, cholesterol, blood pressure, pulse rate, weight, height, FEV5, consumption of antihypertensive agents). After dichotomizing the haematocrit and haemoglobin values at their whole-sample means (0.46 l/l and 9.7 mmol/l respectively) into "low" (lower than or equal to the mean) and "high" (greater than the mean), crude odds ratios of 2.7 (95% CI: 1.6-4.6) and 2.1 (95% CI: 1.2-3.6) were found for Ht and Hb respectively, when comparing "high" with "low" levels. The associations between Ht and Hb and the occurrence of myocardial infarction were still present after controlling for covariates using multiple logistic regression models, entering the continuous variables with their exact values. After adjustment, mean corpuscular volume (MCV) also appeared to be correlated with infarction. Our results confirm the hypothesis that moderately high haematocrit and--to a lesser extent--haemoglobin and MCV values are risk factors for the occurrence of myocardial infarction.
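The crude odds ratio and its 95% CI can be reproduced from a 2x2 table with the log-OR normal approximation (Woolf's method). The cell counts below are hypothetical, chosen only to respect the study's totals of 104 cases and 208 controls and to roughly match the reported OR of 2.7.

```python
import math

# 2x2 table: exposure = "high" Ht; counts are invented for illustration.
cases_high, cases_low = 60, 44        # 104 cases in total
ctrls_high, ctrls_low = 70, 138       # 208 controls in total

# Crude odds ratio = (a*d)/(b*c)
odds_ratio = (cases_high * ctrls_low) / (cases_low * ctrls_high)

# 95% CI via the standard error of ln(OR)
se_log_or = math.sqrt(1/cases_high + 1/cases_low + 1/ctrls_high + 1/ctrls_low)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

Adjustment for covariates, as in the study, would instead come from the fitted coefficients of a multiple logistic regression.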
Biological parameters used in setting captive-breeding quotas for Indonesia's breeding facilities.
Janssen, Jordi; Chng, Serene C L
2017-07-03
The commercial captive breeding of wildlife is often seen as a potential conservation tool to relieve pressure on wild populations, but laundering of wild-sourced specimens as captive bred can seriously undermine conservation efforts and provide a false sense of sustainability. Indonesia is at the center of such controversy; therefore, we examined Indonesia's captive-breeding production plan (CBPP) for 2016. We compared the biological parameters used in the CBPP with parameters in the literature and with parameters suggested by experts on each species and identified shortcomings of the CBPP. Production quotas for 99 out of 129 species were based on inaccurate or unrealistic biological parameters and production quotas deviated more than 10% from what parameters in the literature allow for. For 38 species, the quota exceeded the number of animals that can be bred based on the biological parameters (range 100-540%) calculated with equations in the CBPP. We calculated a lower reproductive output for 88 species based on published biological parameters compared with the parameters used in the CBPP. The equations used in the production plan did not appear to account for other factors (e.g., different survival rate for juveniles compared to adult animals) involved in breeding the proposed large numbers of specimens. We recommend the CBPP be adjusted so that realistic published biological parameters are applied and captive-breeding quotas are not allocated to species if their captive breeding is unlikely to be successful or no breeding stock is available. The shortcomings in the current CBPP create loopholes that mean mammals, reptiles, and amphibians from Indonesia declared captive bred may have been sourced from the wild. © 2017 Society for Conservation Biology.
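The quota-plausibility check amounts to simple arithmetic: compare an allocated quota against the output achievable from published biological parameters. A minimal sketch, with entirely hypothetical parameter values and species:

```python
# All example values below are hypothetical; the CBPP equations also
# involve factors (stock size, adult vs juvenile survival) not shown here.
def max_annual_production(breeding_females, clutch_size,
                          clutches_per_year, juvenile_survival):
    """Upper bound on offspring reaching tradable age per year."""
    return breeding_females * clutch_size * clutches_per_year * juvenile_survival

stock = {"breeding_females": 50, "clutch_size": 4,
         "clutches_per_year": 2, "juvenile_survival": 0.6}
quota = 400

achievable = max_annual_production(**stock)
print(achievable)                             # 50 * 4 * 2 * 0.6 = 240.0
print(f"quota is {quota / achievable:.0%} of achievable output")
```

A quota well above 100% of achievable output is the kind of red flag the study reports for 38 species (quotas at 100-540% of what the parameters allow).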
Meulen, van Fokke B.; Reenalda, Jasper; Veltink, Peter H.
2013-01-01
Continuous daily-life monitoring of balance control and arm function of stroke survivors in an ambulatory setting is essential for optimal guidance of rehabilitation. In a simulated ambulatory setting, the balance and arm function of seven stroke subjects are evaluated using on-body measurement systems
Institute of Scientific and Technical Information of China (English)
[No author listed]
2007-01-01
First, using a damage model for rock based on Lemaitre's strain-equivalence hypothesis, a new technique for measuring the strength of rock micro-cells by adopting the Mohr-Coulomb criterion was developed, and a statistical damage evolution equation was established based on the property that the strength of micro-cells follows a normal distribution function. After discussing the characteristics of the random distribution of micro-cell strength, a statistical damage constitutive model that can simulate the full process of rock strain softening under a specific confining pressure was set up. Second, a new method to determine the model parameters under different confining pressures was proposed, based on a detailed study of the relations between the model parameters and the characteristic parameters of the full stress-strain curve under different confining pressures. A unified statistical damage constitutive model for rock softening that reflects the effect of different confining pressures was thus set up. This model makes the physical meaning of the model parameters explicit, contains only conventional mechanical parameters, and is therefore more convenient to apply. Finally, the rationality of the model and of its parameter-determination method was verified via comparative analyses between theoretical and experimental curves.
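A minimal sketch of such a statistical damage constitutive curve, assuming normally distributed micro-cell failure strain as the abstract describes; the modulus and distribution parameters below are invented, not the paper's, and the confining-pressure dependence is omitted.

```python
import math

# Damage variable D = normal CDF of strain; stress = E * strain * (1 - D).
# E, mu and s are invented illustration values.
E = 20e3                 # elastic modulus, MPa
mu, s = 0.006, 0.0015    # mean and spread of micro-cell failure strain

def damage(strain):
    """Fraction of failed micro-cells at a given strain (normal CDF)."""
    return 0.5 * (1 + math.erf((strain - mu) / (s * math.sqrt(2))))

def stress(strain):
    return E * strain * (1 - damage(strain))

# Scan the stress-strain curve: stress rises, peaks, then softens.
peak = max(stress(i * 1e-4) for i in range(1, 151))
print(round(peak, 1))    # peak stress (MPa) before strain softening sets in
```

The rising-then-falling shape is exactly the strain-softening behaviour the model is built to reproduce: as strain grows, the CDF term drives D toward 1 and the load-bearing fraction vanishes.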
Directory of Open Access Journals (Sweden)
B. Bisselink
2016-12-01
New hydrological insights: Results indicate large discrepancies in terms of the linear correlation (r), bias (β) and variability (γ) between the observed and simulated streamflows when using different precipitation estimates as model input. The best model performance was obtained with products which ingest gauge data for bias correction. However, catchment behavior was difficult to capture using a single parameter set, as was obtaining a single robust parameter set for each catchment, which indicates that transposing model parameters should be carried out with caution. Model parameters depend on the precipitation characteristics of the calibration period and should therefore only be used in target periods with similar precipitation characteristics (wet/dry).
Directory of Open Access Journals (Sweden)
Oksana V. Mandrikova
2015-11-01
The paper is devoted to new mathematical tools for ionospheric parameter analysis and anomaly discovery during ionospheric perturbations. The complex structure of the processes under study, their a-priori uncertainty, and therefore the complex structure of the registered data require a set of techniques and technologies to perform mathematical modelling and data analysis and to make final interpretations. We suggest a technique for ionospheric parameter modelling and analysis based on combining the wavelet transform with autoregressive integrated moving average (ARIMA) models. This technique makes it possible to study ionospheric parameter changes in the time domain, make predictions about variations, and discover anomalies caused by high solar activity and by lithospheric processes prior to and during strong earthquakes. The technique was tested on critical frequency foF2 and total electron content (TEC) datasets from Kamchatka (a region in the Russian Far East) and Magadan (a town in the Russian Far East). The mathematical models introduced in the paper facilitated ionospheric dynamic mode analysis and proved to be efficient for making predictions with a lead time of 5 hours. Ionospheric anomalies arising during increased solar activity and strong earthquakes in Kamchatka were found using the model error estimates.
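The anomaly-detection step can be illustrated with a plain AR(1) fit on synthetic data. The paper combines wavelet decomposition with ARIMA; this sketch keeps only the core idea of flagging points whose model prediction error is anomalously large.

```python
import random

random.seed(7)

# Synthetic AR(1)-like "ionospheric" series with one injected anomaly.
series = [10.0]
for t in range(1, 200):
    series.append(0.8 * series[-1] + 2.0 + random.gauss(0, 0.3))
series[150] += 5.0                      # injected anomaly

# Least-squares AR(1) fit: x[t] ~ a * x[t-1] + c
xs, ys = series[:-1], series[1:]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
a = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
c = ybar - a * xbar

# Flag points whose one-step prediction error exceeds 4 RMS errors.
errors = [y - (a * x + c) for x, y in zip(xs, ys)]
sigma = (sum(e * e for e in errors) / n) ** 0.5
anomalies = [t + 1 for t, e in enumerate(errors) if abs(e) > 4 * sigma]
print(anomalies)
```

The injected spike shows up both when it is predicted (large positive error) and when it is used as a predictor (large negative error at the next step), which is the usual signature of a point anomaly under a one-step-ahead model.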
Towards predictive food process models: A protocol for parameter estimation.
Vilas, Carlos; Arias-Méndez, Ana; Garcia, Miriam R; Alonso, Antonio A; Balsa-Canto, E
2016-05-31
Mathematical models, in particular physics-based models, are essential tools for food product and process design, optimization and control. The success of mathematical models relies on their predictive capabilities. However, describing the physical, chemical and biological changes in food processing requires the values of some, typically unknown, parameters. Therefore, parameter estimation from experimental data is critical to achieving the desired model predictive properties. This work takes a new look at the parameter estimation (or identification) problem in food process modeling. First, we examine common pitfalls such as lack of identifiability and multimodality. Second, we present the theoretical background of a parameter identification protocol intended to deal with those challenges. Finally, we illustrate the performance of the proposed protocol with an example related to the thermal processing of packaged foods.
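A minimal instance of the estimation problem such a protocol addresses is fitting the decimal reduction time D in first-order thermal inactivation, log10 N(t) = log10 N0 - t/D, by least squares. The data below are synthetic, and this toy problem is identifiable and unimodal, unlike the cases the protocol is designed for.

```python
# Synthetic survivor counts during an isothermal hold.
times = [0, 2, 4, 6, 8]                       # min
logN = [6.0, 5.1, 3.9, 3.1, 2.0]              # log10 survivors (synthetic)

# Closed-form ordinary least squares for the slope of logN vs time;
# the D-value is minus the reciprocal of that slope.
n = len(times)
tbar = sum(times) / n
ybar = sum(logN) / n
slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, logN))
         / sum((t - tbar) ** 2 for t in times))
D = -1.0 / slope
print(round(D, 2))   # D-value in minutes
```

With real kinetic models (coupled heat transfer plus inactivation), the same fit becomes a nonlinear optimization, which is where identifiability and multimodality problems appear.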
Estimation of the input parameters in the Feller neuronal model
Ditlevsen, Susanne; Lansky, Petr
2006-06-01
The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.
DEFF Research Database (Denmark)
Ottosen, Thor Bjørn; Ketzel, Matthias; Skov, Henrik
2016-01-01
Mathematical models are increasingly used in environmental science, thus increasing the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model – the Operational Street Pollution Model (OSPM®). To assess the predictive validity of the model, the data is split into an estimation and a prediction data set using two data splitting approaches, and data preparation techniques (clustering and outlier detection) are analysed. The sensitivity analysis, being part ...
An automatic and effective parameter optimization method for model tuning
Directory of Open Access Journals (Sweden)
T. Zhang
2015-05-01
Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one that determines parameter sensitivity and another that chooses the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
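The "three-step" idea can be sketched on a toy skill function. The function, parameter names and candidate values below are all invented, the final step is a simple shrinking-step downhill search standing in for the full downhill simplex, and a real application would evaluate the GCM itself at each point.

```python
def skill_error(p):
    # Toy model-vs-observation error; its minimum sits at entrainment = 0.3.
    return (p["entrainment"] - 0.3) ** 2 + 0.01 * abs(p["ice_fall"] - 1.0)

base = {"entrainment": 0.5, "ice_fall": 1.0}

# Step 1: one-at-a-time sensitivity screening around the base values.
sens = {}
for name in base:
    lo, hi = dict(base), dict(base)
    lo[name] *= 0.5
    hi[name] *= 1.5
    sens[name] = abs(skill_error(hi) - skill_error(lo))
target = max(sens, key=sens.get)          # most sensitive parameter

# Step 2: choose the best initial value from a coarse scan.
candidates = [0.1, 0.3, 0.5, 0.7]
start = min(candidates, key=lambda v: skill_error({**base, target: v}))

# Step 3: refine downhill with shrinking steps (simplex stand-in).
value, step = start, 0.1
for _ in range(50):
    trial = {**base, target: value}
    for v in (value - step, value + step):
        if skill_error({**base, target: v}) < skill_error(trial):
            value, trial = v, {**base, target: v}
    step *= 0.7

print(target, round(value, 3))
```

The screening step is what saves computation: only parameters with a detectable effect on the skill metric enter the expensive optimization.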
Energy Technology Data Exchange (ETDEWEB)
Pedicini, Piernicola, E-mail: ppiern@libero.it [Service of Medical Physics, Istituto di Ricovero e Cura a Carattere Scientifico Centro di Riferimento Oncologico della Basilicata, Rionero in Vulture (Italy); Strigari, Lidia [Laboratory of Medical Physics and Expert Systems, Istituto Nazionale Tumori Regina Elena, Rome (Italy); Benassi, Marcello [Service of Medical Physics, Istituto Scientifico Romagnolo per lo Studio e la Cura dei Tumori, Meldola (Italy)
2013-04-01
Purpose: To determine a self-consistent set of radiobiological parameters in prostate cancer. Methods and Materials: A method to estimate intrinsic radiosensitivity (α), fractionation sensitivity (α/β), repopulation doubling time, number of clonogens, and kick-off time for accelerated repopulation of prostate cancer has been developed. Based on the generalized linear-quadratic model and without assuming the isoeffective hypothesis, the potential applications of the method were investigated using the clinical outcome of biochemical relapse-free survival recently reviewed in the literature. The strengths and limitations of the method, regarding the fitted parameters and 95% confidence intervals (CIs), are also discussed. Results: Our best estimate of α/β is 2.96 Gy (95% CI 2.41-3.53 Gy). The corresponding α value is 0.16 Gy⁻¹ (95% CI 0.14-0.18 Gy⁻¹), which is compatible with a realistic number of clonogens: 6.5 × 10⁶ (95% CI 1.5 × 10⁶-2.1 × 10⁷). The estimated cell doubling time is 5.1 days (95% CI 4.2-7.2 days), very low if compared with that reported in the literature. This corresponds to the dose required to offset the repopulation occurring in 1 day of 0.52 Gy/d (95% CI 0.32-0.68 Gy/d). However, a long kick-off time of 31 days (95% CI 22-41 days) from the start of radiation therapy was found. Conclusion: The proposed analytic/graphic method has allowed the fitting of clinical data, providing a self-consistent set of radiobiological parameters for prostate cancer. With our analysis we confirm a low value for α/β with a correspondingly high value of intrinsic radiosensitivity, a realistic average number of clonogens, a long kick-off time for accelerated repopulation, and a surprisingly fast repopulation that suggests the involvement of subpopulations of specifically tumorigenic stem cells during continuing radiation therapy.
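Using the fitted values reported above (α = 0.16 Gy⁻¹, α/β = 2.96 Gy, doubling time 5.1 d, kick-off time 31 d), the linear-quadratic model with delayed repopulation can be evaluated for a hypothetical schedule. The 39 × 2 Gy over 53 days schedule is an assumption for illustration, not from the paper.

```python
import math

# Parameter estimates quoted in the abstract.
alpha = 0.16                 # 1/Gy
beta = alpha / 2.96          # 1/Gy^2, from alpha/beta = 2.96 Gy
t_double, t_kick = 5.1, 31.0 # repopulation doubling and kick-off times, days

def log_cell_kill(dose_per_fx, n_fx, overall_time):
    """ln(surviving fraction): LQ cell kill minus repopulation regrowth."""
    kill = n_fx * (alpha * dose_per_fx + beta * dose_per_fx ** 2)
    repop = math.log(2) / t_double * max(0.0, overall_time - t_kick)
    return -kill + repop

# Hypothetical conventional schedule: 39 x 2 Gy over 53 days.
print(round(log_cell_kill(2.0, 39, 53.0), 2))   # ln(SF), approx -17.9
```

Note how the kick-off time matters: only the 22 days beyond day 31 contribute the repopulation term (ln 2 / 5.1 ≈ 0.136 per day, about 3 in total here).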
Developing an Integrated Set of Production Planning and Control Models
Wang, Hui
2012-01-01
This paper proposes an integrated set of production planning and control models that can be applied in the Push system (Make-to-stock). The integrated model includes forecasting, aggregate planning, materials requirements planning, inventory control, capacity planning and scheduling. It solves the planning issues via three levels: the strategic level, the tactical level and the operational level. The model obtains the optimal production plan for each product type in each p...
Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.
Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip
2015-11-01
Carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such a release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique to obtain the dispersion parameters in order to achieve a quick estimation of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the necessity to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles in various atmospheric stability classes (ASC), for different 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over a flat horizontal terrain with both low and high roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up. Optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
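The Gaussian plume estimate itself is a one-line formula once the dispersion parameters are chosen. The sketch below uses Briggs-type rural class-A power laws as placeholder dispersion parameters (not the paper's optimised values) together with an invented source; a GA would adjust the sigma coefficients to match CFD reference profiles.

```python
import math

# Invented source and meteorology for illustration.
Q = 50.0      # CO2 release rate, kg/s
u = 5.0       # wind speed, m/s
H = 2.0       # effective release height, m

# Placeholder dispersion parameters (Briggs-type rural, unstable class).
def sigma_y(x): return 0.22 * x * (1 + 0.0001 * x) ** -0.5
def sigma_z(x): return 0.20 * x

def concentration(x):
    """Ground-level centreline concentration (kg/m^3), with ground reflection."""
    sy, sz = sigma_y(x), sigma_z(x)
    return (Q / (math.pi * u * sy * sz)) * math.exp(-H ** 2 / (2 * sz ** 2))

for x in (100.0, 500.0, 1000.0):
    print(f"x = {x:6.0f} m : C = {concentration(x):.3e} kg/m^3")
```

In the paper's setting, the optimisation replaces the fixed sigma functions above with parameterised ones and searches their coefficients so that this cheap formula reproduces the CFD (and Kit Fox) concentration fields.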
Institute of Scientific and Technical Information of China (English)
XIE Danmei; LIU Zhanhui; ZHANG Hengliang; YANG Changzhu; DONG Chuan
2007-01-01
Aiming at a 300 MW turbo-generator model, the sensitivity of the natural torsional frequencies and modes of torsional vibration (TV) to the rotational inertia and stiffness of the turbo-generator was analyzed. Calculation results show that variations of the rotational inertia or stiffness, either of the rotor system as a whole (namely the shafting) or only locally, may both remarkably influence the TV characteristics of the rotor. The influence of a localized variation is still notable, although it is not as great as that of the rotor as a whole. The segments of the shafting which contribute more to a certain mode of vibration have a greater influence on the pertaining order of TV; compared with the modal shape, a larger slope can be observed at these sections of the rotor for the particular mode. Thus, frequencies can be modulated by modifying the local construction of the rotor so that the natural TV frequency of a certain order avoids some specific value, thereby achieving the objective of tuning. It is therefore very important, in the course of modeling for the purpose of studying the TV of the shafting of a turbo-set, to accurately determine the structural parameters of the parts that have a relatively sensitive effect on the TV behavior.
Abu Husain, Nurulakmar; Haddad Khodaparast, Hamed; Ouyang, Huajiang
2012-10-01
Parameterisation in stochastic problems is a major issue in real applications. In addition, complexity of test structures (for example, those assembled through laser spot welds) is another challenge. The objective of this paper is two-fold: (1) stochastic uncertainty in two sets of different structures (i.e., simple flat plates, and more complicated formed structures) is investigated to observe how updating can be adequately performed using the perturbation method, and (2) stochastic uncertainty in a set of welded structures is studied by using two parameter weighting matrix approaches. Different combinations of parameters are explored in the first part; it is found that geometrical features alone cannot converge the predicted outputs to the measured counterparts, hence material properties must be included in the updating process. In the second part, statistical properties of experimental data are considered and updating parameters are treated as random variables. Two weighting approaches are compared; results from one of the approaches are in very good agreement with the experimental data and excellent correlation between the predicted and measured covariances of the outputs is achieved. It is concluded that proper selection of parameters in solving stochastic updating problems is crucial. Furthermore, appropriate weighting must be used in order to obtain excellent convergence between the predicted mean natural frequencies and their measured data.
Institute of Scientific and Technical Information of China (English)
无
2003-01-01
In this paper a stochastic volatility model is considered. That is, a log price process Y which is given in terms of a volatility process V is studied. The latter is defined such that the log price possesses some of the properties empirically observed by Barndorff-Nielsen & Jiang [6]. In the model there are two sets of unknown parameters, one set corresponding to the marginal distribution of V and one to the autocorrelation of V. Based on discrete time observations of the log price the authors discuss how to estimate the parameters appearing in the marginal distribution and find the asymptotic properties.
Social Networks and Choice Set Formation in Discrete Choice Models
Directory of Open Access Journals (Sweden)
Bruno Wichmann
2016-10-01
Full Text Available The discrete choice literature has evolved from the analysis of a choice of a single item from a fixed choice set to the incorporation of a vast array of more complex representations of preferences and choice set formation processes into choice models. Modern discrete choice models include rich specifications of heterogeneity, multi-stage processing for choice set determination, dynamics, and other elements. However, discrete choice models still largely represent socially isolated choice processes: individuals are not affected by the preferences or choices of other individuals. There is a developing literature on the impact of social networks on preferences or the utility function in a random utility model, but little examination of such processes for choice set formation. There is also emerging evidence in the marketplace of the influence of friends on choice sets and choices. In this paper we develop discrete choice models that incorporate formal social network structures into the choice set formation process in a two-stage random utility framework. We assess models where peers may affect not only the alternatives that individuals consider or include in their choice sets, but also consumption choices. We explore the properties of our models and evaluate the extent of "errors" in assessment of preferences, economic welfare measures and market shares if network effects are present, but are not accounted for in the econometric model. Our results shed light on the importance of the evaluation of peer or network effects on inclusion/exclusion of alternatives in a random utility choice framework.
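The two-stage random utility framework described in this abstract can be illustrated with a minimal sketch: stage one forms the choice set (here, augmented by alternatives observed among peers), and stage two applies standard multinomial logit over that set. The utilities, alternatives, and peer structure below are invented for illustration, not taken from the paper.

```python
import math

def logit_probs(utilities):
    """Multinomial logit choice probabilities over a choice set."""
    m = max(utilities.values())  # subtract max for numerical stability
    expu = {j: math.exp(u - m) for j, u in utilities.items()}
    z = sum(expu.values())
    return {j: e / z for j, e in expu.items()}

def two_stage_choice_probs(utilities, considered, peer_choices):
    """Stage 1: the choice set is the individual's own consideration set
    plus alternatives chosen by network peers.
    Stage 2: logit choice over the formed choice set."""
    choice_set = set(considered) | set(peer_choices)
    return logit_probs({j: utilities[j] for j in choice_set})

# Toy example: alternative "c" enters the set only via a peer's choice.
u = {"a": 1.0, "b": 0.5, "c": 2.0}
p = two_stage_choice_probs(u, considered=["a", "b"], peer_choices=["c"])
```

Ignoring the network (dropping `peer_choices`) would exclude "c" entirely, which is exactly the kind of misspecification whose welfare consequences the paper evaluates.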
Do Lumped-Parameter Models Provide the Correct Geometrical Damping?
DEFF Research Database (Denmark)
Andersen, Lars
This paper concerns the formulation of lumped-parameter models for rigid footings on homogeneous or stratified soil. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines and other models applied to fast evaluation of structural response during excitation and of the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling between horizontal sliding and rocking is discussed.
A New Approach for Parameter Optimization in Land Surface Model
Institute of Scientific and Technical Information of China (English)
LI Hongqi; GUO Weidong; SUN Guodong; ZHANG Yaocun; FU Congbin
2011-01-01
In this study, a new parameter optimization method was used to investigate the expansion of conditional nonlinear optimal perturbation (CNOP) in a land surface model (LSM) using long-term enhanced field observations at Tongyu station in Jilin Province, China, combined with a sophisticated LSM (Common Land Model, CoLM). Tongyu station is a reference site of the international Coordinated Energy and Water Cycle Observations Project (CEOP) that has studied semiarid regions that have undergone desertification, salination, and degradation since the late 1960s. In this study, three key land-surface parameters, namely, soil color, proportion of sand or clay in soil, and leaf-area index, were chosen as the parameters to be optimized. Our study comprised three experiments: the first performed a single-parameter optimization, while the second and third performed triple- and six-parameter optimizations, respectively. Notable improvements in simulating sensible heat flux (SH), latent heat flux (LH), soil temperature (TS), and moisture (MS) at shallow layers were achieved using the optimized parameters. The multiple-parameter optimization experiments performed better than the single-parameter experiment. All results demonstrate that the CNOP method can be used to optimize expanded parameters in an LSM. Moreover, clear mathematical meaning, a simple design structure, and rapid computability give this method great potential for further application to parameter optimization in LSMs.
Zhang, Daojun; Cheng, Qiuming; Agterberg, Frits; Chen, Zhijun
2016-03-01
In this paper Excel VBA is used for batch calculation in Local Singularity Analysis (LSA), which is used to extract information from different kinds of geoscience data. The capabilities and advantages of a new module called Batch Tool for Local Singularity Index Mapping (BTLSIM) are: (1) batch production of series of local singularity maps under different settings of local window size, shape, and orientation parameters; (2) local parameter optimization based on statistical tests; and (3) provision of extra output layers describing how spatial changes induced by parameter optimization are related to the spatial structure of the original input layers.
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
. Second, it permits incorporation of prior information on parameter values. Third, it can be applied in the absence of copious data. Finally, it supplies measures of the capacity of the model to reproduce the historical record and the statistical significance of parameter estimates. The method is applied...
Estimating winter wheat phenological parameters: Implications for crop modeling
Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...
Directory of Open Access Journals (Sweden)
M. Nurhaniza
2016-01-01
Full Text Available The quality of machining is measured by the surface finish, which is considered the most important aspect in composite machining. An appropriate and optimum setting of machining parameters is crucial during machining operations in order to enhance surface quality. The objective of this research is to analyze the effect of machining parameters on the surface quality of CFRP-Aluminium in CNC end milling operations with a PCD tool. The milling parameters evaluated are spindle speed, feed rate, and depth of cut. The L9 Taguchi orthogonal array, signal-to-noise (S/N) ratio, and analysis of variance (ANOVA) are employed to analyze the effect of these cutting parameters. The analysis of the results indicates that the optimal cutting parameter combination for good surface finish is high cutting speed, low feed rate, and low depth of cut.
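The smaller-the-better S/N ratio used in such Taguchi analyses has a standard closed form; the sketch below computes it for two hypothetical sets of surface-roughness replicates (the values are invented, not the paper's measurements).

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio:
    S/N = -10 * log10(mean(y^2)). A higher S/N ratio indicates a
    better (lower, more consistent) response such as surface roughness."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical roughness replicates (um) for two parameter settings.
run_a = [1.2, 1.3, 1.1]   # e.g. high speed, low feed, low depth of cut
run_b = [2.4, 2.6, 2.5]   # e.g. low speed, high feed, high depth of cut
sn_a = sn_smaller_is_better(run_a)
sn_b = sn_smaller_is_better(run_b)
```

In a full L9 analysis, such S/N ratios are averaged per factor level to rank the factors, and ANOVA apportions their contributions.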
Retrospective forecast of ETAS model with daily parameters estimate
Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang
2016-04-01
We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on the daily updating of free parameters during the background, learning, and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that actually happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecast events, due to model parameters kept fixed during the test. Moreover, the absence in the learning catalog of an event of magnitude similar to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model in which the parameters remain fixed during the test time.
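The conditional intensity at the heart of an ETAS model has a standard form, lambda(t) = mu + sum_i K * exp(alpha * (m_i - m0)) * (t - t_i + c)^(-p). The sketch below evaluates it with illustrative parameter values, not the fitted values from this study.

```python
import math

def etas_rate(t, events, mu, K, alpha, c, p, m0):
    """ETAS conditional intensity at time t, given past events (t_i, m_i):
    a constant background rate mu plus a modified-Omori aftershock term
    triggered by each past event, scaled by its magnitude above m0."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate

# Illustrative catalog and parameters (days, magnitudes); not fitted values.
events = [(0.0, 6.0), (1.0, 4.5)]
r = etas_rate(2.0, events, mu=0.1, K=0.02, alpha=1.2, c=0.01, p=1.1, m0=3.0)
```

Daily re-estimation, as in the paper, amounts to refitting (mu, K, alpha, c, p) each day and recomputing this intensity for the forecast window.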
Zimmer, Christoph; Sahle, Sven
2016-04-01
Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a Bayesian approach based on stochastic differential equations, and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of this algorithm shows only minor performance differences.
SIM parameter-based security for mobile e-commerce settings
National Research Council Canada - National Science Library
Francisco Orlando Martínez Pabón; Jaime Caicedo Guerrero; Rodrigo Hernández Cuenca; Oscar Mauricio Caicedo Rendón; Javier Alexander Hurtado Guaca
2010-01-01
Security requirements are more demanding in the e-commerce domain. However, mobile e-commerce settings not only insist on security requirements, they also require a balance between security levels and the hardware and usability capabilities of the device...
Spatial variability of the parameters of a semi-distributed hydrological model
de Lavenne, Alban; Thirel, Guillaume; Andréassian, Vazken; Perrin, Charles; Ramos, Maria-Helena
2016-05-01
Ideally, semi-distributed hydrologic models should provide better streamflow simulations than lumped models, along with spatially relevant water resources management solutions. However, the spatial distribution of model parameters raises issues related to the calibration strategy and to the identifiability of the parameters. To analyse these issues, we propose to base the evaluation of a semi-distributed model not only on its performance at streamflow gauging stations, but also on the spatial and temporal patterns of the optimised values of its parameters. We implemented calibration over 21 rolling periods and 64 catchments, and we analysed how well each parameter is identified in time and space. Performance and parameter identifiability are analysed in comparison with the calibration of the lumped version of the same model. We show that the semi-distributed model has more difficulty in identifying stable optimal parameter sets. The main difficulty lies in the identification of the parameters responsible for the closure of the water balance (i.e. for the particular model investigated, the intercatchment groundwater flow parameter).
Parameter Estimates in Differential Equation Models for Population Growth
Winkel, Brian J.
2011-01-01
We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
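The gradient search mentioned in this abstract can be sketched in a few lines. The paper offers Mathematica code; what follows is an independent Python sketch that fits the logistic growth rate r and carrying capacity K to synthetic, noise-free data using a central-difference numeric gradient.

```python
import math

def logistic(t, r, K, p0):
    """Closed-form logistic growth: P(t) = K / (1 + ((K - p0)/p0) * exp(-r*t))."""
    return K / (1.0 + (K - p0) / p0 * math.exp(-r * t))

def sse(params, data, p0):
    """Sum of squared errors between model and (t, y) observations."""
    r, K = params
    return sum((logistic(t, r, K, p0) - y) ** 2 for t, y in data)

def gradient_search(data, p0, r0, K0, lr=2e-4, steps=20000):
    """Plain gradient descent on the SSE, with numeric gradients."""
    r, K, h = r0, K0, 1e-6
    for _ in range(steps):
        gr = (sse((r + h, K), data, p0) - sse((r - h, K), data, p0)) / (2 * h)
        gK = (sse((r, K + h), data, p0) - sse((r, K - h), data, p0)) / (2 * h)
        r, K = r - lr * gr, K - lr * gK
    return r, K

# Synthetic, noise-free data generated from r=0.5, K=10, P(0)=0.5.
data = [(t, logistic(t, 0.5, 10.0, 0.5)) for t in range(11)]
r_hat, K_hat = gradient_search(data, 0.5, r0=0.4, K0=9.0)
```

With real (e.g. 1930s microbial) data the same loop applies, but noise makes the recovered optimum only approximate.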
Dynamic Modeling and Parameter Identification of Power Systems
Institute of Scientific and Technical Information of China (English)
无
2000-01-01
The generator, the excitation system, the steam turbine and speed governor, and the load are the so-called four key models of power systems. Mathematical modeling and parameter identification for these four key models are of great importance as the basis for designing, operating, and analyzing power systems.
Dynamic Load Model using PSO-Based Parameter Estimation
Taoka, Hisao; Matsuki, Junya; Tomoda, Michiya; Hayashi, Yasuhiro; Yamagishi, Yoshio; Kanao, Norikazu
This paper presents a new method for estimating the unknown parameters of a dynamic load model represented as a parallel composite of a constant impedance load and an induction motor behind a series constant reactance. An adequate dynamic load model is essential for evaluating power system stability, and this model can represent the behavior of actual loads when appropriate parameters are used. However, the problem with this model is that many parameters are necessary and it is not easy to estimate so many unknown parameters. We propose an estimation method based on Particle Swarm Optimization (PSO), a non-linear optimization method, using data on voltage, active power, and reactive power measured during voltage sags.
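The PSO mechanics behind such an estimation can be sketched as follows. The toy quadratic objective stands in for the real measurement-mismatch cost built from voltage and power data, and the parameter names, values, and bounds are invented for illustration.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle's velocity blends
    inertia (w), attraction to its personal best (c1) and to the global
    best (c2); positions are clamped to the given bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical "true" load parameters; the cost mimics a fit-to-measurements error.
true_params = [0.6, 1.8]
cost = lambda p: sum((a - b) ** 2 for a, b in zip(p, true_params))
best, best_val = pso_minimize(cost, bounds=[(0.0, 1.0), (0.5, 3.0)])
```

In the actual method, evaluating `f` would require simulating the composite load model and comparing it against the recorded voltage-sag response.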
Parameter Estimation for the Thurstone Case III Model.
Mackay, David B.; Chaiy, Seoil
1982-01-01
The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)
Spatial extrapolation of light use efficiency model parameters to predict gross primary production
Directory of Open Access Journals (Sweden)
Karsten Schulz
2011-12-01
Full Text Available To capture the spatial and temporal variability of the gross primary production as a key component of the global carbon cycle, the light use efficiency modeling approach in combination with remote sensing data has been shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land class dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the model parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained by meteorological measurements, ecological estimations and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation, and leads to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where a parameter calibration is not possible due to the absence of eddy covariance flux measurement data. This will finally allow a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to recapture the variability of gross primary production across the study sites.
A systematic study of Lyman-Alpha transfer through outflowing shells: Model parameter estimation
Gronke, Max; Dijkstra, Mark
2015-01-01
Outflows promote the escape of Lyman-α (Lyα) photons from dusty interstellar media. The process of radiative transfer through interstellar outflows is often modelled by a spherically symmetric, geometrically thin shell of gas that scatters photons emitted by a central Lyα source. Despite its simplified geometry, this 'shell model' has been surprisingly successful at reproducing observed Lyα line shapes. In this paper we perform automated line fitting on a set of noisy simulated shell model spectra, in order to determine whether degeneracies exist between the different shell model parameters. While there are some significant degeneracies, we find that most parameters are accurately recovered, especially the HI column density (N_HI) and outflow velocity (v_exp). This work represents an important first step in determining how the shell model parameters relate to the actual physical properties of Lyα sources. To aid further exploration of the parameter space, we ...
Robust non-rigid point set registration using student's-t mixture model.
Directory of Open Access Journals (Sweden)
Zhiyong Zhou
Full Text Available The Student's-t mixture model, which is heavily tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.
Long, Jinyi; Yu, Zhuliang
2010-01-01
Parameter setting plays an important role in improving the performance of a brain computer interface (BCI). Currently, parameters (e.g. channels and frequency band) are often selected manually. This is time-consuming, and it is not easy to obtain an optimal combination of parameters for a BCI. In this paper, motor imagery-based BCIs are considered, in which channels and frequency band are key parameters. First, a semi-supervised support vector machine algorithm is proposed for automatically selecting a set of channels with a given frequency band. Next, this algorithm is extended for joint channel-frequency selection. In this approach, both training data with labels and test data without labels are used for training a classifier. Hence it can be used when only a small training data set is available. Finally, our algorithms are applied to a BCI competition data set. Our data analysis results show that these algorithms are effective for the selection of frequency band and channels when the training data set is small. PMID:21886673
Uncertainty in the relationship between flow and parameters in models of pollutant transport
Romanowicz, R.; Osuch, M.; Wallis, S.; Napiórkowski, J. J.
2009-04-01
Fluorescent dye-tracer studies are usually performed under steady-state flow conditions. However, the model parameters estimated using the tracer data depend on the discharges. This paper investigates uncertainties in the relationship between discharges and the parameters of transient storage (TS) and aggregated dead zone (ADZ) models. We apply a Bayesian statistical approach to derive the cumulative distribution of a range of model parameters conditioned on discharges. The data consist of eighteen tracer concentration profiles taken at different flow values at two cross-sections of the Murray Burn, a stream flowing through the Heriot-Watt University campus at Riccarton in Edinburgh, Scotland. A number of studies have been reported on the dependence of TS and ADZ model parameters on discharge, but there are very few studies on the uncertainty related to that parameterization, which is the aim of this work. As the TS model is purely deterministic and the ADZ model is stochastic, different approaches are required to estimate the uncertainty in the dependence of their parameters on flow. The Generalised Likelihood Uncertainty Estimation (GLUE) approach is suitable for deterministic models and is therefore applied to the TS model. The method applies Monte Carlo sampling of the parameter space used in multiple simulations of a deterministic transient storage model. The relationship between model parameters and flow has the form of a nonlinear regression model based on multiple random realizations of the deterministic transport model. The parameterization of that relationship and its introduction into the TS model allow for the conditioning of parameter estimates and, as a result, also of model predictions on the whole set of available observations. In the case of the ADZ model, the approach is based on Monte Carlo sampling of ADZ model parameters, taking into account the heteroscedastic variance of the observations and estimates of the covariance of the model parameters.
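The GLUE procedure described above (Monte Carlo sampling, informal likelihood weighting, rejection of non-behavioural parameter sets) can be sketched with a toy first-order decay model standing in for the transport model. The likelihood measure, threshold, and parameter range are illustrative choices, not the paper's.

```python
import math
import random

def glue_weights(simulate, observed, samples, threshold):
    """GLUE sketch: Monte Carlo parameter samples are scored with an
    informal likelihood (here 1/SSE); samples below the behavioural
    threshold are discarded and the rest are weight-normalised."""
    behavioural = []
    for theta in samples:
        sim = simulate(theta)
        err = sum((s - o) ** 2 for s, o in zip(sim, observed))
        like = 1.0 / (err + 1e-12)
        if like >= threshold:
            behavioural.append((theta, like))
    z = sum(w for _, w in behavioural)
    return [(theta, w / z) for theta, w in behavioural]

# Toy first-order decay "concentration" model in place of the TS model.
def simulate(k):
    return [math.exp(-k * t) for t in (1.0, 2.0, 3.0)]

rng = random.Random(1)
observed = simulate(0.5)                        # synthetic "tracer" data
samples = [rng.uniform(0.1, 1.0) for _ in range(2000)]
posterior = glue_weights(simulate, observed, samples, threshold=10.0)
k_mean = sum(k * w for k, w in posterior)       # weighted parameter estimate
```

In the paper's setting, the weighted behavioural sets obtained at each discharge are what feed the nonlinear parameter-flow regression.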
Calibration of back-analysed model parameters for landslides using classification statistics
Cepeda, Jose; Henderson, Laura
2016-04-01
Back-analyses are useful for characterizing the geomorphological and mechanical processes and parameters involved in the initiation and propagation of landslides. These processes and parameters can in turn be used for improving forecasts of scenarios and hazard assessments in areas or sites with settings similar to the back-analysed cases. Selecting the modelled landslide that produces the best agreement with the actual observations requires running a number of simulations, varying the type of model and the sets of input parameters. The comparison of the simulated and observed parameters is normally performed by visual comparison of geomorphological or dynamic variables (e.g., geometry of scarp and final deposit, maximum velocities and depths). Over the past six years, a method developed by NGI has been used by some researchers for a more objective selection of back-analysed input model parameters. That method includes an adaptation of the equations for the calculation of classifiers, and a comparative evaluation of classifiers of the selected parameter sets in the Receiver Operating Characteristic (ROC) space. This contribution presents an update of that methodology. The proposed procedure allows comparisons between two or more "clouds" of classifiers. Each cloud represents the performance of a model over a range of input parameters (e.g., samples of probability distributions). Given that each cloud does not necessarily produce a full ROC curve, two new normalised ROC-space parameters are introduced for characterizing the performance of each cloud. The first parameter represents the cloud's position relative to the point of perfect classification. The second parameter characterizes the position of the cloud relative to the theoretically perfect ROC curve and the no-discrimination line. The methodology is illustrated with back-analyses of slope stability and landslide runout of selected case studies. This research activity has been
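The ROC-space ideas above can be illustrated with a minimal sketch: each back-analysis run becomes a point (FPR, TPR), and one natural summary of its performance is its distance to the perfect-classification corner (FPR=0, TPR=1). The confusion-matrix counts below are invented, and the distance measure is one plausible choice, not necessarily the paper's exact normalised parameter.

```python
import math

def roc_point(tp, fp, fn, tn):
    """Map a confusion matrix (e.g. simulated vs. observed landslide extent,
    cell by cell) to a point in ROC space."""
    tpr = tp / (tp + fn)   # true positive rate (sensitivity)
    fpr = fp / (fp + tn)   # false positive rate (1 - specificity)
    return fpr, tpr

def distance_to_perfect(fpr, tpr):
    """Euclidean distance from an ROC point to the perfect-classification
    corner (0, 1); smaller means a better parameter set."""
    return math.hypot(fpr, 1.0 - tpr)

# Two hypothetical parameter sets compared via their ROC points.
run1 = roc_point(tp=80, fp=10, fn=20, tn=90)
run2 = roc_point(tp=60, fp=30, fn=40, tn=70)
```

A "cloud" in the paper's sense is simply many such points, one per sampled parameter set, summarised collectively rather than individually.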
Comparing spatial and temporal transferability of hydrological model parameters
Patil, Sopan D.; Stieglitz, Marc
2015-06-01
Operational use of hydrological models requires the transfer of calibrated parameters either in time (for streamflow forecasting) or space (for prediction at ungauged catchments) or both. Although the effects of spatial and temporal parameter transfer on catchment streamflow predictions have been well studied individually, a direct comparison of these approaches is much less documented. Here, we compare three different schemes of parameter transfer, viz., temporal, spatial, and spatiotemporal, using a spatially lumped hydrological model called EXP-HYDRO at 294 catchments across the continental United States. Results show that the temporal parameter transfer scheme performs best, with lowest decline in prediction performance (median decline of 4.2%) as measured using the Kling-Gupta efficiency metric. More interestingly, negligible difference in prediction performance is observed between the spatial and spatiotemporal parameter transfer schemes (median decline of 12.4% and 13.9% respectively). We further demonstrate that the superiority of temporal parameter transfer scheme is preserved even when: (1) spatial distance between donor and receiver catchments is reduced, or (2) temporal lag between calibration and validation periods is increased. Nonetheless, increase in the temporal lag between calibration and validation periods reduces the overall performance gap between the three parameter transfer schemes. Results suggest that spatiotemporal transfer of hydrological model parameters has the potential to be a viable option for climate change related hydrological studies, as envisioned in the "trading space for time" framework. However, further research is still needed to explore the relationship between spatial and temporal aspects of catchment hydrological variability.
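The Kling-Gupta efficiency used as the performance metric in this study has a standard definition; a minimal implementation (the streamflow values below are invented):

```python
import math

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    where r is the linear correlation between simulated and observed flows,
    alpha the ratio of their standard deviations, and beta the ratio of
    their means. A perfect simulation scores 1."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    ss = math.sqrt(sum((x - ms) ** 2 for x in sim) / n)
    so = math.sqrt(sum((x - mo) ** 2 for x in obs) / n)
    r = sum((s - ms) * (o - mo) for s, o in zip(sim, obs)) / (n * ss * so)
    return 1.0 - math.sqrt((r - 1) ** 2 + (ss / so - 1) ** 2 + (ms / mo - 1) ** 2)

obs = [1.0, 2.0, 3.0, 4.0]          # toy observed flows
sim_biased = [1.2 * x for x in obs] # 20% multiplicative bias
score_perfect = kge(obs, obs)
score_biased = kge(sim_biased, obs)
```

The percentage declines reported in the abstract are drops in such KGE scores when calibrated parameters are transferred rather than re-calibrated.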
Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.
Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
Finch, Holmes; Edwards, Julianne M.
2016-01-01
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
Devak, Manjula; Dhanya, Ct
2017-04-01
The scrupulous selection of critical spatial and temporal resolution and the evaluation of optimum values for various model parameters are essential aspects in any hydrological modelling study. The accurate assessment of various model parameters is vitally important for the detailed and complete representation of the various physical processes illustrating land-atmosphere interaction. Studies in the past have taken up various auto-calibration and parameter transferability schemes to address these, but the heterogeneity of calibration parameters across grids is often greatly ignored. In many studies, heterogeneity is compromised through the usual interpolation approaches adopted across grids. In the present study, we focus on analyzing the response of a catchment by adopting heterogeneous and homogeneous parameter distributions in the hydrological model. The semi-distributed hydrological model used for this comparison, the Variable Infiltration Capacity (VIC-3L) model, offers sub-grid variability in soil moisture storage capacity and vegetation classes. Nine model parameters are selected for calibrating the VIC-3L model, namely the variable infiltration curve parameter (infilt), the maximum velocity of base flow for each grid cell (DSmax), the fraction of DSmax where non-linear base flow begins (DS), the fraction of maximum soil moisture where non-linear base flow occurs (WS), the depth of the 2nd soil layer (D2), the depth of the 3rd soil layer (D3), the exponent used in the baseflow curve (c), the advection coefficient (C), and the diffusion coefficient (D). Latin hypercube sampling is adopted to sample these nine parameters. In the homogeneous approach, the traditional constant soil parameter distribution (HoSCP) is adopted to prepare the parameter set, while in the heterogeneous approach, grid-to-grid variability is ensured by constructing a Heterogeneous Soil Calibration Parameter (HeSCP) set through systematic sampling of the already sampled set. The sampling size is made equal to the number of grids.
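The Latin hypercube sampling used above can be sketched in pure Python. The two parameter ranges below are illustrative stand-ins for the study's actual calibration bounds.

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """Latin hypercube sample: each parameter's range is cut into n equal
    strata, each stratum is sampled exactly once, and the strata are
    shuffled independently per dimension before being paired up."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        col = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        columns.append(col)
    return list(zip(*columns))

# Two of the nine VIC-3L calibration parameters, with illustrative ranges.
samples = latin_hypercube(10, bounds=[(0.0, 0.4),    # infilt
                                      (0.1, 30.0)])  # DSmax (mm/day)
```

The stratification guarantees that each parameter's marginal range is covered evenly, which is why LHS needs far fewer samples than plain random sampling for the same coverage.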
Parameter estimation and model selection in computational biology.
Directory of Open Access Journals (Sweden)
Gabriele Lillacci
2010-03-01
Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
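As a rough illustration of the estimator described above, the sketch below runs an extended Kalman filter on a state augmented with an unknown kinetic rate constant. The toy decay model dx/dt = -k*x, the noise levels, and the tuning values are assumptions for demonstration, not the authors' heat-shock or gene-regulation models.

```python
import random

def ekf_rate_estimate(ys, dt, q_x=1e-6, q_k=1e-5, r=4e-4,
                      x_init=5.0, k_init=1.0):
    """EKF on the augmented state z = [x, k] for dx/dt = -k*x
    (Euler-discretised); the unknown rate k is estimated jointly with x."""
    x, k = x_init, k_init
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0   # covariance of [x, k]
    for y in ys:
        # predict: Jacobian F = [[1 - k*dt, -x*dt], [0, 1]]
        a, b = 1.0 - k * dt, -x * dt
        x = x - k * x * dt
        p00, p01, p10, p11 = (
            a * a * p00 + a * b * (p01 + p10) + b * b * p11 + q_x,
            a * p01 + b * p11,
            a * p10 + b * p11,
            p11 + q_k,
        )
        # update with scalar measurement y = x + noise, H = [1, 0]
        s = p00 + r
        g0, g1 = p00 / s, p10 / s              # Kalman gains
        innov = y - x
        x, k = x + g0 * innov, k + g1 * innov
        p00, p01, p10, p11 = ((1 - g0) * p00, (1 - g0) * p01,
                              p10 - g1 * p00, p11 - g1 * p01)
    return x, k

# synthetic noisy measurements of an exponential decay with k_true = 0.5
rng = random.Random(42)
dt, k_true, x_true = 0.1, 0.5, 5.0
meas = []
for _ in range(150):
    x_true = x_true - k_true * x_true * dt
    meas.append(x_true + rng.gauss(0.0, 0.02))

x_est, k_est = ekf_rate_estimate(meas, dt)
```

Starting from an initial guess of k = 1.0, the filter is pulled toward the true rate 0.5 by the measurement innovations; in the paper's scheme this first guess would then be checked for identifiability and refined by optimization.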
An Effective Parameter Screening Strategy for High Dimensional Watershed Models
Khare, Y. P.; Martinez, C. J.; Munoz-Carpena, R.
2014-12-01
Watershed simulation models can assess the impacts of natural and anthropogenic disturbances on natural systems. These models have become important tools for tackling a range of water resources problems through their implementation in the formulation and evaluation of Best Management Practices, Total Maximum Daily Loads, and Basin Management Action Plans. For accurate application, watershed models need to be thoroughly evaluated through global uncertainty and sensitivity analyses (UA/SA). However, due to the high dimensionality of these models such evaluation becomes extremely time- and resource-consuming. Parameter screening, the qualitative separation of important parameters, has been suggested as an essential step before applying rigorous evaluation techniques such as the Sobol' and Fourier Amplitude Sensitivity Test (FAST) methods in the UA/SA framework. The method of elementary effects (EE) (Morris, 1991) is one of the most widely used screening methodologies. Some of the common parameter sampling strategies for EE, e.g. Optimized Trajectories [OT] (Campolongo et al., 2007) and Modified Optimized Trajectories [MOT] (Ruano et al., 2012), suffer from inconsistencies in the generated parameter distributions, infeasible sample generation time, etc. In this work, we have formulated a new parameter sampling strategy - Sampling for Uniformity (SU) - for parameter screening, based on the principles of uniformity of the generated parameter distributions and the spread of the parameter sample. A rigorous multi-criteria evaluation (time, distribution, spread and screening efficiency) of OT, MOT, and SU indicated that SU is superior to the other sampling strategies. Comparison of the EE-based parameter importance rankings with those of Sobol' helped to quantify the qualitativeness of the EE parameter screening approach, reinforcing the fact that one should use EE only to reduce the resource burden required by FAST/Sobol' analyses but not to replace them.
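The elementary-effects screening that these sampling strategies feed into can be sketched roughly as follows. The toy model and trajectory count are illustrative assumptions, and this is the basic Morris (1991) one-at-a-time scheme, not the OT, MOT, or SU variants compared in the abstract.

```python
import random
import statistics

def morris_screening(model, n_params, n_traj=20, p=4, seed=1):
    """Method of elementary effects (Morris, 1991) on the unit hypercube.
    Returns (mu_star, sigma): mean absolute effect and effect spread."""
    rng = random.Random(seed)
    delta = p / (2.0 * (p - 1))                  # standard step choice
    grid = [i / (p - 1) for i in range(p // 2)]  # levels that admit +delta
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = [rng.choice(grid) for _ in range(n_params)]
        order = list(range(n_params))
        rng.shuffle(order)
        fx = model(x)
        for i in order:                           # move one factor at a time
            x_new = list(x)
            x_new[i] = x[i] + delta
            f_new = model(x_new)
            effects[i].append((f_new - fx) / delta)
            x, fx = x_new, f_new
    mu_star = [statistics.mean(abs(e) for e in es) for es in effects]
    sigma = [statistics.stdev(es) for es in effects]
    return mu_star, sigma

# Toy model: strong in x0, weak in x1, inert in x2, with one interaction
def toy(x):
    return 10.0 * x[0] + 0.5 * x[1] + 2.0 * x[0] * x[1]

mu_star, sigma = morris_screening(toy, 3)
```

Parameters with small mu_star (here x2) would be fixed at nominal values, reserving the expensive FAST/Sobol' runs for the factors that matter.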
Model checking abstract state machines with answer set programming
2006-01-01
Answer Set Programming (ASP) is a logic programming paradigm that has been shown to be a useful tool in various application areas due to its expressive modelling language. These application areas include Bounded Model Checking (BMC). BMC is a verification technique that is recognized for its strong ability to find errors in computer systems. To apply BMC, a system needs to be modelled in a formal specification language, such as the widely used formalism of Abstract State Machines (ASMs). In ...
The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing
Eyben, Florian; Scherer, Klaus; Schuller, Björn; Sundberg, Johan; André, Elisabeth; Busso, Carlos; Devillers, Laurence; Epps, Julien; Laukka, Petri; Narayanan, Shrikanth; Truong, Khiet
2015-01-01
Work on voice sciences over recent decades has led to a proliferation of acoustic parameters that are used quite selectively and are not always extracted in a similar fashion. With many independent teams working in different research areas, shared standards become an essential safeguard to ensure co
Wójciak, Karolina M; Krajmas, Paweł; Solska, Elżbieta; Dolatowski, Zbigniew J
2015-01-01
The aim of the study was to evaluate the potential of acid whey and set milk as marinades in the traditional production of fermented eye round. The studies involved assaying pH value, water activity (aw), oxidation-reduction potential and TBARS value, colour parameters in the CIE system (L*, a*, b*), and the numbers of lactic acid bacteria and certain pathogenic bacteria after the ripening process and after 60 days of cold storage. Sensory analysis and analysis of the fatty acid profile were performed after completion of the ripening process. Analysis of pH value in the products revealed that application of acid whey to marinate beef resulted in increased acidity of the ripening eye round (5.14). The highest value of the colour parameter a* after the ripening process and during storage was observed in sample AW (12.76 and 10.07, respectively); the lowest was observed in sample SM (10.06 and 7.88, respectively). The content of polyunsaturated fatty acids (PUFA) was higher in eye round marinated in acid whey by approx. 4% in comparison to the other samples. Application of acid whey to marinate beef resulted in an increased share of red colour in the general colour tone as well as increased oxidative stability of the product during storage. It also increased the content of polyunsaturated fatty acids (PUFA) in the product. All model products had a high content of lactic acid bacteria, and no pathogenic bacteria such as L. monocytogenes, Y. enterocolitica, S. aureus or Clostridium sp. were detected.
Energy Technology Data Exchange (ETDEWEB)
Hamimid, M., E-mail: Hamimid_mourad@hotmail.com [Laboratoire de modelisation des systemes energetiques LMSE, Universite de Biskra, BP 145, 07000 Biskra (Algeria); Mimoune, S.M., E-mail: s.m.mimoune@mselab.org [Laboratoire de modelisation des systemes energetiques LMSE, Universite de Biskra, BP 145, 07000 Biskra (Algeria); Feliachi, M., E-mail: mouloud.feliachi@univ-nantes.fr [IREENA-IUT, CRTT, 37 Boulevard de l' Universite, BP 406, 44602 Saint Nazaire Cedex (France)
2012-07-01
In the present work, the minor hysteresis loops model based on parameter scaling of the modified Jiles-Atherton model is evaluated by using judicious expressions. These expressions give the minor hysteresis loop parameters as a function of those of the major hysteresis loop. They have exponential form and are obtained by parameter identification using the stochastic optimization method 'simulated annealing'. The main parameters influencing the data fitting are the pinning parameter k, the mean field parameter {alpha} and the parameter a, which characterizes the shape of the anhysteretic magnetization curve. To validate this model, calculated minor hysteresis loops are compared with measured ones and good agreement is obtained.
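The identification step can be sketched minimally as below, assuming a hypothetical exponential scaling law and synthetic data; the actual Jiles-Atherton expressions and the measured loops are not reproduced here.

```python
import math
import random

def simulated_annealing(loss, x0, step=0.5, t0=1.0, cooling=0.95,
                        n_iter=2000, seed=3):
    """Minimise `loss` over a real vector by simulated annealing."""
    rng = random.Random(seed)
    x, fx = list(x0), loss(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = loss(cand)
        # accept improvements always, worsening moves with Boltzmann prob.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Hypothetical exponential scaling law for a minor-loop parameter:
# k_minor(b) = k_major * exp(c * (b - 1)), with b = Bm_minor / Bm_major
k_major, c_true = 60.0, 1.8
bs = [0.2, 0.4, 0.6, 0.8, 1.0]
data = [k_major * math.exp(c_true * (b - 1.0)) for b in bs]

def loss(params):
    c, = params
    return sum((k_major * math.exp(c * (b - 1.0)) - d) ** 2
               for b, d in zip(bs, data))

(best_c,), err = simulated_annealing(loss, [0.0])
```

On this synthetic data the annealer recovers the exponent used to generate it; with measured minor loops the same loop would fit all three scaling exponents jointly.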
Energy Technology Data Exchange (ETDEWEB)
Caravaca, M A [Facultad de Ingenieria, Universidad Nacional del Nordeste, Avenida Las Heras 727, 3500-Resistencia (Argentina); Casali, R A [Facultad de Ciencias Exactas y Naturales y Agrimensura, Universidad Nacional del Nordeste, Avenida Libertad, 5600-Corrientes (Argentina)
2005-09-21
The SIESTA approach based on pseudopotentials and a localized basis set is used to calculate the electronic, elastic and equilibrium properties of the P2{sub 1}/c, Pbca, Pnma, Fm3m, P4{sub 2}nmc and Pa3 phases of HfO{sub 2}. Using separable Troullier-Martins norm-conserving pseudopotentials which include partial core corrections for Hf, we tested important physical properties as a function of the basis set size, grid size and cut-off ratio of the pseudo-atomic orbitals (PAOs). We found that calculations in this oxide with the LDA approach and a minimal basis set (single zeta, SZ) improve the calculated phase transition pressures with respect to the double-zeta basis set with LDA (DZ-LDA), and show similar accuracy to that determined with the PPPW and GGA approach. Still, the equilibrium volumes and structural properties calculated with SZ-LDA compare better with experiments than those from the GGA approach. The bandgaps and elastic and structural properties calculated with DZ-LDA are accurate, in agreement with previous state-of-the-art ab initio calculations and experimental evidence, and cannot be improved with a polarized basis set. These calculated properties show low sensitivity to the PAO localization parameter in the range between 40 and 100 meV. However, this is not true for the relative energy, which improves upon decrease of the mentioned parameter. We found non-linear behaviour of the lattice parameters with pressure in the P2{sub 1}/c phase, showing a discontinuity of the derivative of the a lattice parameter with respect to external pressure, as found in experiments. The common enthalpy values calculated with the minimal basis set give transition pressures of 3.3 and 10.8 GPa for P2{sub 1}/c {yields} Pbca and Pbca {yields} Pnma, respectively, in accordance with different high pressure experimental values.
Chiba, Takashi; Sugiyama, Naoshi; Suto, Yasushi
1994-01-01
We have performed the most comprehensive predictions of the temperature fluctuations $\Delta T/T$ in the primeval isocurvature baryon models to see whether or not the models are consistent with the recent data on the cosmic microwave background anisotropies. More specifically, we computed the $\Delta T/T$ corresponding to the experimental set-ups of the South Pole and Owens Valley experiments as well as the COBE satellite. The amplitudes of the predicted $\Delta T/T$ are normalized by means of the COBE 10$^\circ$ data. The resulting constraints on the models are presented on the $n - \Omega_b$ plane in the case of $\lambda_0=1-\Omega_b$ (flat models) and $\lambda_0=0$ (open models), where $n$ is the primordial spectral index of entropy fluctuations and $\Omega_b$ is the present baryon density parameter. Our results imply that the PIB models cannot be reconciled with the current observations for any reasonable set of cosmological parameters.
MODELING OF FUEL SPRAY CHARACTERISTICS AND DIESEL COMBUSTION CHAMBER PARAMETERS
Directory of Open Access Journals (Sweden)
G. M. Kukharonak
2011-01-01
Full Text Available The computer model for coordination of fuel spray characteristics with diesel combustion chamber parameters has been created in the paper. The model makes it possible to observe fuel spray development in the diesel cylinder at any moment of injection, to calculate characteristics of fuel sprays with due account of the shape and dimensions of a combustion chamber, and to change fuel injection characteristics, supercharging parameters, and the shape and dimensions of a combustion chamber in a timely manner. Moreover, the computer model permits determination of the parameters of holes in an injector nozzle that provide the required fuel spray characteristics at the stage of designing a diesel engine. Combustion chamber parameters for the 4ЧН11/12.5 diesel engine have been determined in the paper.
Mathematically Modeling Parameters Influencing Surface Roughness in CNC Milling
Directory of Open Access Journals (Sweden)
Engin Nas
2012-01-01
Full Text Available In this study, AISI 1050 steel is subjected to a face milling process on a CNC milling machine, and parameters influencing the surface roughness, such as cutting speed, feed rate, cutting tip and depth of cut, are investigated experimentally. Four different experiments are conducted by creating different combinations of the parameters. In the experiments, cutting tools coated by the PVD method, as used for forging steel and spheroidal graphite cast iron, are employed. Surface roughness values obtained with the specified parameters and cutting tools are measured, and the correlation between the measured surface roughness values and the parameters is modeled mathematically by using a curve fitting algorithm. The mathematical models are evaluated according to their coefficients of determination (R2), and the most suitable one is suggested for theoretical work. The mathematical models proposed for each experiment are presented.
Regionalization parameters of conceptual rainfall-runoff model
Osuch, M.
2003-04-01
The main goal of this study was to develop techniques for the a priori estimation of hydrological model parameters. The conceptual hydrological model CLIRUN was applied to around 50 catchments in Poland, ranging in size from 1 000 to 100 000 km2. The model was calibrated for a number of gauged catchments with different catchment characteristics. The model parameters were related to different climatic and physical catchment characteristics (topography, land use, vegetation and soil type). The relationships were tested by comparing observed and simulated runoff series from gauged catchments that were not used in the calibration. The model performance using regional parameters was promising for most of the calibration and validation catchments.
Weibull Parameters Estimation Based on Physics of Failure Model
DEFF Research Database (Denmark)
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum Likelihood and Least Square estimation techniques are used to estimate fatigue life distribution parameters....
An experimental methodology for a fuzzy set preference model
Turksen, I. B.; Willson, Ian A.
1992-01-01
A flexible fuzzy set preference model first requires approximate methodologies for implementation. Fuzzy sets must be defined for each individual consumer using computer software, requiring a minimum of time and expertise on the part of the consumer. The amount of information needed in defining sets must also be established. The model itself must adapt fully to the subject's choice of attributes (vague or precise), attribute levels, and importance weights. The resulting individual-level model should be fully adapted to each consumer. The methodologies needed to develop this model will be equally useful in a new generation of intelligent systems which interact with ordinary consumers, controlling electronic devices through fuzzy expert systems or making recommendations based on a variety of inputs. The power of personal computers and their acceptance by consumers has yet to be fully utilized to create interactive knowledge systems that fully adapt their function to the user. Understanding individual consumer preferences is critical to the design of new products and the estimation of demand (market share) for existing products, which in turn is an input to management systems concerned with production and distribution. The question of what to make, for whom to make it and how much to make requires an understanding of the customer's preferences and the trade-offs that exist between alternatives. Conjoint analysis is a widely used methodology which decomposes an overall preference for an object into a combination of preferences for its constituent parts (attributes such as taste and price), which are combined using an appropriate combination function. Preferences are often expressed using linguistic terms which cannot be represented in conjoint models. Current models are also not implemented at an individual level, making it difficult to reach meaningful conclusions about the cause of an individual's behavior from an aggregate model. The combination of complex aggregate
Developing a Suitable Model for Water Uptake for Biodegradable Polymers Using Small Training Sets
Directory of Open Access Journals (Sweden)
Loreto M. Valenzuela
2016-01-01
Full Text Available Prediction of the dynamic properties of water uptake across polymer libraries can accelerate polymer selection for a specific application. We first built semiempirical models using Artificial Neural Networks and all water uptake data as individual input. These models give very good correlations (R2>0.78 for the test set) but very low accuracy on cross-validation sets (less than 19% of experimental points within experimental error). Instead, using consolidated parameters like equilibrium water uptake, a good model is obtained (R2=0.78 for the test set), with accurate predictions for 50% of tested polymers. The semiempirical model was applied to the 56-polymer library of L-tyrosine-derived polyarylates, identifying groups of polymers that are likely to satisfy design criteria for water uptake. This research demonstrates that a surrogate modeling effort can reduce the number of polymers that must be synthesized and characterized to identify an appropriate polymer that meets certain performance criteria.
Optimisation-Based Solution Methods for Set Partitioning Models
DEFF Research Database (Denmark)
Rasmussen, Matias Sevel
The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...
Determination of modeling parameters for power IGBTs under pulsed power conditions
Energy Technology Data Exchange (ETDEWEB)
Dale, Gregory E [Los Alamos National Laboratory; Van Gordon, Jim A [U. OF MISSOURI; Kovaleski, Scott D [U. OF MISSOURI
2010-01-01
While the power insulated gate bipolar transistor (IGBT) is used in many applications, it is not well characterized under pulsed power conditions. This makes the IGBT difficult to model for solid state pulsed power applications. The Oziemkiewicz implementation of the Hefner model is utilized to simulate IGBTs in some circuit simulation software packages. However, the seventeen parameters necessary for the Oziemkiewicz implementation must be known for the conditions under which the device will be operating. Using both experimental and simulated data with a least squares curve fitting technique, the parameters necessary to model a given IGBT can be determined. This paper presents two sets of these seventeen parameters that correspond to two different models of power IGBTs. Specifically, these parameters correspond to voltages up to 3.5 kV, currents up to 750 A, and pulse widths up to 10 µs. Additionally, comparisons of the experimental and simulated data will be presented.
Parameter sensitivity analysis of stochastic models provides insights into cardiac calcium sparks.
Lee, Young-Seon; Liu, Ona Z; Hwang, Hyun Seok; Knollmann, Bjorn C; Sobie, Eric A
2013-03-05
We present a parameter sensitivity analysis method that is appropriate for stochastic models, and we demonstrate how this analysis generates experimentally testable predictions about the factors that influence local Ca(2+) release in heart cells. The method involves randomly varying all parameters, running a single simulation with each set of parameters, running simulations with hundreds of model variants, then statistically relating the parameters to the simulation results using regression methods. We tested this method on a stochastic model, containing 18 parameters, of the cardiac Ca(2+) spark. Results show that multivariable linear regression can successfully relate parameters to continuous model outputs such as Ca(2+) spark amplitude and duration, and multivariable logistic regression can provide insight into how parameters affect Ca(2+) spark triggering (a probabilistic process that is all-or-none in a single simulation). Benchmark studies demonstrate that this method is less computationally intensive than standard methods by a factor of 16. Importantly, predictions were tested experimentally by measuring Ca(2+) sparks in mice with knockout of the sarcoplasmic reticulum protein triadin. These mice exhibit multiple changes in Ca(2+) release unit structures, and the regression model both accurately predicts changes in Ca(2+) spark amplitude (30% decrease in model, 29% decrease in experiments) and provides an intuitive and quantitative understanding of how much each alteration contributes to the result. This approach is therefore an effective, efficient, and predictive method for analyzing stochastic mathematical models to gain biological insight.
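The regression step of this analysis can be sketched as below. The three-parameter toy "spark amplitude" model is an assumption for illustration, standing in for the 18-parameter stochastic spark model, and the least-squares solver is written out in plain Python to keep the sketch self-contained.

```python
import random

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            fct = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= fct * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_linear(X, y):
    """Ordinary least squares y ~ X (with intercept) via normal equations."""
    Xi = [[1.0] + row for row in X]
    k = len(Xi[0])
    xtx = [[sum(r[i] * r[j] for r in Xi) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(Xi, y)) for i in range(k)]
    return solve(xtx, xty)

# Toy stochastic "spark amplitude": one noisy simulation per parameter set,
# mimicking a single stochastic run for each randomly drawn parameter vector.
rng = random.Random(11)
X, y = [], []
for _ in range(300):
    p = [rng.uniform(0.5, 1.5) for _ in range(3)]
    amp = 3.0 * p[0] - 1.0 * p[1] + rng.gauss(0.0, 0.3)  # p[2] is inert
    X.append(p)
    y.append(amp)
coef = fit_linear(X, y)   # [intercept, b1, b2, b3]
```

Despite the run-to-run noise, the regression coefficients recover each parameter's influence (strongly positive, moderately negative, negligible), which is exactly how the paper relates parameters to spark amplitude and duration.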
Fate modelling of chemical compounds with incomplete data sets
DEFF Research Database (Denmark)
Birkved, Morten; Heijungs, Reinout
2011-01-01
in an approximate way. The idea is that not all data needed in a multi-media fate and exposure model are completely independent and equally important, but that there are physical-chemical and biological relationships between sets of chemical properties. A statistical model is constructed to underpin this assumption, and to provide simplified proxies for the more complicated "real" model relationships. In the presented study two approaches for the reduction of the data demand associated with characterization of chemical emissions in USEtox™ are tested: The first approach yields a simplified set of mode-of-entry-specific meta-models with a data demand of approximately 63 % (5/8) of the USEtox™ characterization model. The second yields a simplified set of mode-of-entry-specific meta-models with a data demand of 75 % (6/8) of the original model. The results of the study indicate that it is possible to simplify characterization models and lower
MODELING PARAMETERS OF ARC OF ELECTRIC ARC FURNACE
Directory of Open Access Journals (Sweden)
R.N. Khrestin
2015-08-01
Full Text Available Purpose. The aim is to build a mathematical model of the electric arc of an electric arc furnace (EAF). The model should clearly show the relationship between the main parameters of the arc. These parameters determine the properties of the arc and the possibility of optimizing the melting mode. Methodology. We have built a fairly simple model of the arc which satisfies the above requirements. The model is designed for the analysis of electromagnetic processes in arcs of varying length. We have compared the results obtained when testing the model with the results obtained on actual furnaces. Results. During melting in a real EAF, the arc plasma changes its properties under the influence of temperature changes. The proposed model takes these changes into account. Adjusting the length of the arc is the main way to regulate the smelting mode of the EAF. The arc length is controlled by the movement of the electrode drive. The model reflects the dynamic changes in the parameters of the arc when its length changes. We obtained the dynamic current-voltage characteristics (CVC) of the arc for the different stages of melting. We obtained the arc voltage waveform and identified criteria by which the stage of smelting can be determined. Originality. In contrast to previously known models, this model clearly shows the relationship between the main parameters of the EAF arc: arc voltage Ud, arc current id and arc length d. Comparison of the simulation results and experimental data obtained from a real EAF showed the adequacy of the constructed model. It was found that the character of the change of the magnitude Md helps determine the stage of melting. Practical value. The model can be used to simulate smelting in an EAF of any capacity. Thus, when designing the control system for the electrode positioning mechanism, the model takes into account changes in the parameters of the arc, which can significantly reduce electrode material consumption and energy consumption
Environmental Transport Input Parameters for the Biosphere Model
Energy Technology Data Exchange (ETDEWEB)
M. Wasiolek
2004-09-10
This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis
The Numerical Modeling of Transient Regimes of Diesel Generator Sets
Directory of Open Access Journals (Sweden)
Cristian Roman
2010-07-01
Full Text Available This paper deals with the numerical modeling of a diesel generator set used as a main energy source in isolated areas and as a back-up energy source in the case of renewable energy systems. The numerical models are developed using a Matlab/Simulink software package and they prove to be a powerful tool for the computer aided design of complex hybrid power systems. Several operation regimes of the equipment are studied. The numerical study is completed with experimental measurements on a Kipor-type diesel-electric generator set.
Using Set Model for Learning Addition of Integers
Directory of Open Access Journals (Sweden)
Umi Puji Lestari
2015-07-01
Full Text Available This study aims to investigate how the set model can help fourth-grade students' understanding of addition of integers. The study was carried out with 23 students and a teacher of class IVC at SD Iba Palembang in January 2015. This study is design research that also promotes PMRI as the underlying design context and activity. Results showed that the use of set models, packaged in activities of recording financial transactions with two-color chips and a card game, can help students to understand the concept of the zero pair, addition with same-colored chips, and the cancellation strategy.
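The zero-pair idea behind the chip activity can be captured in a few lines. The representation below, an integer as counts of positive and negative chips, is a sketch of the classroom model, not software used in the study.

```python
def combine(chips_a, chips_b):
    """Addition with two-colour chips: pool the chips of each colour,
    then cancel zero pairs (one positive chip + one negative chip = 0)."""
    pos = chips_a[0] + chips_b[0]
    neg = chips_a[1] + chips_b[1]
    pairs = min(pos, neg)        # each pair cancels to zero
    return (pos - pairs, neg - pairs)

# (-5) + 3: five negative chips combined with three positive chips
result = combine((0, 5), (3, 0))   # -> (0, 2), i.e. two negative chips = -2
```

Same-colored chips simply accumulate (no pairs to cancel), which mirrors the two cases the students worked through.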
Inhalation Exposure Input Parameters for the Biosphere Model
Energy Technology Data Exchange (ETDEWEB)
K. Rautenstrauch
2004-09-10
This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.
The CSS and The Two-Staged Methods for Parameter Estimation in SARFIMA Models
Directory of Open Access Journals (Sweden)
Erol Egrioglu
2011-01-01
Full Text Available Seasonal Autoregressive Fractionally Integrated Moving Average (SARFIMA) models are used in the analysis of seasonal long-memory-dependent time series. Two methods, the conditional sum of squares (CSS) method and the two-staged method introduced by Hosking (1984), are used to estimate the parameters of SARFIMA models. However, no simulation study comparing them has been conducted in the literature, so it is not known how these methods behave under different parameter settings and sample sizes in SARFIMA models. The aim of this study is to show the behavior of these methods by a simulation study. According to the results of the simulation, the advantages and disadvantages of both methods under different parameter settings and sample sizes are discussed by comparing the root mean square error (RMSE) obtained by the CSS and two-staged methods. As a result of the comparison, it is seen that the CSS method produces better results than the two-staged method.
Ye, Yan; Song, Xiaomeng; Zhang, Jianyun; Kong, Fanzhe; Ma, Guangwen
2014-06-01
Practical experience has demonstrated that single objective functions, no matter how carefully chosen, prove to be inadequate in providing proper measurements for all of the characteristics of the observed data. One strategy to circumvent this problem is to define multiple fitting criteria that measure different aspects of system behavior, and to use multi-criteria optimization to identify non-dominated optimal solutions. Unfortunately, these analyses require running the original simulation model thousands of times and thus demand prohibitively large computational budgets. As a result, surrogate models have been used in combination with a variety of multi-objective optimization algorithms to approximate the true Pareto front within a limited number of evaluations of the original model. In this study, multi-objective optimization based on surrogate modeling (multivariate adaptive regression splines, MARS) is proposed for a conceptual rainfall-runoff model (the Xin'anjiang model, XAJ). Taking the Yanduhe basin of the Three Gorges region in the upper reaches of the Yangtze River in China as a case study, three evaluation criteria were selected to quantify the goodness-of-fit of observations against values calculated by the simulation model: the Nash-Sutcliffe efficiency coefficient, the relative error of peak flow (REPF), and the relative error of runoff volume (RERV). The efficacy of this method is demonstrated on the calibration of the XAJ model. Compared to the single-objective optimization results, the multi-objective optimization method can infer the most probable parameter set. The results also demonstrate that the use of surrogate modeling makes optimization much more efficient; the total computational cost is reduced by about 92.5% compared to optimization without surrogate modeling. The results obtained with the proposed method support the feasibility of applying parameter optimization to computationally intensive simulation
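The three fitting criteria named in the abstract are simple to compute from paired observed/simulated hydrographs. A minimal sketch (the five-point series is invented for illustration; sums stand in for volume integrals):

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations (1 is a perfect fit)."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def repf(obs, sim):
    """Relative error of peak flow."""
    return (max(sim) - max(obs)) / max(obs)

def rerv(obs, sim):
    """Relative error of runoff volume (discharge sums stand in for volumes)."""
    return (sum(sim) - sum(obs)) / sum(obs)

obs = [1.0, 3.0, 8.0, 5.0, 2.0]   # hypothetical observed discharges
sim = [1.2, 2.7, 7.4, 5.3, 2.1]   # hypothetical simulated discharges
print(round(nse(obs, sim), 3), round(repf(obs, sim), 3), round(rerv(obs, sim), 3))
```

A multi-objective calibration would treat the three numbers as a vector to be jointly optimised rather than collapsing them into one score.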
Parameter Estimation for Traffic Noise Models Using a Harmony Search Algorithm
Directory of Open Access Journals (Sweden)
Deok-Soon An
2013-01-01
Full Text Available A technique has been developed for predicting road traffic noise for environmental assessment, taking into account traffic volume as well as road surface conditions. The ASJ model (ASJ Prediction Model for Road Traffic Noise, 1999), which is based on the sound power level of the noise emitted by the interaction between the road surface and tires, employs regression models for two road surface types: dense-graded asphalt (DGA) and permeable asphalt (PA). However, these models are not applicable to other types of road surfaces. Accordingly, this paper introduces a parameter estimation procedure for ASJ-based noise prediction models, utilizing a harmony search (HS) algorithm. Traffic noise measurement data for four different vehicle types were used in the algorithm to determine the regression parameters for several road surface types. The parameters of the traffic noise prediction models were evaluated using another measurement set, and good agreement was observed between the predicted and measured sound power levels.
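The abstract does not spell out the HS configuration, so the following is a generic textbook harmony search applied to a toy regression fit, not the paper's ASJ-specific setup; the memory size, rates, and bandwidth are illustrative defaults:

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.01, iters=2000, seed=1):
    """Minimal harmony search: improvise new solutions from a memory of good ones."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                  # memory consideration
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    v += rng.uniform(-bw, bw) * (hi - lo)
            else:                                    # random consideration
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        worst = max(range(hms), key=scores.__getitem__)
        s = objective(new)
        if s < scores[worst]:                        # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Recover a and b from noiseless data y = 2 + 5x by minimising the sum of squared errors
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 + 5.0 * x for x in xs]
sse = lambda p: sum((y - (p[0] + p[1] * x)) ** 2 for x, y in zip(xs, ys))
(a, b), err = harmony_search(sse, [(-10, 10), (-10, 10)])
print(err < 5.0)
```

Replacing the toy SSE with the ASJ regression residual would give the estimation procedure the paper describes.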
Construction of constant-Q viscoelastic model with three parameters
Institute of Scientific and Technical Information of China (English)
SUN Cheng-yu; YIN Xing-yao
2007-01-01
The viscoelastic models in popular use have shortcomings in describing the relationship between the quality factor (Q) and frequency, which is inconsistent with observation data. Based on the theory of viscoelasticity, a new approach is developed to construct a constant-Q viscoelastic model over a given frequency band with three parameters. The designed model describes the frequency independence of the quality factor very well, so the effect of viscoelasticity on the seismic wave field can be studied relatively accurately in theory with this model. Furthermore, the number of parameters required by this model is smaller than for other constant-Q models, which simplifies the solution of viscoelastic problems to some extent. Finally, the accuracy and range of application are analyzed through numerical tests, and the effect of viscoelasticity on wave propagation is briefly illustrated through the change of frequency spectra and waveforms in several different viscoelastic models.
Schmidt, Patricia; Hannam, Mark
2014-01-01
Gravitational waves (GWs) emitted by generic black-hole binaries show a rich structure that directly reflects the complex dynamics introduced by the precession of the orbital plane, which poses a real challenge to the development of generic waveform models. Recent progress in modelling these signals relies on an approximate decoupling between the non-precessing secular inspiral and a precession-induced rotation. However, the latter depends in general on all physical parameters of the binary which makes modelling efforts as well as understanding parameter-estimation prospects prohibitively complex. Here we show that the dominant precession effects can be captured by a reduced set of spin parameters. Specifically, we introduce a single \\emph{effective precession spin} parameter, $\\chi_p$, which is defined from the spin components that lie in the orbital plane at some (arbitrary) instant during the inspiral. We test the efficacy of this parameter by considering binary inspiral configurations specified by the phy...
Mirror symmetry for two-parameter models, 1
Candelas, Philip; Ossa, Xenia de la; Font, Anamaria; Katz, Sheldon; Morrison, David R.
1994-01-01
We study, by means of mirror symmetry, the quantum geometry of the K\\"ahler-class parameters of a number of Calabi-Yau manifolds that have $b_{11}=2$. Our main interest lies in the structure of the moduli space and in the loci corresponding to singular models. This structure is considerably richer when there are two parameters than in the various one-parameter models that have been studied hitherto. We describe the intrinsic structure of the point in the (compactification of the) moduli space that corresponds to the large complex structure or classical limit. The instanton expansions are of interest owing to the fact that some of the instantons belong to families with continuous parameters. We compute the Yukawa couplings and their expansions in terms of instantons of genus zero. By making use of recent results of Bershadsky et al. we compute also the instanton numbers for instantons of genus one. For particular values of the parameters the models become birational to certain models with one parameter. The co...
Ravens, Ursula; Katircioglu-Öztürk, Deniz; Wettwer, Erich; Christ, Torsten; Dobrev, Dobromir; Voigt, Niels; Poulet, Claire; Loose, Simone; Simon, Jana; Stein, Agnes; Matschke, Klaus; Knaut, Michael; Oto, Emre; Oto, Ali; Güvenir, H Altay
2015-03-01
Ex vivo recorded action potentials (APs) in human right atrial tissue from patients in sinus rhythm (SR) or atrial fibrillation (AF) display a characteristic spike-and-dome or triangular shape, respectively, but variability is huge within each rhythm group. The aim of our study was to apply the machine-learning algorithm ranking instances by maximizing the area under the ROC curve (RIMARC) to a large data set of 480 APs combined with retrospectively collected general clinical parameters and to test whether the rules learned by the RIMARC algorithm can be used for accurately classifying the preoperative rhythm status. APs were included from 221 SR and 158 AF patients. During a learning phase, the RIMARC algorithm established a ranking order of 62 features by predictive value for SR or AF. The model was then challenged with an additional test set of features from 28 patients in whom rhythm status was blinded. The accuracy of the risk prediction for AF by the model was very good (0.93) when all features were used. Without the seven AP features, accuracy still reached 0.71. In conclusion, we have shown that training the machine-learning algorithm RIMARC with an experimental and clinical data set allows predicting a classification in a test data set with high accuracy. In a clinical setting, this approach may prove useful for finding hypothesis-generating associations between different parameters.
A Decomposition Model for HPLC-DAD Data Set and Its Solution by Particle Swarm Optimization
Cui, Lizhi; Ling, Zhihao; Poon, Josiah; Poon, Simon K.; Gao, Junbin; Kwan, Paul
2014-01-01
This paper proposes a separation method, based on the model of Generalized Reference Curve Measurement and the algorithm of Particle Swarm Optimization (GRCM-PSO), for the High Performance Liquid Chromatography with Diode Array Detection (HPLC-DAD) data set. Firstly, initial parameters are generated to construct reference curves for the chromatogram peaks of the compounds based on its physical principle. Then, a General Reference Curve Measurement (GRCM) model is designed to transform these p...
Do Lumped-Parameter Models Provide the Correct Geometrical Damping?
DEFF Research Database (Denmark)
Andersen, Lars
2007-01-01
This paper concerns the formulation of lumped-parameter models for rigid footings on homogenous or stratified soil with focus on the horizontal sliding and rocking. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines...
Muscle parameters for musculoskeletal modelling of the human neck
Borst, J.; Forbes, P.A.; Happee, R.; Veeger, H.E.J.
2011-01-01
Background: To study normal or pathological neuromuscular control, a musculoskeletal model of the neck has great potential but a complete and consistent anatomical dataset which comprises the muscle geometry parameters to construct such a model is not yet available. Methods: A dissection experiment
Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies
Smith, Carrie E.; Cribbie, Robert A.
2013-01-01
When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…
Geometry parameters for musculoskeletal modelling of the shoulder system
Van der Helm, F C; Veeger, DirkJan (H. E. J.); Pronk, G M; Van der Woude, L H; Rozendal, R H
1992-01-01
A dynamical finite-element model of the shoulder mechanism consisting of thorax, clavicula, scapula and humerus is outlined. The parameters needed for the model are obtained in a cadaver experiment consisting of both shoulders of seven cadavers. In this paper, in particular, the derivation of geomet
Precise correction to parameter ρ in the littlest Higgs model
Institute of Scientific and Technical Information of China (English)
Farshid Tabbak; F. Farnoudi
2008-01-01
In this paper, tree-level violation of the weak isospin parameter ρ in the frame of the littlest Higgs model is studied. The potentially large deviation from the standard model prediction for ρ is calculated in terms of the littlest Higgs model parameters. The maximum value of ρ for f = 1 TeV, c = 0.05, c' = 0.05 and v' = 1.5 GeV is ρ = 1.2973, a large enhancement over the SM value.
Directory of Open Access Journals (Sweden)
Daniel Mora-Melia
2016-04-01
Full Text Available This paper presents a modified Shuffled Frog Leaping Algorithm (SFLA) applied to the design of water distribution networks. Generally, one of the major disadvantages of the traditional SFLA is the high number of parameters that need to be calibrated for proper operation of the algorithm. A method for calibrating these parameters is presented and applied to the design of three benchmark medium-sized networks widely known in the literature (Hanoi, New York Tunnel, and GoYang). For each of the problems, over 35,000 simulations were conducted. Then, a statistical analysis was performed, and the relative importance of each of the parameters was analyzed to achieve the best possible configuration of the modified SFLA. The main conclusion from this study is that not all of the original SFL algorithm parameters are important. Thus, the fraction of frogs in the memeplex q can be eliminated, while the other parameters (number of evolutionary steps Ns, number of memeplexes m, and number of frogs n) may be set to constant values that run optimally for all medium-sized networks. Furthermore, the modified acceleration parameter C becomes the key parameter in the calibration process, vastly improving the results provided by the original SFLA.
Comparative Analysis of Visco-elastic Models with Variable Parameters
Directory of Open Access Journals (Sweden)
Silviu Nastac
2010-01-01
Full Text Available The paper presents a theoretical comparative study of the computational behaviour of vibration isolation elements based on viscous and elastic models with variable parameters. The changes in the elastic and viscous parameters can be produced by natural degradation over time or by heating developed in the elements during their working cycle. Both linear and non-linear numerical viscous and elastic models, and their combinations, were considered. The results show the importance of tuning the numerical model to the real behaviour, in terms of the linearity of the characteristics and the essential parameters for damping and rigidity. Multiple comparisons between linear and non-linear simulation cases provide the basis for numerical model optimization with regard to mathematical complexity vs. result reliability.
Improvement of Continuous Hydrologic Models and HMS SMA Parameters Reduction
Rezaeian Zadeh, Mehdi; Zia Hosseinipour, E.; Abghari, Hirad; Nikian, Ashkan; Shaeri Karimi, Sara; Moradzadeh Azar, Foad
2010-05-01
Hydrological models can help us to predict stream flows and associated runoff volumes of rainfall events within a watershed. There are many different reasons why we need to model the rainfall-runoff processes of a watershed, but the main reason is the limitation of hydrological measurement techniques and the cost of data collection at a fine scale. Generally, we are not able to measure all that we would like to know about a given hydrological system. This is particularly the case for ungauged catchments. Since the ultimate aim of prediction using models is to improve decision-making about a hydrological problem, having a robust and efficient modeling tool becomes an important factor. Among several hydrologic modeling approaches, continuous simulation gives the best predictions because it can model dry and wet conditions during a long-term period. Continuous hydrologic models, unlike event-based models, account for a watershed's soil moisture balance over a long-term period and are suitable for simulating daily, monthly, and seasonal streamflows. In this paper, we describe a soil moisture accounting (SMA) algorithm added to the hydrologic modeling system (HEC-HMS) computer program. As is well known in the hydrologic modeling community, one of the ways of improving a model's utility is the reduction of input parameters. The enhanced model developed in this study is applied to the Khosrow Shirin Watershed, located in the north-west part of Fars Province in Iran, a data-limited watershed. The HMS SMA algorithm divides the potential path of rainfall onto a watershed into five zones. The results showed that the output of HMS SMA is insensitive to the variation of many parameters, such as soil storage and soil percolation rate. The study's objective is to remove insensitive parameters from the model input using multi-objective sensitivity analysis. Keywords: Continuous Hydrologic Modeling, HMS SMA, Multi-objective sensitivity analysis, SMA Parameters
A software for parameter estimation in dynamic models
Directory of Open Access Journals (Sweden)
M. Yuceer
2008-12-01
Full Text Available A common problem in dynamic systems is to determine the parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software packages lack generality, while others are not easy to use. A user-interactive parameter estimation software was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software (PARES) has been developed in the MATLAB environment. When tested on extensive example problems from the literature, the suggested approach provides good agreement between predicted and observed data with relatively low computing time and few iterations.
Parameter Estimation of Photovoltaic Models via Cuckoo Search
Directory of Open Access Journals (Sweden)
Jieming Ma
2013-01-01
Full Text Available Since conventional methods are incapable of estimating the parameters of Photovoltaic (PV) models with high accuracy, bioinspired algorithms have attracted significant attention in the last decade. Cuckoo Search (CS) is built on the brood-parasitic behavior of some cuckoo species in combination with Lévy flight behavior. In this paper, a CS-based parameter estimation method is proposed to extract the parameters of single-diode models for commercial PV generators. Simulation results and experimental data show that the CS algorithm is capable of obtaining all the parameters with extremely high accuracy, reflected by a low Root-Mean-Squared-Error (RMSE) value. The proposed method outperforms the other algorithms applied in this study.
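The Lévy flight component that distinguishes Cuckoo Search from plainer random searches can be sketched on its own. The following generates heavy-tailed steps with Mantegna's algorithm (the stability index beta = 1.5 is a common choice, not one taken from this paper):

```python
import math, random

def levy_step(beta=1.5, rng=random):
    """One Lévy-distributed step via Mantegna's algorithm, as used in Cuckoo Search."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

rng = random.Random(42)
steps = [levy_step(rng=rng) for _ in range(1000)]
# Heavy tail: the largest step dwarfs the typical (median) step size
ratio = max(abs(s) for s in steps) / sorted(abs(s) for s in steps)[500]
print(round(ratio, 1))
```

In the full algorithm each "cuckoo" perturbs a candidate parameter vector by such a step, so the search mostly refines locally but occasionally jumps far, which helps escape local optima of the diode-model RMSE surface.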
Parameter Estimation for Single Diode Models of Photovoltaic Modules
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.
2015-03-01
Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only the short circuit, open circuit and maximum power points of a single I-V curve at standard test conditions, together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
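Evaluating a single diode model is itself nontrivial because the equation is implicit in the current. A minimal sketch with bisection (the parameter values are hypothetical and not taken from the report):

```python
import math

def diode_current(v, il, i0, rs, rsh, n_vth):
    """Solve the implicit single-diode equation for current at voltage v by bisection.

    i = il - i0*(exp((v + i*rs)/n_vth) - 1) - (v + i*rs)/rsh
    """
    def f(i):
        return il - i0 * math.expm1((v + i * rs) / n_vth) - (v + i * rs) / rsh - i
    lo, hi = -il, 2 * il      # the root is bracketed here over the PV operating range
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical module parameters: IL=8 A, I0=1e-9 A, Rs=0.3 ohm, Rsh=300 ohm, n*Ns*Vth=1.9 V
params = dict(il=8.0, i0=1e-9, rs=0.3, rsh=300.0, n_vth=1.9)
isc = diode_current(0.0, **params)   # short-circuit current, close to IL
voc = next(v * 0.01 for v in range(6000) if diode_current(v * 0.01, **params) <= 0.0)
print(round(isc, 2), round(voc, 1))
```

A fitting procedure of the kind the report describes would wrap this forward model in a least-squares loop over many measured I-V curves.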
Automatic Determination of the Conic Coronal Mass Ejection Model Parameters
Pulkkinen, A.; Oates, T.; Taktakishvili, A.
2009-01-01
Characterization of the three-dimensional structure of solar transients using incomplete plane-of-sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of the conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.
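The bootstrap step mentioned above is generic and easy to sketch: resample the fitted values with replacement and read a parameter distribution off the resamples. The half-angle values below are invented for illustration, not taken from the paper:

```python
import random

def bootstrap(data, statistic, n_boot=2000, seed=7):
    """Percentile bootstrap: resample with replacement, return the statistic's sorted distribution."""
    rng = random.Random(seed)
    n = len(data)
    return sorted(statistic([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))

# Hypothetical cone half-angles (degrees) from repeated fits of one halo CME
half_angles = [38.0, 41.5, 40.2, 44.1, 39.8, 42.3, 37.5, 43.0, 40.9, 41.1]
mean = lambda xs: sum(xs) / len(xs)
dist = bootstrap(half_angles, mean)
ci = (dist[int(0.025 * len(dist))], dist[int(0.975 * len(dist))])
print(round(ci[0], 1), round(ci[1], 1))
```

Feeding such distributions (rather than point estimates) into a heliospheric model is what enables the ensemble predictions the abstract refers to.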
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining basis set expansion, meta-modelling and Sobol' indices is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters that trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high values.
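The basis set expansion step can be sketched concretely. Below, a synthetic ensemble of time-series outputs (two planted temporal modes plus noise, all invented for illustration) is reduced to a few principal components via the SVD; the Sobol' indices would then be computed on the per-run component scores instead of the full series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 50 model runs, each a 200-step displacement time series
# built from two temporal modes with run-dependent amplitudes plus small noise
t = np.linspace(0, 1, 200)
modes = np.vstack([np.sin(2 * np.pi * t), t ** 2])
amps = rng.normal(size=(50, 2)) * np.array([3.0, 1.0])
runs = amps @ modes + 0.05 * rng.normal(size=(50, 200))

# Basis set expansion via PCA: SVD of the centred run-by-time matrix
centred = runs - runs.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
scores = u * s    # per-run coordinates; sensitivity indices act on these few columns

print(np.round(explained[:2].sum(), 3))
```

Because two modes dominate, the 200-dimensional output collapses to two scalars per run, which is exactly what makes the subsequent meta-modelling tractable.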
SBMLSimulator: A Java Tool for Model Simulation and Parameter Estimation in Systems Biology
Directory of Open Access Journals (Sweden)
Alexander Dörr
2014-12-01
Full Text Available The identification of suitable model parameters for biochemical reactions has been recognized as a quite difficult endeavor. Parameter values from literature or experiments can often not be combined directly in complex reaction systems. Nature-inspired optimization techniques can find appropriate sets of parameters that calibrate a model to experimentally obtained time series data. We present SBMLsimulator, a tool that combines the Systems Biology Simulation Core Library for dynamic simulation of biochemical models with the heuristic optimization framework EvA2. SBMLsimulator provides an intuitive graphical user interface with various options as well as a fully featured command-line interface for large-scale and script-based model simulation and calibration. In a parameter estimation study based on a published model and artificial data we demonstrate the capability of SBMLsimulator to identify parameters. SBMLsimulator is useful both for interactive simulation and exploration of the parameter space and for large-scale model calibration and estimation of uncertain parameter values.
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-01-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is...
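The abstract does not give the annealing schedule, so the following is a generic simulated annealing sketch on a toy two-parameter surface, not the paper's ETAS likelihood; the step size, temperature schedule, and objective are illustrative:

```python
import math, random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=11):
    """Basic simulated annealing: accept uphill moves with probability exp(-delta/T)."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = x[:], fx
    temp = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        temp *= cooling    # geometric cooling toward a greedy search
    return best, fbest

# Toy negative log-likelihood surface with a minimum near (1, -2)
nll = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2 + 0.1 * math.sin(5 * p[0])
best, fbest = simulated_annealing(nll, [5.0, 5.0])
print([round(v, 1) for v in best])
```

For ETAS the objective would be the (much more expensive) negative log-likelihood of the point process, but the acceptance-and-cooling loop is the same.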
CADLIVE optimizer: web-based parameter estimation for dynamic models
Directory of Open Access Journals (Sweden)
Inoue Kentaro
2012-08-01
Full Text Available Abstract Computer simulation has been an important technique to capture the dynamics of biochemical networks. In most networks, however, few kinetic parameters have been measured in vivo because of experimental complexity. We develop a kinetic parameter estimation system, named the CADLIVE Optimizer, which comprises genetic algorithms-based solvers with a graphical user interface. This optimizer is integrated into the CADLIVE Dynamic Simulator to attain efficient simulation for dynamic models.
A new level set model for cell image segmentation
Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun
2011-02-01
In this paper we first identify three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model based on these characteristics to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. In addition, information obtained by preprocessing the cell images with the OTSU algorithm is used to initialize the level set function in the model, which speeds up the segmentation and gives satisfactory results in cell image processing.
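The Otsu preprocessing step used for initialization is simple to implement from the image histogram. A self-contained sketch on a synthetic bimodal "image" (the grey-level mixture below is invented, not cell data):

```python
import random

def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the grey level that maximises between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]              # weight of the class at or below t
        if w0 == 0:
            continue
        w1 = total - w0            # weight of the class above t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal image: dark background around grey level 40, bright nuclei around 200
rng = random.Random(5)
pixels = ([min(255, max(0, int(rng.gauss(40, 10)))) for _ in range(900)]
          + [min(255, max(0, int(rng.gauss(200, 15)))) for _ in range(100)])
print(otsu_threshold(pixels))
```

The resulting threshold gives a rough foreground mask from which a level set function can be initialized, which is the role it plays in the paper's pipeline.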
Reference physiological parameters for pharmacodynamic modeling of liver cancer
Energy Technology Data Exchange (ETDEWEB)
Travis, C.C.; Arms, A.D.
1988-01-01
This document presents a compilation of measured values for physiological parameters used in pharmacodynamic modeling of liver cancer. The physiological parameters include body weight, liver weight, the liver weight/body weight ratio, and number of hepatocytes. Reference values for use in risk assessment are given for each of the physiological parameters based on analyses of valid measurements taken from the literature and other reliable sources. The proposed reference values for rodents include sex-specific measurements for B6C3F1 mice and Fischer 344/N, Sprague-Dawley, and Wistar rats. Reference values are also provided for humans. 102 refs., 65 tabs.
Uncertainty of Modal Parameters Estimated by ARMA Models
DEFF Research Database (Denmark)
Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders
1990-01-01
In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the parameters...... by a simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore...
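The core of AR/ARMA-based modal identification can be sketched in a few lines: fit an AR(2) model to a lightly damped single-degree-of-freedom response and convert its complex pole to an eigenfrequency and a damping ratio. The simulated system (2 Hz, 2% damping, unit-variance noise) is an invented example in the spirit of the study:

```python
import cmath, math, random

def sdof_ar2_identify(x, dt):
    """Least-squares AR(2) fit; convert the pole to natural frequency (Hz) and damping ratio."""
    # Normal equations for x[t] = a1*x[t-1] + a2*x[t-2]
    s11 = sum(x[t-1] * x[t-1] for t in range(2, len(x)))
    s12 = sum(x[t-1] * x[t-2] for t in range(2, len(x)))
    s22 = sum(x[t-2] * x[t-2] for t in range(2, len(x)))
    b1 = sum(x[t] * x[t-1] for t in range(2, len(x)))
    b2 = sum(x[t] * x[t-2] for t in range(2, len(x)))
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (b2 * s11 - b1 * s12) / det
    pole = (a1 + cmath.sqrt(a1 * a1 + 4 * a2)) / 2   # root of z^2 - a1*z - a2
    ln_pole = cmath.log(pole)                        # = (-zeta*wn + i*wd) * dt
    wn = abs(ln_pole) / dt
    zeta = -ln_pole.real / abs(ln_pole)
    return wn / (2 * math.pi), zeta

# Simulate a lightly damped SDOF (f = 2 Hz, zeta = 0.02) as an exact AR(2) driven by noise
f, zeta, dt = 2.0, 0.02, 0.01
wn = 2 * math.pi * f
wd = wn * math.sqrt(1 - zeta ** 2)
a1 = 2 * math.exp(-zeta * wn * dt) * math.cos(wd * dt)
a2 = -math.exp(-2 * zeta * wn * dt)
rng = random.Random(9)
x = [0.0, 0.0]
for _ in range(20000):
    x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0, 1))
print(sdof_ar2_identify(x, dt))
```

Repeating this over many noise realisations and record lengths is how statistical errors of the identified eigenfrequency and damping ratio, the subject of the paper, can be quantified.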
X-Parameter Based Modelling of Polar Modulated Power Amplifiers
DEFF Research Database (Denmark)
Wang, Yelin; Nielsen, Troels Studsgaard; Sira, Daniel
2013-01-01
X-parameters are developed as an extension of S-parameters capable of modelling non-linear devices driven by large signals. They are suitable for devices having only radio frequency (RF) and DC ports. In a polar power amplifier (PA), phase and envelope of the input modulated signal are applied...... at separate ports and the envelope port is neither an RF nor a DC port. As a result, X-parameters may fail to characterise the effect of the envelope port excitation and consequently the polar PA. This study introduces a solution to the problem for a commercial polar PA. In this solution, the RF-phase path...
A Bayesian framework for parameter estimation in dynamical models.
Directory of Open Access Journals (Sweden)
Flávio Codeço Coelho
Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
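The SIR-like transmission model named above can be sketched with a plain forward-Euler integration; the parameter values here (R0 = 1.5, four-day infectious period) are generic influenza-like choices, not the fitted values from the study:

```python
def sir_simulate(beta, gamma, s0, i0, days, dt=0.1):
    """Forward-Euler integration of the SIR model (fractions of the population)."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    series = []
    steps_per_day = int(round(1.0 / dt))
    for _ in range(days):
        series.append(i)
        for _ in range(steps_per_day):
            ds = -beta * s * i
            di = beta * s * i - gamma * i
            dr = gamma * i
            s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return series

# Hypothetical influenza-like parameters: R0 = beta/gamma = 1.5, 1/gamma = 4 days
inc = sir_simulate(beta=0.375, gamma=0.25, s0=0.999, i0=0.001, days=120)
peak_day = max(range(len(inc)), key=inc.__getitem__)
print(peak_day, round(max(inc), 3))
```

In the Bayesian framework of the paper, beta and gamma would be treated as uncertain and updated against incidence data instead of fixed as above.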
Modelling of Water Turbidity Parameters in a Water Treatment Plant
Directory of Open Access Journals (Sweden)
A. S. KOVO
2005-01-01
Full Text Available The high cost of chemical analysis of water has necessitated various researches into finding alternative methods of determining potable water quality. This paper is aimed at modelling the turbidity value as a water quality parameter. Mathematical models for turbidity removal were developed based on the relationships between water turbidity and other water criteria. Results showed that the turbidity of water is the cumulative effect of the individual parameters/factors affecting the system. A model equation for the evaluation and prediction of a clarifier's performance was developed: T = T0(-1.36729 + 0.037101∙10λpH + 0.048928∙t + 0.00741387∙alk). The developed model will aid the predictive assessment of water treatment plant performance. The limitations of the models are a result of the limited number of variables considered during the conceptualization.
Simultaneous estimation of parameters in the bivariate Emax model.
Magnusdottir, Bergrun T; Nyquist, Hans
2015-12-10
In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.
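As a hedged illustration of the Emax family discussed above (the paper's model is bivariate and fitted by system estimation; this sketch instead fits a single-response hyperbolic Emax curve by ordinary nonlinear least squares on synthetic data, with invented parameter values):

```python
import numpy as np
from scipy.optimize import curve_fit

def emax(dose, e0, emax_, ed50):
    """Hyperbolic Emax dose-response: E(d) = E0 + Emax * d / (ED50 + d)."""
    return e0 + emax_ * dose / (ed50 + dose)

rng = np.random.default_rng(1)
dose = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
true = (1.0, 8.0, 20.0)                     # hypothetical E0, Emax, ED50
resp = emax(dose, *true) + rng.normal(0, 0.1, dose.size)

# Equation-by-equation analogue: each response fitted on its own.
popt, pcov = curve_fit(emax, dose, resp, p0=(0.5, 5.0, 10.0))
```

System estimation, as studied in the paper, would instead fit both responses jointly and exploit the covariance between their errors.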
Kanai, Takayuki; Kadoya, Noriyuki; Ito, Kengo; Onozato, Yusuke; Cho, Sang Yong; Kishi, Kazuma; Dobashi, Suguru; Umezawa, Rei; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi
2014-11-01
Deformable image registration (DIR) is a fundamental technique for adaptive radiotherapy and image-guided radiotherapy. However, further improvement of DIR is still needed. We evaluated the accuracy of B-spline transformation-based DIR implemented in elastix. This registration package is largely based on the Insight Segmentation and Registration Toolkit (ITK), and several new functions were implemented to achieve high DIR accuracy. The purpose of this study was to clarify whether the new functions implemented in elastix are useful for improving DIR accuracy. Thoracic 4D computed tomography images of ten patients with esophageal or lung cancer were studied. Datasets for these patients were provided by DIR-lab (dir-lab.com) and included a coordinate list of anatomical landmarks that had been manually identified. DIR between peak-inhale and peak-exhale images was performed with four types of parameter settings. The first represents the original ITK (Parameter 1). The second employs the new functions of elastix (Parameter 2), and the third was created to verify whether the new functions improve DIR accuracy while keeping the computational time (Parameter 3). The last one partially employs a new function (Parameter 4). Registration errors for these parameter settings were calculated using the manually determined landmark pairs. 3D registration errors with standard deviation over all cases were 1.78 (1.57), 1.28 (1.10), 1.44 (1.09) and 1.36 (1.35) mm for Parameters 1, 2, 3 and 4, respectively, indicating that the new functions are useful for improving DIR accuracy, even while maintaining the computational time, and this B-spline-based DIR could be used clinically to achieve high-accuracy adaptive radiotherapy. © The Author 2014. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
Directory of Open Access Journals (Sweden)
Márton Edelényi
2011-11-01
Analysis and stability-improvement methods were studied for examining relationships between tree growth and meteorological factors according to our requirements. In order to explore more complex relations from primary data sets, secondary data series had to be systematically transformed, and a uniform analysis process was developed for their investigation. The structure of the Systematic Transformation Analysing Method (STAM) has three main components. The first module derives input data from the original series without any essential changes. The transformation unit produces secondary data series using a moving window technique. The third component performs the examinations. STAM also allows application in several other research fields.
The bilateral trade model in a discrete setting
Flesch, J.; Schröder, M.J.W.; Vermeulen, A.J.
2013-01-01
We consider a bilateral trade model in which both players have a finite number of possible valuations. The seller's valuation and the buyer's valuation for the object are private information, but the independent beliefs about these valuations are common knowledge. In this setting, we provide a
Modelling fruit set, fruit growth and dry matter partitioning
Marcelis, L.F.M.; Heuvelink, E.
1999-01-01
This paper discusses how fruit set, fruit growth and dry matter partitioning can be simulated by models where sink strength (assimilate demand) and source strength (assimilate supply) are the key variables. Although examples are derived from experiments on fruit vegetables such as tomato, sweet pepp
Empirical validation data sets for double skin facade models
DEFF Research Database (Denmark)
Kalyanova, Olena; Jensen, Rasmus Lund; Heiselberg, Per
2008-01-01
During recent years application of double skin facades (DSF) has greatly increased. However, successful application depends heavily on reliable and validated models for simulation of the DSF performance and this in turn requires access to high quality experimental data. Three sets of accurate emp...
A preference-based multiple-source rough set model
M.A. Khan; M. Banerjee
2010-01-01
We propose a generalization of Pawlak’s rough set model for the multi-agent situation, where information from an agent can be preferred over that of another agent of the system while deciding membership of objects. Notions of lower/upper approximations are given which depend on the knowledge base of
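Pawlak's lower and upper approximations, which the multiple-source model above generalizes, can be sketched for a single agent's knowledge base as follows (the universe, partition and target set are toy illustrations):

```python
def rough_approximations(equiv_classes, target):
    """Pawlak lower/upper approximations of `target` w.r.t. a partition.

    Lower: union of classes fully contained in the target.
    Upper: union of classes that intersect the target.
    """
    lower, upper = set(), set()
    for cls in equiv_classes:
        if cls <= target:
            lower |= cls
        if cls & target:
            upper |= cls
    return lower, upper

# Toy universe {1..5} partitioned by one agent's indiscernibility relation.
classes = [{1, 2}, {3}, {4, 5}]
X = {1, 2, 3, 4}
lo, up = rough_approximations(classes, X)
# lo == {1, 2, 3}; up == {1, 2, 3, 4, 5}
```

In the multi-agent setting of the paper, membership decisions would additionally weigh one agent's partition against another's according to the preference relation.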
MODIS/COMBINED MCD43A1 BRDF-Albedo Model Parameters 16-Day L3 Global 500m
U.S. Geological Survey, Department of the Interior — The MODerate-resolution Imaging Spectroradiometer (MODIS) BRDF/Albedo Model Parameters product (MCD43A1) contains three-dimensional (3D) data sets providing users...
Shape parameter estimate for a glottal model without time position
Degottex, Gilles; Roebel, Axel; Rodet, Xavier
2009-01-01
From a recorded speech signal, we propose to estimate a shape parameter of a glottal model without estimating its time position. Indeed, the literature usually proposes to estimate the time position first (e.g., by detecting Glottal Closure Instants). The vocal-tract filter estimate is expressed as a minimum-phase envelope estimation after removing the glottal model and a standard lips radiation model. Since this filter is mainly b...
Light-Front Spin-1 Model: Parameters Dependence
Mello, Clayton S; de Melo, J P B C; Frederico, T
2015-01-01
We study the structure of the $\\rho$-meson within a light-front model with constituent quark degrees of freedom. We calculate electroweak static observables: magnetic and quadrupole moments, decay constant and charge radius. The prescription used to compute the electroweak quantities is free of zero modes, which makes the calculation implicitly covariant. We compare the results of our model with other ones found in the literature. Our model parameters give a decay constant close to the experimental one.
Cosmological Models with Variable Deceleration Parameter in Lyra's Manifold
Pradhan, A; Singh, C B
2006-01-01
FRW models of the universe have been studied in the cosmological theory based on Lyra's manifold. A new class of exact solutions has been obtained by considering a time-dependent displacement field for a variable deceleration parameter, from which three models of the universe are derived: (i) exponential, (ii) polynomial and (iii) sinusoidal form, respectively. The behaviour of these models of the universe is also discussed. Finally, some possibilities for further problems and their investigation have been pointed out.
A fuzzy set preference model for market share analysis
Turksen, I. B.; Willson, Ian A.
1992-01-01
Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share
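A minimal sketch of how a linguistic consumer preference can be represented by a fuzzy membership function on an ordinal-to-numeric scale (the attribute, price scale and breakpoints below are hypothetical, not taken from the article):

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, rising to 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic label "moderately priced" on a $0-$100 scale.
moderate = lambda price: triangular(price, 20.0, 50.0, 80.0)
# moderate(50.0) == 1.0; moderate(35.0) == 0.5
```

A fuzzy conjoint model would combine such memberships (e.g. in a weighted linear form) to score overall preference for each product profile.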
DEFF Research Database (Denmark)
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
Energy Technology Data Exchange (ETDEWEB)
Shaheen, Husam I.; Rashed, Ghamgeen I.; Cheng, S.J. [Electric Power Security and High Efficiency Lab, Department of Electrical Engineering, Huazhong University of Science and Technology, Wuhan 430074, Hubei (China)
2011-01-15
This paper presents a new approach based on the Differential Evolution (DE) technique to find the optimal placement and parameter setting of a Unified Power Flow Controller (UPFC) for enhancing power system security under single line contingencies. Firstly, we perform a contingency analysis and ranking process to determine the most severe line outage contingencies, considering line overloads and bus voltage limit violations as a Performance Index. Secondly, we apply the DE technique to find the optimal location and parameter setting of the UPFC under the determined contingency scenarios. To verify our proposed approach, we perform simulations on the IEEE 14-bus and IEEE 30-bus power systems. The results we have obtained indicate that installing a UPFC in the location optimized by DE can significantly enhance the security of the power system by eliminating or minimizing the overloaded lines and the bus voltage limit violations.
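The Differential Evolution technique used above can be sketched generically; this DE/rand/1/bin minimizer is run on a toy quadratic standing in for the contingency Performance Index, not on an actual power flow model:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, f_weight=0.7, cr=0.9,
                           gens=100, seed=0):
    """Minimal DE/rand/1/bin minimizer over box-constrained parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + f_weight * (b - c), lo, hi)  # mutation
            cross = rng.random(dim) < cr                      # binomial crossover
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            tc = f(trial)
            if tc <= cost[i]:                                 # greedy selection
                pop[i], cost[i] = trial, tc
    best = cost.argmin()
    return pop[best], cost[best]

# Toy "severity index": quadratic bowl standing in for the overload penalty.
x_best, f_best = differential_evolution(lambda x: ((x - 1.0) ** 2).sum(),
                                        bounds=[(-5, 5)] * 3)
```

In the paper's setting, `f` would evaluate overloads and voltage violations from a power flow solution for a candidate UPFC location and setting.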
A measure on the set of compact Friedmann-Lemaitre-Robertson-Walker models
Energy Technology Data Exchange (ETDEWEB)
Roukema, Boudewijn F [Torun Centre for Astronomy, Nicolaus Copernicus University, ul. Gagarina 11, 87-100 Torun (Poland); Blanloeil, Vincent [IRMA, Departement de Mathematiques, Universite de Strasbourg, 7 rue Rene Descartes, 67084 Strasbourg, Cedex (France)
2010-12-21
Compact, flat Friedmann-Lemaitre-Robertson-Walker (FLRW) models have recently regained interest as a good fit to the observed cosmic microwave background temperature fluctuations. However, it is generally thought that a globally, exactly flat FLRW model is theoretically improbable. Here, in order to obtain a probability space on the set F of compact, comoving, 3-spatial sections of FLRW models, a physically motivated hypothesis is proposed, using the density parameter Ω as a derived rather than fundamental parameter. We assume that the processes that select the 3-manifold also select a global mass-energy and a Hubble parameter. The requirement that the local and global values of Ω are equal implies a range in Ω that consists of a single real value for any 3-manifold. Thus, the obvious measure over F is the discrete measure. Hence, if the global mass-energy and Hubble parameter are a function of 3-manifold choice among compact FLRW models, then probability spaces parametrized by Ω do not, in general, give a zero probability of a flat model. Alternatively, parametrization by a spatial size parameter, the injectivity radius r_inj, suggests the Lebesgue measure. In this case, the probability space over the injectivity radius implies that flat models occur almost surely (a.s.), in the sense of probability theory, and non-flat models a.s. do not occur.
Centrifuge modeling of one-step outflow tests for unsaturated parameter estimations
Directory of Open Access Journals (Sweden)
H. Nakajima
2006-05-01
Centrifuge modeling of one-step outflow tests was carried out using a 2-m radius geotechnical centrifuge, and the cumulative outflow and transient pore pressure were measured during the tests at multiple gravity levels. Based on the scaling law of centrifuge modeling, the measurements generally showed reasonable agreement with prototype data calculated from forward simulations with input parameters determined from standard laboratory tests. The parameter optimizations were examined for three different combinations of input data sets using the test measurements. Within the gravity levels examined in this study, up to 40 g, the optimized unsaturated parameters compared well when accurate pore pressure measurements were included along with cumulative outflow as input data. The centrifuge modeling technique, with its capability to implement a variety of instrumentation under well-controlled initial and boundary conditions, shortens testing time and can provide significant information for the parameter estimation procedure.
A Uniform Set of DAV Atmospheric Parameters to Enable Differential Seismology
Fuchs, Joshua T.; Dunlap, Bart H.; Clemens, J. Christopher; Meza, Jesus; Dennihy, Erik
2017-01-01
We have observed over 130 hydrogen-atmosphere pulsating white dwarfs (DAVs) using the Goodman Spectrograph on the SOAR Telescope. This includes all known DAVs south of +10° declination as well as those observed by the K2 mission. Because it employs a single instrument, our sample allows us to carefully explore systematics in the determination of atmospheric parameters, Teff and log(g). While some systematics show changes of up to 300 K in Teff and 0.06 in log(g), the relative position of each star in the Teff-log(g) plane is more secure. These relative positions, combined with differences in pulsation spectra, will allow us to investigate relative differences in the structure and composition of over 130 DAVs through differential seismology.
Design Parameters for Evaluating Light Settings and Light Atmosphere in Hospital Wards
DEFF Research Database (Denmark)
Stidsen, Lone; Kirkegaard, Poul Henning; Fisker, Anna Marie
2010-01-01
When constructing and designing Danish hospitals for the future, patients, staff and guests are in focus. It is found important to have a starting point in healing architecture and create an environment with knowledge of users' sensory and functional needs, and to look at how hospital wards can support patients' experience or maybe even have a positive influence on the recovery process. Thus, at a general level, it is a crucial task to investigate how aspects such as the design of the environment, arts, lights and sounds can support and improve the patients' recovery rate and the satisfaction of staff and guests in the future hospital. This paper is based on Böhme's concept of atmosphere, dealing with the effect of light in experiencing atmosphere and the importance of having a holistic approach when designing a pleasurable light atmosphere. It shows important design parameters for pleasurable...
Solar Model Parameters and Direct Measurements of Solar Neutrino Fluxes
Bandyopadhyay, A; Goswami, S; Petcov, S T; Bandyopadhyay, Abhijit; Choubey, Sandhya; Goswami, Srubabati
2006-01-01
We explore a novel possibility of determining the solar model parameters, which serve as input in the calculations of the solar neutrino fluxes, by exploiting the data from direct measurements of the fluxes. More specifically, we use the rather precise value of the $^8B$ neutrino flux, $\\phi_B$ obtained from the global analysis of the solar neutrino and KamLAND data, to derive constraints on each of the solar model parameters on which $\\phi_B$ depends. We also use more precise values of $^7Be$ and $pp$ fluxes as can be obtained from future prospective data and discuss whether such measurements can help in reducing the uncertainties of one or more input parameters of the Standard Solar Model.
IP-Sat: Impact-Parameter dependent Saturation model; revised
Rezaeian, Amir H; Van de Klundert, Merijn; Venugopalan, Raju
2013-01-01
In this talk, we present a global analysis of available small-x data on inclusive DIS and exclusive diffractive processes, including the latest data from the combined HERA analysis on reduced cross sections, within the Impact-Parameter dependent Saturation (IP-Sat) model. The impact-parameter dependence of the dipole amplitude is crucial in order to have a unified description of both inclusive and exclusive diffractive processes. With the parameters of the model fixed via a fit to the high-precision reduced cross sections, we compare model predictions to data for the structure functions, the longitudinal structure function, the charm structure function, exclusive vector meson production and Deeply Virtual Compton Scattering (DVCS). Excellent agreement is obtained for the processes considered at small x in a wide range of Q^2.
QCD-inspired determination of NJL model parameters
Springer, Paul; Rechenberger, Stefan; Rennecke, Fabian
2016-01-01
The QCD phase diagram at finite temperature and density has attracted considerable interest over many decades now, not least because of its relevance for a better understanding of heavy-ion collision experiments. Models provide some insight into the QCD phase structure but usually rely on various parameters. Based on renormalization group arguments, we discuss how the parameters of QCD low-energy models can be determined from the fundamental theory of the strong interaction. We particularly focus on a determination of the temperature dependence of these parameters in this work and comment on the effect of a finite quark chemical potential. We present first results and argue that our findings can be used to improve the predictive power of future model calculations.
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, system failures have occurred in many power systems all over the world, leaving large numbers of recipients without power supply. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current base of parameters for the models of generating units, containing the models of synchronous generators, is necessary. In the paper, a method is presented for parameter estimation of a synchronous generator nonlinear model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
Modelling of intermittent microwave convective drying: parameter sensitivity
Directory of Open Access Journals (Sweden)
Zhang Zhijun
2017-06-01
The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food, and is simulated using COMSOL software. Its parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis with respect to the microwave power level shows that each of the parameters ambient temperature, effective gas diffusivity and evaporation rate constant has significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant has minimal parameter sensitivity with a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
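The ±20% one-at-a-time sensitivity screening described above can be sketched generically; the response function below is a hypothetical stand-in, not the COMSOL drying model:

```python
def sensitivity_oat(model, base_params, delta=0.2):
    """One-at-a-time sensitivity: perturb each parameter by -delta and +delta
    (e.g. ±20%) and report the relative change in the model output."""
    y0 = model(base_params)
    effects = {}
    for name, value in base_params.items():
        changes = []
        for sign in (-1.0, +1.0):
            p = dict(base_params)
            p[name] = value * (1.0 + sign * delta)
            changes.append((model(p) - y0) / y0)
        effects[name] = changes
    return effects

# Hypothetical algebraic stand-in for a drying-time response surface.
def drying_time(p):
    return 100.0 / (p["diffusivity"] * p["evap_rate"]) + p["ambient_T"] * 0.1

effects = sensitivity_oat(drying_time, {"diffusivity": 2.0, "evap_rate": 0.5,
                                        "ambient_T": 25.0})
```

Parameters whose ±20% perturbations produce large relative output changes would be flagged as sensitive, mirroring the screening reported in the abstract.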
Zayane, Chadia
2014-06-01
In this paper, we address a special case of state and parameter estimation, where the system can be put in a cascade form allowing the state components and the set of unknown parameters to be estimated separately. Inspired by the nonlinear Balloon hemodynamic model for the functional Magnetic Resonance Imaging problem, we propose a hierarchical approach. The system is divided into two subsystems in cascade. The state and input are first estimated from a noisy measured signal using an adaptive observer. The obtained input is then used to estimate the parameters of a linear system using the modulating functions method. Some numerical results are presented to illustrate the efficiency of the proposed method.
An implementation of continuous genetic algorithm in parameter estimation of predator-prey model
Windarto
2016-03-01
The genetic algorithm is an optimization method based on the principles of genetics and natural selection in living organisms. The main components of this algorithm are the chromosome (individual) population, parent selection, crossover to produce new offspring, and random mutation. In this paper, a continuous genetic algorithm was implemented to estimate parameters in a predator-prey model of Lotka-Volterra type. For simplicity, all genetic algorithm parameters (selection rate and mutation rate) are held constant throughout the run of the algorithm. It was found that by selecting a suitable mutation rate, the algorithm can estimate these parameters well.
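A sketch of the approach, assuming a simple real-coded genetic algorithm with constant selection and mutation rates and Euler integration of the Lotka-Volterra equations (all numeric settings and bounds are illustrative, not the paper's):

```python
import numpy as np

def simulate(params, x0=10.0, y0=5.0, dt=0.01, steps=1000):
    """Euler integration of the Lotka-Volterra predator-prey equations."""
    a, b, c, d = params
    x, y = x0, y0
    traj = np.empty((steps, 2))
    for k in range(steps):
        x, y = x + dt * (a * x - b * x * y), y + dt * (-c * y + d * x * y)
        traj[k] = x, y
    return traj

rng = np.random.default_rng(0)
true_params = np.array([1.0, 0.1, 1.5, 0.075])   # a, b, c, d (illustrative)
data = simulate(true_params)                      # synthetic "observations"

def fitness(p):
    err = np.mean((simulate(p) - data) ** 2)
    return -err if np.isfinite(err) else -1e18    # penalise diverged runs

lo = np.array([0.5, 0.05, 1.0, 0.03])
hi = np.array([1.5, 0.20, 2.0, 0.12])
pop = rng.uniform(lo, hi, (30, 4))
history = []
for gen in range(40):
    scores = np.array([fitness(p) for p in pop])
    history.append(scores.max())
    parents = pop[np.argsort(scores)[-10:]]       # elitist truncation selection
    children = []
    while len(children) < 20:
        pa, pb = parents[rng.choice(10, 2, replace=False)]
        w = rng.random()
        child = w * pa + (1 - w) * pb             # arithmetic crossover
        if rng.random() < 0.2:                    # constant mutation rate
            child = child + rng.normal(0.0, 0.02, 4)
        children.append(np.clip(child, 1e-4, None))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
```

Because the fittest individuals survive each generation unchanged, the best fitness in `history` is non-decreasing.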
Comparing spatial and temporal transferability of hydrological model parameters
Patil, Sopan; Stieglitz, Marc
2015-04-01
Operational use of hydrological models requires the transfer of calibrated parameters either in time (for streamflow forecasting) or space (for prediction at ungauged catchments) or both. Although the effects of spatial and temporal parameter transfer on catchment streamflow predictions have been well studied individually, a direct comparison of these approaches is much less documented. In our view, such comparison is especially pertinent in the context of increasing appeal and popularity of the "trading space for time" approaches that are proposed for assessing the hydrological implications of anthropogenic climate change. Here, we compare three different schemes of parameter transfer, viz., temporal, spatial, and spatiotemporal, using a spatially lumped hydrological model called EXP-HYDRO at 294 catchments across the continental United States. Results show that the temporal parameter transfer scheme performs best, with lowest decline in prediction performance (median decline of 4.2%) as measured using the Kling-Gupta efficiency metric. More interestingly, negligible difference in prediction performance is observed between the spatial and spatiotemporal parameter transfer schemes (median decline of 12.4% and 13.9% respectively). We further demonstrate that the superiority of temporal parameter transfer scheme is preserved even when: (1) spatial distance between donor and receiver catchments is reduced, or (2) temporal lag between calibration and validation periods is increased. Nonetheless, increase in the temporal lag between calibration and validation periods reduces the overall performance gap between the three parameter transfer schemes. Results suggest that spatiotemporal transfer of hydrological model parameters has the potential to be a viable option for climate change related hydrological studies, as envisioned in the "trading space for time" framework. However, further research is still needed to explore the relationship between spatial and temporal
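The Kling-Gupta efficiency used above to measure prediction performance can be computed directly from its standard definition (correlation, variability ratio and bias ratio):

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2), where
    r is the linear correlation, alpha the ratio of standard deviations
    and beta the ratio of means (simulated over observed)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.0, 4.0, 3.0, 2.5])
score = kling_gupta_efficiency(obs, obs)   # a perfect simulation scores 1.0
```

The percentage declines quoted in the abstract would correspond to drops in this score when calibrated parameters are transferred in space and/or time.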
Czaplik, Michael; Biener, Ingeborg; Leonhardt, Steffen; Rossaint, Rolf
2014-03-01
Since mechanical ventilation can cause harm to lung tissue, it should be as protective as possible. Whereas numerous options exist to set ventilator parameters, adequate monitoring has been lacking to date. Electrical Impedance Tomography (EIT) provides a non-invasive visualization of ventilation which is relatively easy to apply and commercially available. Although there are a number of published measures and parameters derived from EIT, it is not clear how to use EIT to improve the clinical outcome of, e.g., patients suffering from acute respiratory distress syndrome (ARDS), a severe disease with a high mortality rate. On the one hand, parameters should be easy to obtain; on the other hand, clinical algorithms should consider them to optimize ventilator settings. The so-called global inhomogeneity (GI) index is based on the fact that ARDS is characterized by an inhomogeneous injury pattern. By applying positive end-expiratory pressure (PEEP), homogeneity should be attained. In this study, ARDS was induced by a double-hit procedure in six pigs. They were randomly assigned to either the EIT or the control group. Whereas in the control group the ARDS network table was used to set the PEEP according to the current inspiratory oxygen fraction, in the EIT group the GI index was calculated during a decremental PEEP trial. PEEP was kept when the GI index was lowest. Interestingly, PEEP was significantly higher in the EIT group. Additionally, two of these animals died ahead of schedule. Obviously, not only homogeneity of ventilation distribution matters but also limitation of over-distension.
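One published formulation of the GI index (summed absolute deviations of each lung pixel's tidal impedance change from the lung median, normalised by the total tidal signal; exact definitions vary between papers, so treat this as an assumption rather than the study's implementation) can be sketched as:

```python
import numpy as np

def gi_index(tidal_image, lung_mask):
    """Global inhomogeneity index over the lung region of an EIT tidal image.

    0 means perfectly homogeneous ventilation; larger values mean more
    inhomogeneous distribution (one published formulation; details vary).
    """
    di = tidal_image[lung_mask]
    return np.abs(di - np.median(di)).sum() / di.sum()

# Toy 4x4 "tidal image" with a 2x2 lung region, perfectly homogeneous.
img = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
img[mask] = [1.0, 1.0, 1.0, 1.0]
# gi_index(img, mask) == 0.0
```

In a decremental PEEP trial as described above, the PEEP level minimising this index would be retained.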
Directory of Open Access Journals (Sweden)
Marcus Hirtl
2011-02-01
Two comprehensive data sets are used to investigate the ability of the Lagrangian particle diffusion model LASAT to simulate the dispersion of plumes emitted from tunnel jets. The data sets differ in traffic volume, tunnel geometry and temporal resolution of the measurement data. In the framework of the measurement campaign at the Ehrentalerbergtunnel in Carinthia, seven trace gas experiments with SF6 were conducted in 2001. Short-term averages (30 minutes) of concentrations were measured at 25 air quality stations in the vicinity of the tunnel portal during different meteorological conditions. In general the dispersion of the plume depends on the meteorological conditions (wind, stability) and the modification of the flow by terrain and buildings in the vicinity of the portal. The influence of the exit velocity of the tunnel jet must also be considered, as well as the difference between the exhaust temperature and the ambient air temperature, to account for buoyancy effects. The temperature increment cannot be provided directly as an input parameter to LASAT, as the tunnel jet velocity can, although it is an important parameter. With LASAT, the model user can adjust two empirical input parameters to the tunnel specifications. Relationships between these model parameters and the tunnel parameters are developed in this study. They are based on the Ehrentalerbergtunnel data set and provide reasonable input values for the model user. The simulations with LASAT show that the model is able to reproduce the location and the height of the observed peak concentrations very well. The second data set was generated from January to October 2001 at the Kaisermühlentunnel in Vienna. Measurements of NOx at four air quality stations near the portal are available. Because of uncertainties in the emission data caused by vehicle counts in only one direction, only long-term averages of concentrations are compared for this data set. The functions between tunnel and
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-02-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
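A generic Simulated Annealing minimizer of the kind described can be sketched as follows; the objective below is a toy two-parameter surface standing in for the ETAS negative log-likelihood, and all tuning constants are illustrative:

```python
import numpy as np

def simulated_annealing(f, x0, bounds, t0=1.0, cooling=0.995, steps=4000,
                        seed=0):
    """Generic SA minimizer: random perturbation, Metropolis acceptance,
    geometric cooling; tracks the best point visited."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x, fx = np.array(x0, float), f(x0)
    best, fbest = x.copy(), fx
    t = t0
    for _ in range(steps):
        cand = np.clip(x + rng.normal(0, 0.1, x.size) * (hi - lo), lo, hi)
        fc = f(cand)
        # Accept improvements always; worse moves with Metropolis probability.
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        t *= cooling
    return best, fbest

# Toy negative log-likelihood surface standing in for the ETAS objective.
nll = lambda p: (p[0] - 0.2) ** 2 + 5 * (p[1] - 1.1) ** 2
p_hat, _ = simulated_annealing(nll, x0=[0.5, 0.5],
                               bounds=[(0.0, 1.0), (0.5, 2.0)])
```

For an actual ETAS fit, `f` would evaluate the point-process negative log-likelihood of the catalog for a candidate parameter vector.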
J-A Hysteresis Model Parameters Estimation using GA
Directory of Open Access Journals (Sweden)
Bogomir Zidaric
2005-01-01
This paper presents Jiles and Atherton (J-A) hysteresis model parameter estimation for a soft magnetic composite (SMC) material. The calculation of the J-A hysteresis model parameters is based on experimental data and genetic algorithms (GA). Genetic algorithms operate in a given area of possible solutions. Finding the best solution to a problem in a wide area of possible solutions is uncertain. A new approach to the use of genetic algorithms, a genetic algorithm nested within another genetic algorithm, is proposed to overcome this uncertainty.
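A minimal real-coded GA of the kind used for such parameter fits can be sketched as follows. This is not the paper's nested-GA implementation; the toy fitness function stands in for the J-A model-vs-measurement error, and all names are hypothetical.

```python
import random

def ga_minimise(fitness, bounds, pop_size=40, gens=120, seed=1):
    """Tiny real-coded genetic algorithm: tournament selection,
    blend crossover and Gaussian mutation, with two-member elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness)
        nxt = scored[:2]                           # elitism: keep two best
        while len(nxt) < pop_size:
            # two parents, each by tournament of three
            a, b = (min(rng.sample(scored, 3), key=fitness) for _ in range(2))
            child = [(x + y) / 2 + rng.gauss(0, 0.05 * (hi - lo))
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            nxt.append([min(max(c, lo), hi)
                        for c, (lo, hi) in zip(child, bounds)])
        pop = nxt
    return min(pop, key=fitness)

# Hypothetical 2-parameter fit standing in for J-A parameter estimation:
best = ga_minimise(lambda p: (p[0] - 1.5) ** 2 + (p[1] - 0.2) ** 2,
                   [(0.0, 3.0), (0.0, 1.0)])
```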
A new estimate of the parameters in linear mixed models
Institute of Scientific and Technical Information of China (English)
王松桂; 尹素菊
2002-01-01
In linear mixed models, there are two kinds of unknown parameters: one is the fixed effect, the other is the variance component. In this paper, new estimates of these parameters, called the spectral decomposition estimates, are proposed. Some important statistical properties of the new estimates are established, in particular the linearity of the estimates of the fixed effects with many statistical optimalities. The new method is applied to two important models which are used in economics, finance, and mechanical fields. All estimates obtained have good statistical and practical meaning.
Models wagging the dog: are circuits constructed with disparate parameters?
Nowotny, Thomas; Szücs, Attila; Levi, Rafael; Selverston, Allen I
2007-08-01
In a recent article, Prinz, Bucher, and Marder (2004) used a database modeling approach to address the fundamental question of whether neural systems are built with a fixed blueprint of tightly controlled parameters or in a way in which properties can vary largely from one individual to another. Here, we examine their main conclusion, that neural circuits indeed are built with largely varying parameters, in the light of our own experimental and modeling observations. We critically discuss the experimental and theoretical evidence, including the general adequacy of database approaches for questions of this kind, and come to the conclusion that the last word on this fundamental question has not yet been spoken.
Do land parameters matter in large-scale hydrological modelling?
Gudmundsson, Lukas; Seneviratne, Sonia I.
2013-04-01
Many of the most pressing issues in large-scale hydrology are concerned with predicting hydrological variability at ungauged locations. However, current-generation hydrological and land surface models that are used for their estimation suffer from large uncertainties. These models rely on mathematical approximations of the physical system as well as on mapped values of land parameters (e.g. topography, soil types, land cover) to predict hydrological variables (e.g. evapotranspiration, soil moisture, stream flow) as a function of atmospheric forcing (e.g. precipitation, temperature, humidity). Despite considerable progress in recent years, it remains unclear whether better estimates of land parameters can improve predictions - or - if a refinement of model physics is necessary. To approach this question we suggest scrutinizing our perception of hydrological systems by confronting it with the radical assumption that hydrological variability at any location in space depends on past and present atmospheric forcing only, and not on location-specific land parameters. This so-called "Constant Land Parameter Hypothesis" (CLPH) assumes that variables like runoff can be predicted without taking location-specific factors such as topography or soil types into account. We demonstrate, using a modern statistical tool, that monthly runoff in Europe can be skilfully estimated using atmospheric forcing alone, without accounting for locally varying land parameters. The resulting runoff estimates are used to benchmark state-of-the-art process models. These are found to have inferior performance, despite their explicit process representation, which accounts for locally varying land parameters. This suggests that progress in the theory of hydrological systems is likely to yield larger improvements in model performance than more precise land parameter estimates. The results also question the current modelling paradigm that is dominated by the attempt to account for locally varying land
Maheshwari, Arpit; Dumitrescu, Mihaela Aneta; Destro, Matteo; Santarelli, Massimo
2016-03-01
Battery models are riddled with incongruous values of the parameters considered for validation. In this work, a thermally coupled electrochemical model of a pouch cell is developed, and discharge tests on a LiFePO4 pouch cell at different discharge rates are used to optimize the LiFePO4 battery model by determining parameters for which there is no consensus in the literature. A discussion of parameter determination and selection, and a comparison with literature values, is provided. The electrochemical model is a P2D model, while the thermal model considers heat transfer in 3D. It is seen that, even with no phase change considered for the LiFePO4 electrode, the model is able to simulate the discharge curves over a wide range of discharge rates with a single set of parameters, provided a dependency of the radius of the LiFePO4 electrode particles on discharge rate is introduced. The approach of using a current-dependent radius is shown to be equivalent to using a current-dependent diffusion coefficient. Both these modelling approaches are a representation of the particle size distribution in the electrode. Additionally, the model has been thermally validated, which increases the confidence level in the selection of parameter values.
Modelling spatial vagueness based on type-2 fuzzy set
Institute of Scientific and Technical Information of China (English)
DU Guo-ning; ZHU Zhong-ying
2006-01-01
The modelling and formal characterization of spatial vagueness plays an increasingly important role in the implementation of Geographic Information System (GIS). The concepts involved in spatial objects of GIS have been investigated and acknowledged as being vague and ambiguous. Models and methods which describe and handle fuzzy or vague (rather than crisp or determinate) spatial objects, will be more necessary in GIS. This paper proposes a new method for modelling spatial vagueness based on type-2 fuzzy set, which is distinguished from the traditional type-1 fuzzy methods and more suitable for describing and implementing the vague concepts and objects in GIS.
Institute of Scientific and Technical Information of China (English)
Lukas Graber; Diomar Infante; Michael Steurer; William W. Brey
2011-01-01
Careful analysis of transients in shipboard power systems is important to achieve long lifetimes of the components in future all-electric ships. In order to accomplish results with high accuracy, it is recommended to validate cable models, as they have significant influence on the amplitude and frequency spectrum of voltage transients. The authors propose comparison of model and measurement using scattering parameters. These can be easily obtained from measurement and simulation and deliver broadband information about the accuracy of the model. The measurement can be performed using a vector network analyzer. The process to extract scattering parameters from simulation models is explained in detail. Three different simulation models of a 5 kV XLPE power cable have been validated. The chosen approach delivers an efficient tool to quickly estimate the quality of a model.
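For a one-port network, the simplest scattering parameter is the reflection coefficient S11, which relates a (here purely resistive) load impedance Z to the reference impedance Z0. The sketch below shows only this textbook definition, not the cable-model extraction procedure of the paper:

```python
def s11(z, z0=50.0):
    """Reflection coefficient of a one-port with load impedance z
    against reference impedance z0: S11 = (z - z0) / (z + z0)."""
    return (z - z0) / (z + z0)

# A matched load reflects nothing; a short reflects with inverted sign;
# a very large impedance approaches total in-phase reflection.
matched = s11(50.0)     # 0.0
short = s11(0.0)        # -1.0
near_open = s11(1e12)   # close to +1.0
```

For complex impedances the same formula applies with complex arithmetic; a vector network analyzer measures S11 over frequency.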
Inhalation Exposure Input Parameters for the Biosphere Model
Energy Technology Data Exchange (ETDEWEB)
M. Wasiolek
2006-06-05
This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This
Considerations for parameter optimization and sensitivity in climate models.
Neelin, J David; Bracco, Annalisa; Luo, Hao; McWilliams, James C; Meyerson, Joyce E
2010-12-14
Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention--here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models.
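The low-order metamodel idea above, fitting a simple polynomial to objective-function samples and minimising the fit analytically, can be illustrated in one parameter dimension. This is a schematic stand-in with a hypothetical toy objective, not the paper's GCM objective functions:

```python
def quad_through(p0, p1, p2, j0, j1, j2):
    """Exact quadratic a*p^2 + b*p + c through three sampled points,
    via divided differences."""
    d1 = (j1 - j0) / (p1 - p0)
    d2 = (j2 - j1) / (p2 - p1)
    a = (d2 - d1) / (p2 - p0)
    b = d1 - a * (p0 + p1)
    c = j0 - a * p0 ** 2 - b * p0
    return a, b, c

# Toy objective sampled at three parameter values:
J = lambda p: 2.0 * (p - 0.4) ** 2 + 1.0
a, b, c = quad_through(0.0, 0.5, 1.0, J(0.0), J(0.5), J(1.0))
p_star = -b / (2 * a)   # metamodel minimiser
```

With more samples and noise one would fit by least squares instead of interpolating exactly; the principle of replacing expensive model runs by a cheap polynomial surrogate is the same.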
Uncertainty of Modal Parameters Estimated by ARMA Models
DEFF Research Database (Denmark)
Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders
In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However, the uncertainty of the parameters … is investigated by a simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore …
Development of regional parameter estimation equations for a macroscale hydrologic model
Abdulla, Fayez A.; Lettenmaier, Dennis P.
1997-10-01
A methodology for developing regional parameter estimation equations, designed for application to continental scale river basins, is described. The approach, which is applied to the two-layer Variable Infiltration Capacity (VIC-2L) land surface hydrologic model, uses a set of 34 unregulated calibration or "training" catchments (drainage areas 10²–10⁴ km²) distributed throughout the Arkansas-Red River basin of the south central U.S. For each of these catchments, parameters were determined by: a) prior estimation of two of the model parameters (saturated hydraulic conductivity and pore size distribution index) from the U.S. Soil Conservation Service State Soil Geographic (STATSGO) database; and b) estimation of the remaining seven parameters via a search procedure that minimizes the sum of squares of differences between predicted and observed streamflow. The catchment parameters were then related to 11 ancillary distributed land surface characteristics extracted from STATSGO, and 17 variables derived from station meteorological data. The seven regression equations explained from 54 to 76% of the variance of the parameters. The most frequently occurring ancillary variables were the average permeability, saturated hydraulic conductivity, and SCS hydrologic Group B (typically soils with moderately high infiltration rates) fraction derived from STATSGO, and the average temperature and standard deviation of fall precipitation. The method was tested by comparing simulations using the regional (regression equation) parameters for six unregulated catchments not in the parameter estimation set. The model performance using the regional parameters was quite good for most of the calibration and validation catchments, which were humid and semi-humid. The model did not perform as well for the smaller number of arid to semi-arid catchments.
Hamilton's Equations with Euler Parameters for Rigid Body Dynamics Modeling. Chapter 3
Shivarama, Ravishankar; Fahrenthold, Eric P.
2004-01-01
A combination of Euler parameter kinematics and Hamiltonian mechanics provides a rigid body dynamics model well suited for use in strongly nonlinear problems involving arbitrarily large rotations. The model is unconstrained, free of singularities, includes a general potential energy function and a minimum set of momentum variables, and takes an explicit state space form convenient for numerical implementation. The general formulation may be specialized to address particular applications, as illustrated in several three dimensional example problems.
A Consistent Direct Method for Estimating Parameters in Ordinary Differential Equations Models
Holte, Sarah E.
2016-01-01
Ordinary differential equations provide an attractive framework for modeling temporal dynamics in a variety of scientific settings. We show how consistent estimation for parameters in ODE models can be obtained by modifying a direct (non-iterative) least squares method similar to the direct methods originally developed by Himmelblau, Jones and Bischoff. Our method is called the bias-corrected least squares (BCLS) method since it is a modification of least squares methods known to be biased. Co...
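The basic idea of a direct (non-iterative) least-squares estimate for an ODE can be shown on the simplest case, dx/dt = -k x: approximate the derivative by finite differences and solve the resulting linear regression in closed form. This is a schematic illustration of the direct-method family only, not the bias-corrected BCLS method of the paper:

```python
import math

# Simulate a noise-free trajectory of dx/dt = -k*x
k_true, dt = 0.7, 0.05
t = [i * dt for i in range(200)]
x = [math.exp(-k_true * ti) for ti in t]

# Central-difference derivative estimates at interior points
dxdt = [(x[i + 1] - x[i - 1]) / (2 * dt) for i in range(1, len(x) - 1)]
xs = x[1:-1]

# Closed-form least-squares slope of  dxdt ≈ -k * x
k_hat = -sum(d * xi for d, xi in zip(dxdt, xs)) / sum(xi * xi for xi in xs)
```

With noisy data this plain estimator is biased (the motivation for BCLS); here it serves only to show why no iterative solver is needed.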
DEFF Research Database (Denmark)
Smets, Barth F.; Lardon, Laurent
2009-01-01
… of the outcomes to the various plasmid dynamic parameters. For our analysis, we developed a set of user-friendly MatLab® routines, which are deposited in the public domain. We hope that the availability of these routines will encourage the computationally untrained microbiologist to make use of these mathematical models. Finally, further permutations, as well as limitations of these mass action models in view of the structured complexity of most microbial systems, are addressed.
Multi-Objective Calibration of Hydrological Model Parameters Using MOSCEM-UA
Wang, Yuhui; Lei, Xiaohui; Jiang, Yunzhong; Wang, Hao
2010-05-01
In the past two decades, many evolutionary algorithms, such as NSGA-II and SCEM, have been adopted for the auto-calibration of hydrological models, some of which have shown good performance. In this article, the Multi-objective Shuffled Complex Evolution Metropolis (MOSCEM-UA) algorithm is introduced to carry out auto-calibration of a hydrological model in order to clarify the equilibrium and the uncertainty of model parameters. The development and the implementation flow chart of this advanced multi-objective algorithm are interpreted in detail. Hymod, a conceptual hydrological model based on Moore's concept, was then introduced as a lumped rainfall-runoff simulation approach with several principal parameters involved. The five important model parameters subjected to calibration include the maximum storage capacity, the spatial variability of the soil moisture capacity, the flow distribution factor between the slow and quick reservoirs, and the slow-tank and quick-tank distribution factors. In this study, a test case on the upstream area of the KuanCheng hydrometric station in the Haihe basin was used to verify the performance of the calibration. Two objectives, one for the high-flow process and one for the low-flow process, were chosen in the calibration. The results emphasize that the interrelationship between the objective functions can be described by a Pareto front, obtained with MOSCEM-UA, which can be drawn after the iteration. Furthermore, the posterior ranges of the parameters corresponding to the Pareto sets can also be drawn to identify the prediction range of the model. A balanced parameter set was then chosen to validate the model, and the result showed good predictions. Meanwhile, the correlation among parameters and their effects on model performance could also be assessed.
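Two building blocks of such a multi-objective calibration, the NSE objective and the Pareto-dominance filter, are easy to state directly. The sketch below is a generic illustration (minimisation convention for the front; the toy points are hypothetical), not MOSCEM-UA itself:

```python
def nse(obs, sim):
    """Nash–Sutcliffe efficiency: 1 is a perfect fit.
    (Assumes obs is not constant, so the denominator is nonzero.)"""
    mean = sum(obs) / len(obs)
    return 1 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / \
               sum((o - mean) ** 2 for o in obs)

def pareto_front(points):
    """Keep points not dominated in every objective (minimisation)."""
    return [p for p in points
            if not any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                       for q in points)]

# Hypothetical (high-flow error, low-flow error) pairs for candidate
# parameter sets; (3.0, 3.0) is dominated by (2.0, 2.0).
front = pareto_front([(1.0, 5.0), (2.0, 2.0), (3.0, 3.0), (5.0, 1.0)])
```

In calibration one would maximise NSE (or minimise 1 - NSE per objective) and retain the non-dominated parameter sets as the Pareto set.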
Model calibration and parameter estimation for environmental and water resource systems
Sun, Ne-Zheng
2015-01-01
This three-part book provides a comprehensive and systematic introduction to the development of useful models for complex systems. Part 1 covers the classical inverse problem for parameter estimation in both deterministic and statistical frameworks, Part 2 is dedicated to system identification, hyperparameter estimation, and model dimension reduction, and Part 3 considers how to collect data and construct reliable models for prediction and decision-making. For the first time, topics such as multiscale inversion, stochastic field parameterization, level set method, machine learning, global sensitivity analysis, data assimilation, model uncertainty quantification, robust design, and goal-oriented modeling, are systematically described and summarized in a single book from the perspective of model inversion, and elucidated with numerical examples from environmental and water resources modeling. Readers of this book will not only learn basic concepts and methods for simple parameter estimation, but also get famili...
Estimation of growth parameters using a nonlinear mixed Gompertz model.
Wang, Z; Zuidhof, M J
2004-06-01
In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
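The fixed part of such a model is the Gompertz growth curve itself. The sketch below uses one common parameterization (asymptotic weight w_max, shape b, rate c); the paper's exact parameterization and the random-effect structure may differ:

```python
import math

def gompertz(t, w_max, b, c):
    """Gompertz growth curve: BW(t) = w_max * exp(-b * exp(-c * t)).
    w_max is the mature weight; the inflection occurs at t = ln(b)/c,
    where the curve passes through w_max / e."""
    return w_max * math.exp(-b * math.exp(-c * t))

# Hypothetical parameter values for illustration only:
bw_day0 = gompertz(0.0, 2.0, 3.0, 0.05)     # small initial weight
bw_late = gompertz(1000.0, 2.0, 3.0, 0.05)  # approaches w_max = 2.0
```

A mixed model would add a bird-specific random effect to one or more of these parameters and fit by nonlinear mixed-effects estimation.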
Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series
Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik
2016-06-01
Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance a robust retrieval of biophysical parameters over the entire growing season with very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetically active radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high resolution remote sensing. 30 Landsat 8 OLI scenes were available in our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season of 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the best explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their explicitly strong exploratory and predictive character. Variable importance measures allowed for analysing the relation between the biophysical parameters and the spectral response, and the performance of the two approaches over the plant stock evolvement. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85, RMSE = 0.11; LAI: R² = 0.64, RMSE = 0.9; and chlorophyll content (SPAD): R² = 0.80, RMSE = 4.9. Our results demonstrate the great potential in using machine-learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model
A framework for scalable parameter estimation of gene circuit models using structural information
Kuwahara, Hiroyuki
2013-06-21
Motivation: Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Results: Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches to use domain-specific information may be a key to reverse engineering of complex biological systems.
Low-dimensional modeling of a driven cavity flow with two free parameters
DEFF Research Database (Denmark)
Jørgensen, Bo Hoffmann; Sørensen, Jens Nørkær; Brøns, Morten
2003-01-01
SPOD is capable of transforming data organized in different sets separately while still producing orthogonal modes. A low-dimensional model is constructed and used for analyzing bifurcations occurring in the flow in the lid-driven cavity with a rotating rod. The model allows one of the free parameters to appear in the inhomogeneous boundary conditions without the addition of any constraints. This is necessary because both the driving lid and the rotating rod are controlled simultaneously. Apparently, the results reported for this model are the first to be obtained for a low-dimensional model …
Joint Dynamics Modeling and Parameter Identification for Space Robot Applications
Directory of Open Access Journals (Sweden)
Adenilson R. da Silva
2007-01-01
Long-term mission identification and model validation for in-flight manipulator control systems in almost zero gravity with a hostile space environment are extremely important for robotic applications. In this paper, a robot joint mathematical model is developed in which several nonlinearities have been taken into account. In order to identify all the required system parameters, an integrated identification strategy is derived. This strategy makes use of a robust version of the least-squares procedure (LS) for getting the initial conditions and a general nonlinear optimization method (MCS, multilevel coordinate search) to estimate the nonlinear parameters. The approach is applied to the intelligent robot joint (IRJ) experiment that was developed at DLR for a utilization opportunity on the International Space Station (ISS). The results using real and simulated measurements have shown that the developed algorithm and strategy have remarkable features in identifying all the parameters with good accuracy.
Mathematical Modelling and Parameter Optimization of Pulsating Heat Pipes
Yang, Xin-She; Luan, Tao; Koziel, Slawomir
2014-01-01
Proper heat transfer management is important to key electronic components in microelectronic applications. Pulsating heat pipes (PHP) can be an efficient solution to such heat transfer problems. However, mathematical modelling of a PHP system is still very challenging, due to the complexity and multiphysics nature of the system. In this work, we present a simplified, two-phase heat transfer model, and our analysis shows that it can make good predictions about startup characteristics. Furthermore, by considering parameter estimation as a nonlinear constrained optimization problem, we have used the firefly algorithm to find parameter estimates efficiently. We have also demonstrated that it is possible to obtain good estimates of key parameters using very limited experimental data.
The influences of model parameters on the characteristics of memristors
Institute of Scientific and Technical Information of China (English)
Zhou Jing; Huang Da
2012-01-01
As the fourth passive circuit component, a memristor is a nonlinear resistor that can "remember" the amount of charge passing through it. The characteristic of "remembering" the charge, together with non-volatility, makes memristors great potential candidates in many fields. Nowadays, only a few groups have the ability to fabricate memristors, and most researchers study them by theoretical analysis and simulation. In this paper, we first analyse the theoretical basis and characteristics of memristors, then use a simulation program with integrated circuit emphasis (SPICE) as our tool to simulate the theoretical model of memristors and change the parameters in the model to see the influence of each parameter on the characteristics. Our work supplies researchers engaged in memristor-based circuits with advice on how to choose the proper parameters.
Energy Technology Data Exchange (ETDEWEB)
Rafique, Rashid; Kumar, Sandeep; Luo, Yiqi; Kiely, Gerard; Asrar, Ghassem R.
2015-02-01
The accurate calibration of complex biogeochemical models is essential for the robust estimation of soil greenhouse gases (GHG) as well as other environmental conditions and parameters that are used in research and policy decisions. DayCent is a popular biogeochemical model used both nationally and internationally for this purpose. Despite DayCent's popularity, its complex parameter estimation is often based on experts' knowledge, which is somewhat subjective. In this study we used the inverse modelling parameter estimation software PEST to calibrate the DayCent model based on sensitivity and identifiability analysis. Using previously published N2O and crop yield data as a basis of our calibration approach, we found that half of the 140 parameters used in this study were the primary drivers of calibration differences (i.e. the most sensitive) and the remaining parameters could not be identified given the data set and parameter ranges we used in this study. The post-calibration results showed improvement over the pre-calibration parameter set based on a decrease in residual differences (79% for N2O fluxes and 84% for crop yield) and an increase in the coefficient of determination (63% for N2O fluxes and 72% for corn yield). The results of our study suggest that future studies need to better characterize germination temperature, number of degree-days and the temperature dependency of plant growth; these processes were highly sensitive and could not be adequately constrained by the data used in our study. Furthermore, the sensitivity and identifiability analysis was helpful in providing deeper insight into important processes and associated parameters that can lead to further improvement in the calibration of the DayCent model.
Modeling of Electrocardiogram Signals Using Predefined Signature and Envelope Vector Sets
Directory of Open Access Journals (Sweden)
Yarman B Sıddık
2007-01-01
A novel method is proposed to model ECG signals by means of "predefined signature and envelope vector sets" (PSEVS). On a frame basis, an ECG signal is reconstructed by multiplying three model parameters, namely a predefined signature vector (PSV), a predefined envelope vector (PEV), and a frame-scaling coefficient (FSC). All the PSVs and PEVs are labeled and stored in their respective sets to describe the signal in the reconstruction process. In this case, an ECG signal frame is modeled by means of the members of these sets, identified by their labels, and the frame-scaling coefficient, in the least mean square sense. The proposed method is assessed through the use of percentage root-mean-square difference (PRD) and visual inspection measures. Assessment results reveal that the proposed method provides a significant data compression ratio (CR) with low-level PRD values while preserving diagnostic information. This fact significantly reduces the bandwidth of communication in telediagnosis operations.
Hertog, Maarten L. A. T. M.; Scheerlinck, Nico; Nicolaï, Bart M.
2009-01-01
When modelling the behaviour of horticultural products, which exhibit large sources of biological variation, one often runs into the issue of non-Gaussian distributed model parameters. This work presents an algorithm to reproduce such correlated non-Gaussian model parameters for use with Monte Carlo simulations. The algorithm works around the problem of non-Gaussian distributions by transforming the observed non-Gaussian probability distributions using a proposed SKN-distribution function before applying the covariance decomposition algorithm to generate Gaussian random co-varying parameter sets. The proposed SKN-distribution function is based on the standard Gaussian distribution function and can exhibit different degrees of both skewness and kurtosis. This technique is demonstrated using a case study on modelling the ripening of tomato fruit, evaluating the propagation of biological variation with time.
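The Gaussian step of such a scheme, generating correlated parameter sets by covariance decomposition, can be sketched with a Cholesky factor. The SKN transform itself is not reproduced here; the 2x2 mean and covariance below are hypothetical illustration values:

```python
import math, random

def cholesky2(cov):
    """Lower-triangular Cholesky factor L of a 2x2 covariance matrix."""
    l00 = math.sqrt(cov[0][0])
    l10 = cov[1][0] / l00
    l11 = math.sqrt(cov[1][1] - l10 ** 2)
    return [[l00, 0.0], [l10, l11]]

def sample_correlated(mean, cov, n, seed=0):
    """Draw correlated Gaussian parameter sets via x = mean + L z."""
    rng = random.Random(seed)
    L = cholesky2(cov)
    out = []
    for _ in range(n):
        z = [rng.gauss(0, 1), rng.gauss(0, 1)]
        out.append([mean[i] + L[i][0] * z[0] + L[i][1] * z[1]
                    for i in range(2)])
    return out

draws = sample_correlated([1.0, 5.0], [[0.04, 0.03], [0.03, 0.09]], 20000)

# Empirical check: the target correlation is 0.03 / sqrt(0.04*0.09) = 0.5
n = len(draws)
mx = sum(d[0] for d in draws) / n
my = sum(d[1] for d in draws) / n
cov_xy = sum((d[0] - mx) * (d[1] - my) for d in draws) / n
vx = sum((d[0] - mx) ** 2 for d in draws) / n
vy = sum((d[1] - my) ** 2 for d in draws) / n
corr = cov_xy / math.sqrt(vx * vy)
```

In the full algorithm, each Gaussian draw would then be mapped back through the inverse SKN transform to restore skewness and kurtosis.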
Conditioning rainfall-runoff model parameters to reduce prediction uncertainty in ungauged basins
Visessri, S.; McIntyre, N.; Maksimovic, C.
2012-12-01
Conditioning rainfall-runoff model parameters in ungauged catchments in Thailand presents problems common to ungauged basins, involving data availability, data quality and rainfall-runoff model suitability, all of which contribute to prediction uncertainty. This paper attempts to improve the estimation of streamflow in ungauged basins and to reduce the associated uncertainties by conditioning the prior parameter space. Thirty-five catchments from the upper Ping River basin, Thailand, are selected as a case study. The catchments have a range of attributes, e.g. catchment sizes of 20-6350 km², elevations of 632-1529 m above sea level, and annual rainfall of 846-1447 mm/year. For each catchment, three indices - rainfall-runoff elasticity, base flow index and runoff coefficient - are calculated using the observed rainfall-runoff data, and regression equations relating these indices to the catchment attributes are identified. Uncertainty in the expected indices is defined by the regression error distribution, approximated by a Gaussian model. The IHACRES model is applied for simulating streamflow. The IHACRES parameters are randomly sampled from their presumed prior parameter space. For each sampled parameter set, the streamflow and hence the three indices are modelled. The parameter sets are conditioned on the probability distributions of the regionalised indices, allowing ensemble predictions to be made. The objective function, NSE, calculated for daily and weekly time steps from the water years 1995-2000, is used to assess model performance. The ability to capture the observed streamflow and the precision of the estimate are evaluated using reliability and sharpness measures. Similarity between modelled and expected indices contributes to good objective function values. Using only the regionalised runoff coefficient to condition the model yields better NSE values than using either only the rainfall-runoff elasticity or only the base flow index. Conditioning on the runoff coefficient
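Conditioning sampled parameter sets on a regionalised index can be sketched as a simple likelihood-weighting scheme. The toy identity index model, the Gaussian regression-error assumption and the retained fraction below are illustrative assumptions, not the IHACRES setup used in the paper:

```python
import math, random

def gaussian_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2), used as the weight for each parameter set
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def condition_parameter_sets(prior_samples, index_model, mu, sigma, keep=0.1):
    # Weight each sampled parameter set by how well its modelled index
    # matches the regionalised index distribution N(mu, sigma^2),
    # then retain the top fraction as the conditioned ensemble
    weighted = [(gaussian_pdf(index_model(p), mu, sigma), p) for p in prior_samples]
    weighted.sort(key=lambda w: w[0], reverse=True)
    return [p for _, p in weighted[:max(1, int(keep * len(weighted)))]]
```

Ensemble predictions are then made by running the model only with the retained, conditioned parameter sets.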
Calculation of Thermodynamic Parameters for Freundlich and Temkin Isotherm Models
Institute of Scientific and Technical Information of China (English)
ZHANG Zengqiang; ZHANG Yiping; et al.
1999-01-01
Derivation of the Freundlich and Temkin isotherm models from the kinetic adsorption/desorption equations was carried out to calculate their thermodynamic equilibrium constants. The calculation formulae for three thermodynamic parameters of isothermal adsorption processes (the standard molar Gibbs free energy change, the standard molar enthalpy change and the standard molar entropy change) for the Freundlich and Temkin isotherm models were deduced according to the relationship between the thermodynamic equilibrium constants and the temperature.
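The relationships underlying such calculations, ΔG° = -RT ln K together with the van 't Hoff equation for ΔH° and ΔS° = (ΔH° - ΔG°)/T, can be sketched as follows; this is a generic illustration, not the paper's specific formulae for the Freundlich and Temkin constants:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def delta_G(K, T):
    # Standard molar Gibbs free energy change from the equilibrium constant
    return -R * T * math.log(K)

def delta_H(K1, T1, K2, T2):
    # van 't Hoff: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1), solved for dH
    return R * math.log(K2 / K1) / (1.0 / T1 - 1.0 / T2)

def delta_S(K, T, dH):
    # Standard molar entropy change from dG = dH - T*dS
    return (dH - delta_G(K, T)) / T
```

Given equilibrium constants measured at two temperatures, the three parameters follow directly.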
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
2002-01-01
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Parabolic problems with parameters arising in evolution model for phytoremediation
Sahmurova, Aida; Shakhmurov, Veli
2012-12-01
Over the past few decades, efforts have been made to clean up sites polluted by heavy metals such as chromium. One of the new innovative methods of eradicating metals from soil is phytoremediation, which uses plants to pull metals from the soil through their roots. This work develops a system of differential equations with parameters to model the plant-metal interaction of phytoremediation (see [1]).
Lumped-parameter Model of a Bucket Foundation
DEFF Research Database (Denmark)
Andersen, Lars; Ibsen, Lars Bo; Liingaard, Morten
2009-01-01
As an alternative to gravity footings or pile foundations, offshore wind turbines at shallow water can be placed on a bucket foundation. The present analysis concerns the development of consistent lumped-parameter models for this type of foundation. The aim is to formulate a computationally effic...
Improved parameter estimation for hydrological models using weighted object functions
Stein, A.; Zaadnoordijk, W.J.
1999-01-01
This paper discusses the sensitivity of calibration of hydrological model parameters to different objective functions. Several functions are defined with weights depending upon the hydrological background. These are compared with an objective function based upon kriging. Calibration is applied to pi
PARAMETER ESTIMATION IN LINEAR REGRESSION MODELS FOR LONGITUDINAL CONTAMINATED DATA
Institute of Scientific and Technical Information of China (English)
Qian Weimin; Li Yumei
2005-01-01
The parameter estimation and the coefficient of contamination for regression models with repeated measures are studied when the response variables are contaminated by another random variable sequence. Under suitable conditions it is proved that the estimators established in the paper are strongly consistent.
Modeling and simulation of HTS cables for scattering parameter analysis
Bang, Su Sik; Lee, Geon Seok; Kwon, Gu-Young; Lee, Yeong Ho; Chang, Seung Jin; Lee, Chun-Kwon; Sohn, Songho; Park, Kijun; Shin, Yong-June
2016-11-01
Most modeling and simulation of high-temperature superconducting (HTS) cables is inadequate for high-frequency analysis, since the simulations focus on the fundamental frequency of the power grid and therefore do not reflect transient characteristics. However, high-frequency analysis is an essential process in researching HTS cable transients for protection and diagnosis of the cables. Thus, this paper proposes a new approach to modeling and simulation of HTS cables that derives the scattering parameters (S-parameters), an effective high-frequency analysis tool, for transient wave propagation characteristics in the high-frequency range. The parameter-sweeping method is used to validate the simulation results against measured data from a network analyzer (NA). This paper also presents the effects of the cable-to-NA connector in order to minimize the error between the simulated and measured data under ambient and superconductive conditions. Based on the proposed modeling and simulation technique, S-parameters of long-distance HTS cables can be accurately derived over a wide frequency range. The results of the proposed modeling and simulation capture the characteristics of HTS cables and will contribute to their analysis.
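As a minimal illustration of what an S-parameter expresses, the reflection coefficient S11 of a terminated port follows from the impedance mismatch. This generic transmission-line relation is an illustrative sketch, not the HTS cable model of the paper:

```python
import math

def s11(z_load, z0=50.0):
    # Reflection coefficient at a port terminated with impedance z_load
    # on a line of characteristic impedance z0 (complex values allowed)
    return (z_load - z0) / (z_load + z0)

def return_loss_db(z_load, z0=50.0):
    # Return loss in dB; larger values indicate a better match
    return -20.0 * math.log10(abs(s11(z_load, z0)))
```

A perfectly matched 50 Ω load gives S11 = 0, while a 100 Ω load on a 50 Ω line reflects one third of the incident wave amplitude.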
Affective Computing Model for the Set Pair Users on Twitter
Directory of Open Access Journals (Sweden)
Chunying Zhang
2013-01-01
Full Text Available Affective computing is the computation of sentiment, of how sentiment is generated, and of the aspects that affect sentiment. However, various factors often make the sentiment expression of users uncertain. Today, Twitter, as a real-time and timely information medium, has become a good vector for users to express their own sentiment. Therefore, in view of the diversity of sentiment forms in Twitter information, this paper constructs an affective computing model based on set pair theory, starting from the differences in how tweets are constituted, to analyze and calculate user sentiment. It analyzes the positive, negative and uncertain emotions of a user for a single tweet from multiple angles (text, emoticons, picture information and more), consolidates the weights of the various parts of the emotional information, and builds a hierarchical set pair affective computing model for Twitter users, offering more useful data support for relevant departments and businesses.
Setting development goals using stochastic dynamical system models.
Ranganathan, Shyam; Nicolis, Stamatios C; Bali Swain, Ranjula; Sumpter, David J T
2017-01-01
The Millennium Development Goals (MDG) programme was an ambitious attempt to encourage a globalised solution to important but often-overlooked development problems. The programme led to wide-ranging development but it has also been criticised for unrealistic and arbitrary targets. In this paper, we show how country-specific development targets can be set using stochastic, dynamical system models built from historical data. In particular, we show that the MDG target of two-thirds reduction of child mortality from 1990 levels was infeasible for most countries, especially in sub-Saharan Africa. At the same time, the MDG targets were not ambitious enough for fast-developing countries such as Brazil and China. We suggest that model-based setting of country-specific targets is essential for the success of global development programmes such as the Sustainable Development Goals (SDG). This approach should provide clear, quantifiable targets for policymakers.
Energy Technology Data Exchange (ETDEWEB)
Soares, T. A. [ETH Hoenggerberg Zuerich, Laboratory of Physical Chemistry (Switzerland); Daura, X. [Universitat Autonoma de Barcelona, InstitucioCatalana de Recerca i Estudis Avancats and Institut de Biotecnologia i Biomedicina (Spain); Oostenbrink, C. [ETH Hoenggerberg Zuerich, Laboratory of Physical Chemistry (Switzerland); Smith, L. J. [University of Oxford, Oxford Centre for Molecular Sciences, Central Chemistry Laboratory (United Kingdom); Gunsteren, W. F. van [ETH Hoenggerberg Zuerich, Laboratory of Physical Chemistry (Switzerland)], E-mail: wfvgn@igc.phys.chem.ethz.ch
2004-12-15
The quality of molecular dynamics (MD) simulations of proteins depends critically on the biomolecular force field that is used. Such force fields are defined by force-field parameter sets, which are generally determined and improved through calibration of properties of small molecules against experimental or theoretical data. By application to large molecules such as proteins, a new force-field parameter set can be validated. We report two 3.5 ns molecular dynamics simulations of hen egg white lysozyme in water applying the widely used GROMOS force-field parameter set 43A1 and a new set 45A3. The two MD ensembles are evaluated against NMR spectroscopic data: NOE atom-atom distance bounds, ³J(HNα) and ³J(αβ) coupling constants, and ¹⁵N relaxation data. It is shown that the two sets reproduce structural properties about equally well. The 45A3 ensemble fulfills the atom-atom distance bounds derived from NMR spectroscopy slightly less well than the 43A1 ensemble, with most of the NOE distance violations in both ensembles involving residues located in loops or flexible regions of the protein. Convergence patterns are very similar in both simulations: atom-positional root-mean-square differences (RMSD) with respect to the X-ray and NMR model structures and NOE inter-proton distances converge within 1.0-1.5 ns, while backbone ³J(HNα) coupling constants and ¹H-¹⁵N order parameters take slightly longer, 1.0-2.0 ns. As expected, side-chain ³J(αβ) coupling constants and ¹H-¹⁵N order parameters do not reach full convergence for all residues in the time period simulated. This is particularly noticeable for side chains which display rare structural transitions. When comparing each simulation trajectory with an older and a newer set of experimental NOE data on lysozyme, it is found that the newer, larger, set of experimental data agrees as well with each of the
Mathematical Modelling with Fuzzy Sets of Sustainable Tourism Development
Nenad Stojanović
2011-01-01
In the first part of the study we introduce fuzzy sets that correspond to comparative indicators for measuring the sustainable development of tourism. In the second part of the study it is shown, on the basis of the model created, how one can determine the value of sustainable tourism development in protected areas based on the following established groups of indicators: to assess the economic status, to assess the impact of tourism on the social component, to assess the impact of tourism on cultural ...
Evaluation of some infiltration models and hydraulic parameters
Energy Technology Data Exchange (ETDEWEB)
Haghighi, F.; Gorji, M.; Shorafa, M.; Sarmadian, F.; Mohammadi, M. H.
2010-07-01
The evaluation of infiltration characteristics and of some parameters of infiltration models, such as sorptivity and final steady infiltration rate, is important in agriculture. The aim of this study was to evaluate some of the most common models used to estimate the final soil infiltration rate. The equality of the final infiltration rate with the saturated hydraulic conductivity (Ks) was also tested. Moreover, values of the sorptivity estimated from the Philip model were compared to estimates by selected pedotransfer functions (PTFs). The infiltration experiments used the double-ring method on soils with two different land uses in the Taleghan watershed of Tehran province, Iran, from September to October, 2007. The infiltration models of Kostiakov-Lewis, Philip (two-term) and Horton were fitted to the observed infiltration data. Some parameters of the models and the coefficient of determination (goodness of fit) were estimated using MATLAB software. The results showed that, based on comparing measured and model-estimated infiltration rates using the root mean squared error (RMSE), Horton's model gave the best prediction of final infiltration rate in the experimental area. Laboratory-measured Ks values were significantly different from, and higher than, the final infiltration rates estimated from the selected models; the estimated final infiltration rate was thus not equal to the laboratory-measured Ks values in the study area. Moreover, the sorptivity factor estimated by the Philip model was significantly different from those estimated by the selected PTFs. It is suggested that the applicability of PTFs is limited to specific, similar conditions. (Author) 37 refs.
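Fitting an infiltration model of this kind can be sketched for Horton's equation, f(t) = fc + (f0 - fc)·exp(-k·t). The grid search over k combined with a linear least-squares fit of the remaining parameters is an illustrative approach, not the MATLAB procedure used in the study:

```python
import math

def fit_horton(times, rates, k_grid):
    # Horton model: f(t) = fc + (f0 - fc) * exp(-k * t).
    # For each trial k the model is linear in (fc, f0 - fc), so those
    # are fitted by ordinary least squares and the best k is kept.
    n = len(times)
    best = None
    for k in k_grid:
        x = [math.exp(-k * t) for t in times]
        mx = sum(x) / n
        my = sum(rates) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, rates))
        slope = sxy / sxx if sxx else 0.0   # slope = f0 - fc
        fc = my - slope * mx                # intercept = final rate fc
        sse = sum((yi - (fc + slope * xi)) ** 2 for xi, yi in zip(x, rates))
        if best is None or sse < best[0]:
            best = (sse, fc + slope, fc, k)
    _, f0, fc, k = best
    return f0, fc, k
```

The fitted fc is the model's estimate of the final steady infiltration rate, which is the quantity compared against Ks in the study.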
Agricultural and Environmental Input Parameters for the Biosphere Model
Energy Technology Data Exchange (ETDEWEB)
K. Rasmuson; K. Rautenstrauch
2004-09-14
This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.
Tempel, Elmo; Kipper, Rain; Tenjes, Peeter
2012-01-01
Is it realistic to recover the 3D structure of galaxies from their images? To answer this question, we generate a sample of idealised model galaxies consisting of a disc-like component and a spheroidal component (bulge) with varying luminosities, inclination angles and structural parameters, and component density following the Einasto distribution. We simulate these galaxies as if observed in the SDSS project through ugriz filters, thus gaining a set of images of galaxies with known intrinsic properties. We remodel the galaxies with a 3D galaxy modelling procedure and compare the restored parameters to the initial ones in order to determine the uncertainties of the models. Down to the r-band limiting magnitude 18, errors of the restored integral luminosities and colour indices remain within 0.05 mag and errors of the luminosities of individual components within 0.2 mag. Accuracy of the restored bulge-to-disc ratios (B/D) is within 40% in most cases, and becomes even worse for galaxies with low B/D due to diff...
Estimating model parameters in nonautonomous chaotic systems using synchronization
Yang, Xiaoli; Xu, Wei; Sun, Zhongkui
2007-05-01
In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters, in addition to a linear feedback coupling for synchronizing systems; some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are then analytically derived to ensure precise evaluation of the unknown parameters and identical synchronization between the experimental system concerned and its corresponding receiver. Examples are presented employing a parametrically excited new 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of noise strength in simulation.
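The combination of feedback synchronization and an adaptive parameter law can be sketched on a much simpler driven system than those in the Letter. The scalar system, the gains and the Lyapunov-motivated update rule below are illustrative assumptions:

```python
import math

def estimate_parameter(a_true=2.0, k=5.0, gamma=5.0, dt=0.001, t_end=200.0):
    # Drive system:  x' = -a*x + sin(t)          (a unknown to the receiver)
    # Receiver:      y' = -a_hat*y + sin(t) + k*(x - y)
    # Adaptive law:  a_hat' = -gamma * (x - y) * y
    # With V = e^2/2 + (a_hat - a)^2/(2*gamma), e = x - y, one gets
    # V' = -(a + k) * e^2 <= 0, so e decays and, under the persistent
    # sin(t) excitation, a_hat converges to the true parameter.
    x, y, a_hat, t = 1.0, 0.0, 0.0, 0.0
    while t < t_end:  # forward Euler integration
        e = x - y
        dx = -a_true * x + math.sin(t)
        dy = -a_hat * y + math.sin(t) + k * e
        da = -gamma * e * y
        x += dt * dx
        y += dt * dy
        a_hat += dt * da
        t += dt
    return a_hat
```

The same structure scales to vector systems: one coupling term per synchronized state and one adaptive law per unknown parameter.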
Directory of Open Access Journals (Sweden)
Anthony Shannon
2006-04-01
Full Text Available This paper describes an approach to adaptive optimal control in the presence of model parameter calculation difficulties. This has wide application in a variety of biological and biomedical research and clinical problems. To illustrate the techniques, the approach is applied to the development and implementation of a practical adaptive insulin infusion algorithm for use with patients with Type 1 diabetes mellitus.
Soil-Related Input Parameters for the Biosphere Model
Energy Technology Data Exchange (ETDEWEB)
A. J. Smith
2004-09-09
This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure
Model and parameter uncertainty in IDF relationships under climate change
Chandra, Rupa; Saha, Ujjwal; Mujumdar, P. P.
2015-05-01
Quantifying the distributional behavior of extreme events is crucial in hydrologic design. Intensity-Duration-Frequency (IDF) relationships are used extensively in engineering, especially in urban hydrology, to obtain the return level of an extreme rainfall event for a specified return period and duration. Major sources of uncertainty in IDF relationships are insufficient quantity and quality of data, leading to parameter uncertainty in the distribution fitted to the data, and uncertainty resulting from the use of multiple GCMs. It is important to study these uncertainties and propagate them to the future for an accurate assessment of future return levels. The objective of this study is to quantify the uncertainties arising from the parameters of the distribution fitted to data and from the multiple GCM models using a Bayesian approach. The posterior distribution of parameters is obtained from Bayes' rule and the parameters are transformed to obtain return levels for a specified return period. A Markov chain Monte Carlo (MCMC) method using the Metropolis-Hastings algorithm is used to obtain the posterior distribution of parameters. Twenty-six CMIP5 GCMs along with four RCP scenarios are considered for studying the effects of climate change and to obtain projected IDF relationships for the case study of Bangalore city in India. GCM uncertainty due to the use of multiple GCMs is treated using the Reliability Ensemble Averaging (REA) technique along with the parameter uncertainty. Scale invariance theory is employed for obtaining short-duration return levels from daily data. It is observed that the uncertainty in short-duration rainfall return levels is high when compared to the longer durations. Further, it is observed that parameter uncertainty is large compared to the model uncertainty.
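The Metropolis-Hastings step described above can be sketched for a toy posterior: a Gaussian location parameter with known scale and a flat prior, rather than the rainfall extreme-value distribution used in the study:

```python
import math, random

def metropolis_hastings(data, sigma, n_steps, step, rng):
    # Random-walk Metropolis sampler for the posterior of a Gaussian
    # location parameter mu (known sigma, flat prior): accept a proposal
    # with probability min(1, posterior ratio), computed in log space.
    def log_like(mu):
        return -0.5 * sum(((x - mu) / sigma) ** 2 for x in data)

    mu = 0.0
    ll = log_like(mu)
    chain = []
    for _ in range(n_steps):
        prop = mu + rng.gauss(0, step)
        ll_prop = log_like(prop)
        if math.log(rng.random()) < ll_prop - ll:
            mu, ll = prop, ll_prop
        chain.append(mu)
    return chain
```

After discarding a burn-in, the retained chain approximates the posterior; return levels would then be computed by transforming each posterior draw, as in the study.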
Energy Technology Data Exchange (ETDEWEB)
Eifler, Tim; Krause, Elisabeth; Dodelson, Scott; Zentner, Andrew; Hearin, Andrew; Gnedin, Nickolay
2014-05-28
Systematic uncertainties that have been subdominant in past large-scale structure (LSS) surveys are likely to exceed statistical uncertainties of current and future LSS data sets, potentially limiting the extraction of cosmological information. Here we present a general framework (PCA marginalization) to consistently incorporate systematic effects into a likelihood analysis. This technique naturally accounts for degeneracies between nuisance parameters and can substantially reduce the dimension of the parameter space that needs to be sampled. As a practical application, we apply PCA marginalization to account for baryonic physics as an uncertainty in cosmic shear tomography. Specifically, we use CosmoLike to run simulated likelihood analyses on three independent sets of numerical simulations, each covering a wide range of baryonic scenarios differing in cooling, star formation, and feedback mechanisms. We simulate a Stage III (Dark Energy Survey) and Stage IV (Large Synoptic Survey Telescope/Euclid) survey and find a substantial bias in cosmological constraints if baryonic physics is not accounted for. We then show that PCA marginalization (employing at most 3 to 4 nuisance parameters) removes this bias. Our study demonstrates that it is possible to obtain robust, precise constraints on the dark energy equation of state even in the presence of large levels of systematic uncertainty in astrophysical processes. We conclude that the PCA marginalization technique is a powerful, general tool for addressing many of the challenges facing the precision cosmology program.
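The core of a PCA marginalization is extracting the dominant principal component of the residuals between systematic scenarios and a fiducial prediction, then projecting that mode out of a data vector. This pure-Python power-iteration sketch is an illustrative toy, not the CosmoLike implementation:

```python
def dominant_pc(residuals, iters=200):
    # Power iteration on R^T R to find the leading principal component
    # of a set of residual data vectors (the rows of `residuals`)
    dim = len(residuals[0])
    v = [1.0 / dim ** 0.5] * dim
    for _ in range(iters):
        rv = [sum(r[i] * v[i] for i in range(dim)) for r in residuals]   # R v
        w = [sum(residuals[j][i] * rv[j] for j in range(len(residuals)))
             for i in range(dim)]                                        # R^T (R v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project_out(vec, pc):
    # Remove the component of `vec` along the principal direction `pc`
    dot = sum(a * b for a, b in zip(vec, pc))
    return [a - dot * b for a, b in zip(vec, pc)]
```

In a likelihood analysis, the amplitudes along the first few such modes are treated as nuisance parameters and marginalized over, which is how a handful of parameters can absorb a wide range of baryonic scenarios.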
Application of an Evolutionary Algorithm for Parameter Optimization in a Gully Erosion Model
Energy Technology Data Exchange (ETDEWEB)
Rengers, Francis; Lunacek, Monte; Tucker, Gregory
2016-06-01
Herein we demonstrate how to use model optimization to determine a set of best-fit parameters for a landform model simulating gully incision and headcut retreat. To achieve this result we employed the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an iterative process in which samples are created based on a distribution of parameter values that evolve over time to better fit an objective function. CMA-ES efficiently finds optimal parameters, even with high-dimensional objective functions that are non-convex, multimodal, and non-separable. We ran model instances in parallel on a high-performance cluster, and from hundreds of model runs we obtained the best parameter choices. This method is far superior to brute-force search algorithms, and has great potential for many applications in earth science modeling. We found that parameters representing boundary conditions tended to converge toward an optimal single value, whereas parameters controlling geomorphic processes are defined by a range of optimal values.
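A full CMA-ES adapts a covariance matrix, step size and evolution paths; the simplified (mu, lambda) evolution strategy below illustrates only the core sample-select-recombine loop, on a toy objective rather than the gully erosion model:

```python
import random

def evolution_strategy(objective, x0, sigma, n_gen, lam, mu, rng):
    # Simplified (mu, lambda) evolution strategy: each generation, sample
    # lam candidates around the current mean, keep the best mu, recombine
    # them into a new mean, and shrink the mutation step size.
    mean = list(x0)
    for _ in range(n_gen):
        pop = []
        for _ in range(lam):
            cand = [m + sigma * rng.gauss(0, 1) for m in mean]
            pop.append((objective(cand), cand))
        pop.sort(key=lambda p: p[0])
        elite = [c for _, c in pop[:mu]]
        mean = [sum(e[i] for e in elite) / mu for i in range(len(mean))]
        sigma *= 0.95  # geometric step-size decay (CMA-ES adapts this instead)
    return mean
```

In the paper's setting, `objective` would run one landform-model simulation per candidate parameter set, which is why the generations were evaluated in parallel on a cluster.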
Taylor–Socolar Hexagonal Tilings as Model Sets
Directory of Open Access Journals (Sweden)
Jeong-Yup Lee
2012-12-01
Full Text Available The Taylor–Socolar tilings are regular hexagonal tilings of the plane but are distinguished in being comprised of hexagons of two colors in an aperiodic way. We place the Taylor–Socolar tilings into an algebraic setting, which allows one to see them directly as model sets and to understand the corresponding tiling hull along with its generic and singular parts. Although the tilings were originally obtained by matching rules and by substitution, our approach sets the tilings into the framework of a cut and project scheme and studies how the tilings relate to the corresponding internal space. The centers of the entire set of tiles of one tiling form a lattice Q in the plane. If XQ denotes the set of all Taylor–Socolar tilings with centers on Q, then XQ forms a natural hull under the standard local topology of hulls and is a dynamical system for the action of Q. The Q-adic completion Q̄ of Q is a natural factor of XQ and the natural mapping XQ → Q̄ is bijective except at a dense set of points of measure 0 in Q̄. We show that XQ consists of three LI classes under translation. Two of these LI classes are very small, namely countable Q-orbits in XQ. The other is a minimal dynamical system, which maps surjectively to Q̄ and which is variously 2 : 1, 6 : 1, and 12 : 1 at the singular points. We further develop the formula of what determines the parity of the tiles of a tiling in terms of the coordinates of its tile centers. Finally we show that the hull of the parity tilings can be identified with the hull XQ; more precisely the two hulls are mutually locally derivable.
Reduced parameter model on trajectory tracking data with applications
Institute of Scientific and Technical Information of China (English)
Wang Zhengming; Zhu Jubo
1999-01-01
The fusion of data from multiple measurement units (MMUs) tracking the same trajectory is considered. First, the reduced parameter model (RPM) of the trajectory parameters (TP), system error and random error is presented, and then the RPM on trajectory tracking data (TTD) is obtained; a weighting method for measuring elements (ME) is studied, and criteria for the selection of ME based on residual and accuracy estimation are put forward. Based on the RPM, the problem of ME selection and self-calibration of TTD is thoroughly investigated. The method clearly improves data accuracy in trajectory tracking and simultaneously provides an accuracy evaluation of the trajectory tracking system.
Parameter Estimation of the Extended Vasiček Model
Rujivan, Sanae
2010-01-01
In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function of a discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the appr...
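For the basic (non-extended) Vasiček model the transition density is exactly Gaussian, so conditional maximum likelihood reduces to an AR(1) regression; the following sketch illustrates that special case, not the expansion-based approximate MLEs of the paper:

```python
import math, random

def fit_vasicek(r, dt):
    # Exact discretisation of the basic Vasicek model
    #   dr = kappa*(theta - r) dt + sigma dW
    # is an AR(1): r_{t+1} = c + b*r_t + eps,  with b = exp(-kappa*dt)
    # and Var(eps) = sigma^2 * (1 - b^2) / (2*kappa).  OLS on the pairs
    # (r_t, r_{t+1}) therefore gives conditional ML drift estimates
    # (assumes the fitted b lies in (0, 1), i.e. mean reversion).
    x, y = r[:-1], r[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    c = my - b * mx
    kappa = -math.log(b) / dt
    theta = c / (1.0 - b)
    resid_var = sum((yi - c - b * xi) ** 2 for xi, yi in zip(x, y)) / n
    sigma = math.sqrt(resid_var * 2 * kappa / (1 - b ** 2))
    return kappa, theta, sigma
```

The extended model's time-dependent drift breaks this closed form, which is why the paper resorts to an expansion of the transition density instead.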
Prediction of mortality rates using a model with stochastic parameters
Tan, Chon Sern; Pooi, Ah Hin
2016-10-01
Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution has been applied to mortality data from the United States for the years 1933 to 2000 to forecast mortality rates for the years 2001 to 2010. In this paper, a more dynamic approach based on multivariate time series is proposed, in which the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained using the model with stochastic parameters perform better because, apart from covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.
Mark-recapture models with parameters constant in time.
Jolly, G M
1982-06-01
The Jolly-Seber method, which allows for both death and immigration, is easy to apply but often requires a larger number of parameters to be estimated than would otherwise be necessary. If (i) survival rate, phi, or (ii) probability of capture, p, or (iii) both phi and p can be assumed constant over the experimental period, models with a reduced number of parameters are desirable. In the present paper, maximum likelihood (ML) solutions for these three situations are derived from the general ML equations of Jolly [1979, in Sampling Biological Populations, R. M. Cormack, G. P. Patil and D. S. Robson (eds), 277-282]. A test is proposed for heterogeneity arising from a breakdown of assumptions in the general Jolly-Seber model. Tests for constancy of phi and p are provided. An example is given, in which these models are fitted to data from a local butterfly population.
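The structure of the constant-parameter likelihood can be sketched for a single capture history. This is a minimal illustration of the Cormack-Jolly-Seber building block (conditional on first release), not Jolly's full ML solution:

```python
def cjs_history_prob(history, phi, p):
    """Probability of a 0/1 capture history under a Cormack-Jolly-Seber-type
    model with constant survival phi and capture probability p, conditional
    on the first release."""
    first = history.index(1)
    last = len(history) - 1 - history[::-1].index(1)
    prob = 1.0
    # between first and last sightings: survive each interval and be
    # caught (p) or missed (1 - p) exactly as observed
    for t in range(first + 1, last + 1):
        prob *= phi * (p if history[t] else 1.0 - p)
    # chi: probability of never being seen again after the last sighting
    # (die, or survive and be missed, recursively)
    chi = 1.0
    for _ in range(len(history) - 1 - last):
        chi = (1.0 - phi) + phi * (1.0 - p) * chi
    return prob * chi
```

The full likelihood is the product of such terms over all animals; with phi and p constant it depends on only two parameters, which can then be maximized numerically.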
Qiu, Jueqin; Xu, Haisong
2016-09-01
In this paper, a camera response formation model is proposed to accurately predict the responses of images captured under various exposure settings. Differing from earlier works that estimated the camera's relative spectral sensitivity, our model constructs the physical spectral sensitivity curves and device-dependent parameters that convert the absolute spectral radiances of target surfaces to the camera readout responses. With this model, the camera responses to miscellaneous combinations of surfaces and illuminants can be accurately predicted. Thus, the model could serve as an "imaging simulator", greatly facilitating camera-based colorimetric and photometric research.
Singularity of Some Software Reliability Models and Parameter Estimation Method
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
According to the principle, “The failure data is the basis of software reliability analysis”, we built a software reliability expert system (SRES) by adopting artificial intelligence technology. By reasoning over the fitting results for the failure data of a software project, the SRES can recommend to users the most suitable software reliability measurement model. We believe that the SRES can well overcome the inconsistency in applications of software reliability models. We report investigation results on the singularity and parameter estimation methods of the experimental models in SRES.
Parameter Identifiability of Ship Manoeuvring Modeling Using System Identification
Directory of Open Access Journals (Sweden)
Weilin Luo
2016-01-01
Full Text Available To improve the feasibility of system identification in the prediction of ship manoeuvrability, several measures are presented to deal with parameter identifiability in the parametric modeling of ship manoeuvring motion based on system identification. Drift of the nonlinear hydrodynamic coefficients is explained from the point of view of regression analysis. To diminish the multicollinearity in a complicated manoeuvring model, the difference method and the additional-signal method are employed to reconstruct the samples. Moreover, the structure of the manoeuvring model is simplified based on correlation analysis. Manoeuvring simulation is performed to demonstrate the validity of the proposed measures.
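A toy demonstration of why differencing can diminish multicollinearity between regressors that share a common trend (synthetic data, not ship-manoeuvring samples):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
# two regressors dominated by the same trend -> nearly collinear
x1 = t + 0.05 * rng.standard_normal(t.size)
x2 = t + 0.05 * rng.standard_normal(t.size)
corr_raw = np.corrcoef(x1, x2)[0, 1]       # close to 1: severe collinearity
# "difference method": first differences strip the shared trend,
# leaving the (independent) noise increments
corr_diff = np.corrcoef(np.diff(x1), np.diff(x2))[0, 1]
```

After differencing, the sample correlation drops from nearly 1 to near 0, so a regression on the reconstructed samples is far better conditioned.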
Leonov, G. A.; Kuznetsov, N. V.; Solovyeva, E. P.
2016-02-01
Oscillations in turbogenerator sets, which consist of a synchronous generator, a hydraulic turbine, and an automatic speed regulator, are investigated. This study was motivated by the emergency that took place at the Sayano-Shushenskaya Hydroelectric Power Station in 2009. During modeling of the parameters of turbogenerator sets of the Sayano-Shushenskaya Hydroelectric Power Station, the ranges corresponding to undesired oscillation regimes were determined. These ranges agree with the results of the full-scale tests of the hydropower units of the Sayano-Shushenskaya Hydroelectric Power Station performed in 1988.
Pouillot, Régis; Lubran, Meryl B
2011-06-01
Predictive microbiology models are essential tools to model bacterial growth in quantitative microbial risk assessments. Various predictive microbiology models and sets of parameters are available: it is of interest to understand the consequences of the choice of the growth model on the risk assessment outputs. Thus, an exercise was conducted to explore the impact of the use of several published models to predict Listeria monocytogenes growth during food storage in a product that permits growth. Results underline a gap between the most studied factors in predictive microbiology modeling (lag, growth rate) and the most influential parameters on the estimated risk of listeriosis in this scenario (maximum population density, bacterial competition). The mathematical properties of an exponential dose-response model for Listeria account for the fact that the mean number of bacteria per serving, and consequently the highest achievable concentration in the product under study, has a strong influence on the estimated expected number of listeriosis cases in this context.
Directory of Open Access Journals (Sweden)
Chia-Hsuan Lee
2015-11-01
Full Text Available The goal of this study was to investigate the parameters affecting exergame performance using multi-scale entropy analysis, with the aim of informing the design of exergames for personalized balance training. Test subjects’ center of pressure (COP displacement data were recorded during exergame play to examine their balance ability at varying difficulty levels of a balance-based exergame; the results of a multi-scale entropy-based analysis were then compared to traditional COP indicators. For games involving static posture frames, variation in posture frame travel time was found to significantly affect the complexity of both the anterior-posterior (MSE-AP and medio-lateral (MSE-ML components of balancing movements. However, in games involving dynamic posture frames, only MSE-AP was found to be sensitive to the variation of parameters, namely foot-lifting speed. Findings were comparable to the COP data published by Sun et al., indicating that the use of complexity data is a feasible means of distinguishing between different parameter sets and of understanding how human design considerations must be taken into account in exergame development. Not only can this method be used as another assessment index in the future, it can also be used in the optimization of parameters within the virtual environments of exergames.
Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J
2011-09-01
When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove that a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel.
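A minimal sketch of the idea using SymPy's Gröbner basis routines, on a hypothetical pair of input-output coefficients rather than the paper's biomodel: if the observable coefficients depend on p1 and p2 only through their product, then p1*p2 and p3 are identifiable combinations while p1 alone is not, and reduction against the Gröbner basis detects this.

```python
import sympy as sp

p1, p2, p3, c1, c2 = sp.symbols('p1 p2 p3 c1 c2')
# hypothetical input-output coefficients of an unidentifiable model:
# c1 = p1*p2 and c2 = p1*p2 + p3, so p1 and p2 enter only as a product
G = sp.groebner([c1 - p1 * p2, c2 - (p1 * p2 + p3)],
                p3, p1, p2, order='lex')
# p3 - (c2 - c1) reduces to zero: p3 is an identifiable combination
_, rem_p3 = G.reduce(p3 - (c2 - c1))
# p1 alone does not: its reduction leaves a nonzero remainder
_, rem_p1 = G.reduce(p1 - c1)
```

Membership of an expression in the ideal generated by the coefficient equations (zero remainder) is exactly what makes it expressible, and hence computable, from the input-output data.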
Gelleszun, Marlene; Kreye, Phillip; Meon, Günter
2017-10-01
We introduce a lexicographic calibration strategy developed to address the imbalance between sophisticated hydrological models and the complex optimisation algorithms used to calibrate them. The criteria for the evaluation of the approach were (i) robustness and transferability of the resulting parameters, (ii) goodness-of-fit criteria in calibration and validation and (iii) time-efficiency. An order of preference was determined prior to the calibration, and the parameters were separated into groups for a stepwise calibration to reduce the search space. A comparison with the global optimisation method SCE-UA showed that only 6% of the calculation time was needed; the criteria of total volume, seasonality and hydrograph shape were successfully met for the calibration and cross-validation periods. Furthermore, the parameter sets obtained by the lexicographic calibration strategy for different time periods were much more similar to each other than the parameters obtained by SCE-UA. Besides the similarities of the parameter sets, the goodness-of-fit criteria for the cross-validation were better for the lexicographic approach, and the water balance components were also more similar. Thus, we conclude that the resulting parameters are more representative of the corresponding catchments and therefore more suitable for transfer. Time-efficient approximate methods were used to account for parameter uncertainty, confidence intervals and the stability of the solution at the optimum.
Directory of Open Access Journals (Sweden)
Ji Suk SHIM
2015-10-01
Full Text Available Objective This study investigated the marginal and internal adaptation of individual dental crowns fabricated using a CAD/CAM system (Sirona’s BlueCam), also evaluating the effect of the software version used and of the specific parameter settings on the adaptation of crowns. Material and Methods Forty digital impressions of a previously prepared master model were acquired using an intraoral scanner and divided into four groups based on the software version and on the spacer settings used. Versions 3.8 and 4.2 of the software were used, and the spacer parameter was set at either 40 μm or 80 μm. The marginal and internal fit of the crowns were measured using the replica technique, which uses a low-viscosity silicone material that simulates the thickness of the cement layer. The data were analyzed using a Friedman two-way analysis of variance (ANOVA) and paired t-tests with significance level set at p<0.05. Results The two-way ANOVA showed that the software version (p<0.05) and the spacer parameter (p<0.05) significantly affected the crown adaptation. The crowns designed with version 4.2 of the software showed a better fit than those designed with version 3.8, particularly in the axial wall and in the inner margin. The spacer parameter was represented more accurately in version 4.2 of the software than in version 3.8. In addition, the use of version 4.2 of the software combined with the spacer parameter set at 80 μm showed the least variation. On the other hand, the outer margin was not affected by the variables. Conclusion Compared to version 3.8 of the software, version 4.2 can be recommended for the fabrication of well-fitting crown restorations and for the appropriate regulation of the spacer parameter.
A NEW DEFORMABLE MODEL USING LEVEL SETS FOR SHAPE SEGMENTATION
Institute of Scientific and Technical Information of China (English)
He Ning; Zhang Peng; Lu Ke
2009-01-01
In this paper, we present a new deformable model for shape segmentation, which makes two modifications to the original level set implementation of deformable models. The modifications are motivated by difficulties that we have encountered in applying deformable models to the segmentation of medical images. The level set algorithm has some advantages over the classical snake deformable models. However, it can develop large gaps in the boundary and holes within the objects. Such boundary gaps and holes can cause inaccurate segmentation that requires manual correction. The method proposed in this paper possesses an inherent ability to detect gaps and holes within the object with a single initial contour and does not require specific initialization. The first modification replaces the edge detector with an area constraint, and the second utilizes a weighted length constraint to regularize the curve under evolution. The proposed method has been applied to both synthetic and real images with promising results.
Robust linear parameter varying induction motor control with polytopic models
Directory of Open Access Journals (Sweden)
Dalila Khamari
2013-01-01
Full Text Available This paper deals with a robust controller for an induction motor represented as a linear parameter varying (LPV) system. To this end, a linear matrix inequality (LMI) based approach and a robust Lyapunov feedback controller are combined. The novelty of this approach lies in the fact that the synthesis of the LPV feedback controller for the inner loop takes rotor resistance and mechanical speed into account as varying parameters. An LPV flux observer is also synthesized to estimate the rotor flux, providing the reference for the above regulator. The induction motor is described by a polytopic model because of its affine dependence on speed and rotor resistance, whose values can be estimated online during system operation. Simulation results are presented to confirm the effectiveness of the proposed approach, where robust stability and high performance have been achieved over the entire operating range of the induction motor.
Minimum information modelling of structural systems with uncertain parameters
Hyland, D. C.
1983-01-01
Work is reviewed wherein the design of active structural control is formulated as the mean-square optimal control of a linear mechanical system with stochastic parameters. In practice, a complete probabilistic description of model parameters can never be provided by empirical determinations, and a suitable design approach must accept very limited a priori data on parameter statistics. In consequence, the mean-square optimization problem is formulated using a complete probability assignment which is made to be consistent with available data but maximally unconstrained otherwise through use of a maximum entropy principle. The ramifications of this approach for both robustness and large dimensionality are illustrated by consideration of the full-state feedback regulation problem.
Parameter estimation in a spatial unit root autoregressive model
Baran, Sándor
2011-01-01
Spatial autoregressive model $X_{k,\\ell}=\\alpha X_{k-1,\\ell}+\\beta X_{k,\\ell-1}+\\gamma X_{k-1,\\ell-1}+\\epsilon_{k,\\ell}$ is investigated in the unit root case, that is when the parameters are on the boundary of the domain of stability that forms a tetrahedron with vertices $(1,1,-1), \\ (1,-1,1),\\ (-1,1,1)$ and $(-1,-1,-1)$. It is shown that the limiting distribution of the least squares estimator of the parameters is normal and the rate of convergence is $n$ when the parameters are in the faces or on the edges of the tetrahedron, while on the vertices the rate is $n^{3/2}$.
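In the stable interior of the tetrahedron, the least squares estimator of the model can be sketched on simulated lattice data (parameter values hypothetical); each interior value is regressed on its three lagged neighbours:

```python
import numpy as np

def simulate_spatial_ar(alpha, beta, gamma, n, rng):
    """X[k,l] = alpha*X[k-1,l] + beta*X[k,l-1] + gamma*X[k-1,l-1] + eps[k,l],
    generated over an (n+1) x (n+1) lattice with zero boundary values."""
    X = np.zeros((n + 1, n + 1))
    eps = rng.standard_normal((n + 1, n + 1))
    for k in range(1, n + 1):
        for l in range(1, n + 1):
            X[k, l] = (alpha * X[k - 1, l] + beta * X[k, l - 1]
                       + gamma * X[k - 1, l - 1] + eps[k, l])
    return X

rng = np.random.default_rng(2)
X = simulate_spatial_ar(0.4, 0.3, -0.2, 120, rng)
# least squares regression of each interior value on its three neighbours
y = X[1:, 1:].ravel()
D = np.column_stack([X[:-1, 1:].ravel(),
                     X[1:, :-1].ravel(),
                     X[:-1, :-1].ravel()])
est, *_ = np.linalg.lstsq(D, y, rcond=None)
```

The paper's result concerns the unit root case on the boundary of the stability domain, where the same estimator converges at the faster rates n and n^{3/2}; this sketch only illustrates the estimator itself in the stable regime.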
GAUSSIAN MIXTURE MODEL BASED LEVEL SET TECHNIQUE FOR AUTOMATED SEGMENTATION OF CARDIAC MR IMAGES
Directory of Open Access Journals (Sweden)
G. Dharanibai,
2011-04-01
Full Text Available In this paper we propose a Gaussian Mixture Model (GMM) integrated level set method for automated segmentation of the left ventricle (LV), right ventricle (RV) and myocardium from short-axis views of cardiac magnetic resonance images. By fitting a GMM to the image histogram, the global pixel intensity characteristics of the blood pool, myocardium and background are estimated. The GMM provides an initial segmentation, and the segmentation solution is regularized using a level set. Parameters for controlling the level set evolution are automatically estimated from the Bayesian inference classification of pixels. We propose a new speed function that combines edge and region information and stops the evolving level set at the myocardial boundary. Segmentation efficacy is analyzed qualitatively via visual inspection. Results show the improved performance of our proposed speed function over the conventional Bayesian driven adaptive speed function in automatic segmentation of the myocardium.
Nonlinear functional response parameter estimation in a stochastic predator-prey model.
Gilioli, Gianni; Pasquali, Sara; Ruggeri, Fabrizio
2012-01-01
Parameter estimation for the functional response of predator-prey systems is a critical methodological problem in population ecology. In this paper we consider a stochastic predator-prey system with non-linear Ivlev functional response and propose a method for model parameter estimation based on time series of field data. We tackle the problem of parameter estimation using a Bayesian approach relying on a Markov Chain Monte Carlo algorithm. The efficiency of the method is tested on a set of simulated data. Then, the method is applied to a predator-prey system of importance for Integrated Pest Management and biological control, the pest mite Tetranychus urticae and the predatory mite Phytoseiulus persimilis. The model is estimated on a dataset obtained from a field survey. Finally, the estimated model is used to forecast predator-prey dynamics in similar fields, with slightly different initial conditions.
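A minimal random-walk Metropolis sketch for fitting the Ivlev response f(N) = a(1 − e^{−bN}) to synthetic (not field) data, assuming Gaussian observation noise of known scale and flat priors on the positive quadrant; all values are hypothetical:

```python
import numpy as np

def ivlev(N, a, b):
    """Ivlev functional response: predation rate saturating in prey density."""
    return a * (1.0 - np.exp(-b * N))

rng = np.random.default_rng(3)
N_obs = np.linspace(0.5, 20.0, 40)
y_obs = ivlev(N_obs, 1.0, 0.5) + 0.05 * rng.standard_normal(N_obs.size)

def log_post(theta):
    """Log-posterior: Gaussian likelihood, flat prior on a, b > 0."""
    a, b = theta
    if a <= 0.0 or b <= 0.0:
        return -np.inf
    return -0.5 * np.sum((y_obs - ivlev(N_obs, a, b)) ** 2) / 0.05 ** 2

theta = np.array([0.5, 1.0])          # deliberately poor starting point
lp = log_post(theta)
samples = []
for i in range(6000):                 # random-walk Metropolis
    prop = theta + 0.03 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 1000:                     # discard burn-in
        samples.append(theta.copy())
a_hat, b_hat = np.asarray(samples).mean(axis=0)
```

The paper's setting is harder (the likelihood comes from a stochastic differential equation observed as a time series), but the accept/reject mechanics of the MCMC step are the same.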
Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates
Moore, Christopher J
2014-01-01
Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this work a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalised over, using a prior distribution constructed by using Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform...
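A bare-bones sketch of the Gaussian process regression step, interpolating a scalar "model difference" over a one-dimensional parameter; the sine function is a stand-in for real waveform differences, and the kernel and its hyperparameters are assumptions:

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential (RBF) covariance between 1-D input sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# small training set of "accurate model differences" (toy stand-in function)
x_train = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y_train = np.sin(2.0 * np.pi * x_train)
x_test = np.array([0.25, 0.6])

K = rbf(x_train, x_train) + 1e-8 * np.eye(x_train.size)  # jitter for stability
Ks = rbf(x_test, x_train)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
mean = Ks @ alpha                                        # posterior mean
v = np.linalg.solve(L, Ks.T)
cov = rbf(x_test, x_test) - v.T @ v                      # posterior covariance
```

The posterior mean and covariance at a new parameter point are exactly what a Gaussian prior on the waveform difference needs, which is what makes the analytic marginalisation tractable.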
Global parameter estimation of the Cochlodinium polykrikoides model using bioassay data
Institute of Scientific and Technical Information of China (English)
CHO Hong-Yeon; PARK Kwang-Soon; KIM Sung
2016-01-01
Cochlodinium polykrikoides is a notoriously harmful algal species that inflicts severe damage on the aquacultures of the coastal seas of Korea and Japan. Information on its expected movement tracks and boundaries of influence is very useful and important for the effective establishment of a mitigation plan. In general, this information is supplied by a red-tide (a.k.a. algal bloom) model. The performance of the model is highly dependent on the accuracy of its parameters, which are the coefficients of functions approximating the biological growth and loss patterns of C. polykrikoides. These parameters have been estimated using bioassay data composed of pairs of growth-limiting factors and net growth rate values. In the case of C. polykrikoides, the estimated parameters differ depending on the dataset used, because ample bioassay data are available compared with other algal species. Parameters estimated from one specific dataset can be viewed as locally optimized, because they are adjusted only to that dataset; when another dataset is used, the estimation error might be considerable. In this study, the parameters are estimated using all available datasets rather than any one specific dataset, and can thus be considered globally optimized. The cost function for the optimization is defined as the integrated mean squared estimation error, i.e., the difference between the experimental and estimated rates. Based on quantitative error analysis, the root-mean-squared errors of the global parameters are approximately 25%–50% smaller than those of the local parameters. In addition, bias is removed completely in the case of the globally estimated parameters. The parameter sets can be used as the reference default values of a red-tide model because they are optimal and representative. However, additional tuning of the parameters using in-situ monitoring data is highly recommended. As opposed to the bioassay
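The global estimation idea can be sketched as pooling all datasets into a single least-squares cost; the bell-shaped growth-rate function and every value below are hypothetical stand-ins for the paper's bioassay data:

```python
import numpy as np
from scipy.optimize import curve_fit

def net_growth(T, mu_max, T_opt, width):
    """Hypothetical bell-shaped net growth rate versus temperature."""
    return mu_max * np.exp(-((T - T_opt) / width) ** 2)

rng = np.random.default_rng(4)
true = (0.6, 24.0, 6.0)
# two independent "bioassay" datasets over different temperature ranges
T1 = np.linspace(10.0, 30.0, 15)
T2 = np.linspace(15.0, 28.0, 12)
y1 = net_growth(T1, *true) + 0.02 * rng.standard_normal(T1.size)
y2 = net_growth(T2, *true) + 0.02 * rng.standard_normal(T2.size)
# global estimate: pool every dataset into a single least-squares cost,
# instead of fitting each dataset (locally) on its own
T_all = np.concatenate([T1, T2])
y_all = np.concatenate([y1, y2])
popt, _ = curve_fit(net_growth, T_all, y_all, p0=(0.5, 20.0, 5.0))
```

Fitting the pooled cost is what makes the resulting parameters representative of all datasets at once, at the price of a slightly worse fit to any single one.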
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian;
2011-01-01
of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set...
Determining the Walker exponent and developing a modified Smith-Watson-Topper parameter model
Energy Technology Data Exchange (ETDEWEB)
Lv, Zhiqiang; Huang, Hong Zhong; Wang, Hai Kun; Gao, Huiying; Zuo, Fang Jun [University of Electronic Science and Technology of China, Chengdu (China)
2016-03-15
Mean stress effects significantly influence the fatigue life of components. In general, tensile mean stresses are known to reduce the fatigue life of components, whereas compressive mean stresses are known to increase it. To date, various methods that account for mean stress effects have been studied. In this research, considering the high accuracy of mean stress correction and the difficulty in obtaining the material parameter of the Walker method, a practical method is proposed to describe the material parameter of this method. The test data of various materials are then used to verify the proposed practical method. Furthermore, by applying the Walker material parameter and the Smith-Watson-Topper (SWT) parameter, a modified strain-life model is developed to consider the mean stress sensitivity of materials. In addition, three sets of experimental fatigue data from super alloy GH4133, aluminum alloy 7075-T651, and carbon steel are used to estimate the accuracy of the proposed model. A comparison is also made between the SWT parameter method and the proposed strain-life model. The proposed strain-life model provides more accurate life prediction results than the SWT parameter method.
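A small numeric sketch of the relation between the Walker equivalent stress amplitude and the SWT parameter, assuming purely elastic loading (all stress values hypothetical): for elastic cycles the SWT parameter sigma_max * eps_a corresponds to an equivalent amplitude sqrt(sigma_max * sigma_a), which is the Walker expression with the material exponent gamma fixed at 0.5.

```python
import math

def walker_equiv_amplitude(sigma_max, sigma_a, gamma):
    """Walker equivalent fully reversed stress amplitude:
    sigma_ar = sigma_max**(1 - gamma) * sigma_a**gamma."""
    return sigma_max ** (1.0 - gamma) * sigma_a ** gamma

def swt_equiv_amplitude(sigma_max, sigma_a):
    """SWT equivalent amplitude for elastic loading,
    sqrt(sigma_max * sigma_a): the Walker special case gamma = 0.5."""
    return math.sqrt(sigma_max * sigma_a)

# hypothetical load cycle (MPa)
sigma_a, sigma_m = 200.0, 100.0
sigma_max = sigma_a + sigma_m
walker_05 = walker_equiv_amplitude(sigma_max, sigma_a, 0.5)
swt = swt_equiv_amplitude(sigma_max, sigma_a)
```

Letting gamma vary per material is what gives the Walker method (and the modified model built on it) its extra flexibility: gamma = 1 ignores mean stress entirely, while smaller gamma penalizes tensile mean stress more heavily.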
Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation.
Schranz, C; Becher, T; Schädler, D; Weiler, N; Möller, K
2014-03-01
Mechanical ventilation carries the risk of ventilator-induced lung injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient's properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI), inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV) and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimization of inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results correlate highly with the physicians' ventilation settings, with r = 0.975 for the inspiration pressure and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.
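Under a first-order (R, C) model with negligible PEEP, the tidal volume delivered in PCV follows VT = C * pI * (1 − e^{−tI/RC}), which can be inverted for the inspiratory pressure; the sketch below uses hypothetical patient values and is an illustration of this relation, not the paper's algorithm:

```python
import math

def required_pi(vt_target, R, C, t_i):
    """Inspiratory pressure above PEEP delivering tidal volume vt_target,
    inverted from VT = C * pI * (1 - exp(-t_i / (R*C))) (first-order model,
    negligible PEEP assumed)."""
    return vt_target / (C * (1.0 - math.exp(-t_i / (R * C))))

def min_expiration_time(R, C, fraction=0.95):
    """Expiration time needed for the given fraction of passive emptying,
    to limit intrinsic PEEP build-up."""
    return -R * C * math.log(1.0 - fraction)

# hypothetical patient: R in cmH2O/(L/s), C in L/cmH2O (time constant 0.5 s)
R, C = 10.0, 0.05
pI = required_pi(0.5, R, C, t_i=1.0)   # 0.5 L tidal volume, 1 s inspiration
tE = min_expiration_time(R, C)         # adequate expiration time
```

Longer tI lowers the required pI but shortens tE at a fixed respiratory rate, which is exactly the patient-specific trade-off the paper's algorithm visualizes.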
Soong, David T.; Over, Thomas M.
2015-01-01
The Lake Michigan Diversion Accounting (LMDA) system has been developed by the U.S. Army Corps of Engineers, Chicago District (USACE-Chicago) and the State of Illinois as a part of the interstate Great Lakes water regulatory program. The diverted Lake Michigan watershed is a 673-square-mile watershed that is comprised of the Chicago River and Calumet River watersheds. They originally drained into Lake Michigan, but now flow to the Mississippi River watershed via three canals constructed in the Chicago area in the early twentieth century. Approximately 393 square miles of the diverted watershed is ungaged, and the runoff from the ungaged portion of the diverted watershed has been estimated by the USACE-Chicago using the Hydrological Simulation Program-FORTRAN (HSPF) program. The accuracy of simulated runoff depends on the accuracy of the parameter set used in the HSPF program. Nine parameter sets comprised of the North Branch, Little Calumet, Des Plaines, Hickory Creek, CSSC, NIPC, 1999, CTE, and 2008 have been developed at different time periods and used by the USACE-Chicago. In this study, the U.S. Geological Survey and the USACE-Chicago collaboratively analyzed the parameter sets using nine gaged watersheds in or adjacent to the diverted watershed to assess the predictive accuracies of selected parameter sets. Six of the parameter sets, comprising North Branch, Hickory Creek, NIPC, 1999, CTE, and 2008, were applied to the nine gaged watersheds for evaluating their simulation accuracy from water years 1996 to 2011. The nine gaged watersheds were modeled by using the three LMDA land-cover types (grass, forest, and hydraulically connected imperviousness) based on the 2006 National Land Cover Database, and the latest meteorological and precipitation data consistent with the current (2014) LMDA modeling framework.
Recursive modular modelling methodology for lumped-parameter dynamic systems.
Orsino, Renato Maia Matarazzo
2017-08-01
This paper proposes a novel approach to the modelling of lumped-parameter dynamic systems, based on representing them by hierarchies of mathematical models of increasing complexity instead of a single (complex) model. Exploring the multilevel modularity that these systems typically exhibit, a general recursive modelling methodology is proposed, in order to conciliate the use of the already existing modelling techniques. The general algorithm is based on a fundamental theorem that states the conditions for computing projection operators recursively. Three procedures for these computations are discussed: orthonormalization, use of orthogonal complements and use of generalized inverses. The novel methodology is also applied for the development of a recursive algorithm based on the Udwadia-Kalaba equation, which proves to be identical to the one of a Kalman filter for estimating the state of a static process, given a sequence of noiseless measurements representing the constraints that must be satisfied by the system.
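The Udwadia-Kalaba equation itself can be sketched directly with NumPy (a toy constrained-particle example, not the paper's recursive modular algorithm): given mass matrix M, applied forces Q and constraints A qdd = b, the constrained accelerations follow in closed form.

```python
import numpy as np

def udwadia_kalaba_acc(M, Q, A, b):
    """Constrained accelerations from the Udwadia-Kalaba equation:
    qdd = a + M^(-1/2) (A M^(-1/2))^+ (b - A a), with a = M^(-1) Q."""
    a = np.linalg.solve(M, Q)
    w, V = np.linalg.eigh(M)                  # symmetric M^(-1/2) for SPD M
    M_inv_half = V @ np.diag(w ** -0.5) @ V.T
    correction = M_inv_half @ np.linalg.pinv(A @ M_inv_half) @ (b - A @ a)
    return a + correction

# toy example: unit-mass particle in the plane, constrained to x'' = y''
M = np.eye(2)
Q = np.array([1.0, 0.0])                      # force along x only
A = np.array([[1.0, -1.0]])
b = np.array([0.0])
acc = udwadia_kalaba_acc(M, Q, A, b)          # force split equally: x'' = y''
```

The Moore-Penrose pseudoinverse is what makes the formula work for redundant or rank-deficient constraint sets, and it is the structural link to the Kalman filter update noted in the abstract.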
Determination and Validation of Parameters for Riedel-Hiermaier-Thoma Concrete Model
Directory of Open Access Journals (Sweden)
Yu-Qing Ding
2013-09-01
Full Text Available Numerical modelling of complex physical processes such as concrete structures subjected to high-impulsive loads relies on suitable material models appropriate for impact and explosion problems. One of the extensively used concrete material models, the RHT model, contains all essential features of concrete materials subjected to high dynamic loading. However, the application of the RHT model requires a set of material properties and model parameters without which reliable results cannot be expected. The present paper provides a detailed evaluation of the RHT model and proposes a method of determining the model parameters for C40 concrete. Furthermore, the dynamic compressive and tensile strength functions of the model formulation are modified to enhance the performance of the model as implemented in the hydrocode AUTODYN. The performance of the determined parameters of the modified RHT model is demonstrated by comparison to available experimental data, and further verified via simulations of physical experiments of concrete penetration by steel projectiles. The results of the numerical analyses are found to closely match the penetration depth and the crater size in the front surface of the concrete targets. Defence Science Journal, 2013, 63(5), pp. 524-530, DOI: http://dx.doi.org/10.14429/dsj.63.3866
Zhou, Liming; Yang, Yuxing; Yuan, Shiying
2006-02-01
A new algorithm, the coordinate-transform iterative optimization method based on a least-squares curve fitting model, is presented. This algorithm is used to extract bio-impedance model parameters. It is superior to other methods: its convergence is faster and its calculation precision is higher. The objective of extracting the model parameters, such as Ri, Re, Cm and alpha, has been achieved rapidly and accurately. With the aim of lowering power consumption, decreasing cost and improving the price-to-performance ratio, a practical bio-impedance measurement system with double CPUs has been built. The preliminary results suggest that the intracellular resistance Ri increased considerably with increasing workload during sitting, reflecting the ischemic change of the lower limbs.
Energy Technology Data Exchange (ETDEWEB)
Kumar, Prashant, E-mail: prashantkumar@csio.res.in [CSIR-Central Scientific Instruments Organisation, Chandigarh 160030 (India); Academy of Scientific and Innovative Research—CSIO, Chandigarh 160030 (India); Bansod, Baban K.S.; Debnath, Sanjit K. [CSIR-Central Scientific Instruments Organisation, Chandigarh 160030 (India); Academy of Scientific and Innovative Research—CSIO, Chandigarh 160030 (India); Thakur, Praveen Kumar [Indian Institute of Remote Sensing (ISRO), Dehradun 248001 (India); Ghanshyam, C. [CSIR-Central Scientific Instruments Organisation, Chandigarh 160030 (India); Academy of Scientific and Innovative Research—CSIO, Chandigarh 160030 (India)
2015-02-15
Groundwater vulnerability maps are useful for decision making in land use planning and water resource management. This paper reviews the various groundwater vulnerability assessment models developed across the world. Each model has been evaluated in terms of its pros and cons and the environmental conditions of its application. The paper further discusses the techniques used to validate the vulnerability maps generated by the various models. Implicit challenges associated with the development of groundwater vulnerability assessment models have also been identified, with scientific consideration of parameter relations and their selection. - Highlights: • Various index-based groundwater vulnerability assessment models are discussed. • A comparative analysis of the models and their applicability in different hydrogeological settings is presented. • Research problems underlying the vulnerability assessment models are also reported.
A Bayesian Approach for Parameter Estimation and Prediction using a Computationally Intensive Model
Higdon, Dave; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M
2014-01-01
Bayesian methods have been very successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where $\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \eta(\theta) + \epsilon$, where $\epsilon$ accounts for measurement error and possibly other error sources. When non-linearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and non-standard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. While quite generally applicable, MCMC requires thousands, or even millions, of evaluations of the physics model $\eta(\cdot)$. This is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we pr...
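The statistical model $y = \eta(\theta) + \epsilon$ can be sampled with plain random-walk Metropolis. The sketch below uses a cheap synthetic $\eta$ (an exponential decay; an assumption for illustration, not the paper's physics model) so it runs quickly, but note that every proposal still costs one forward-model evaluation, which is exactly why MCMC becomes a bottleneck for slow physics codes:

```python
import numpy as np

rng = np.random.default_rng(0)

def eta(theta, x):
    # hypothetical stand-in for an expensive physics model:
    # amplitude theta[1], decay rate theta[0]
    return theta[1] * np.exp(-theta[0] * x)

x = np.linspace(0.0, 3.0, 40)
theta_true = np.array([1.5, 2.0])
sigma = 0.05
y = eta(theta_true, x) + sigma * rng.normal(size=x.size)   # measurements

def log_post(theta):
    # Gaussian likelihood with a flat prior on a plausible box
    if not (0.0 < theta[0] < 10.0 and 0.0 < theta[1] < 10.0):
        return -np.inf
    r = y - eta(theta, x)           # one "model evaluation" per call
    return -0.5 * np.dot(r, r) / sigma**2

# random-walk Metropolis
theta = np.array([1.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + 0.05 * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
post = np.array(chain[5000:])                  # discard burn-in
theta_mean = post.mean(axis=0)
```

Twenty thousand evaluations of a toy function are instantaneous; the same count against a model that takes hours per run is what motivates the surrogate approach the abstract alludes to.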
Bernard, O; Hadj-Sadok, Z; Dochain, D; Genovesi, A; Steyer, J P
2001-11-20
This paper deals with the development and the parameter identification of an anaerobic digestion process model. A two-step (acidogenesis-methanization) mass-balance model has been considered. The model incorporates electrochemical equilibria in order to include the alkalinity, which plays a central role in the related monitoring and control strategy of a treatment plant. The identification is based on a set of dynamical experiments designed to cover a wide spectrum of operating conditions that are likely to take place in the practical operation of the plant. A step-by-step identification procedure to estimate the model parameters is presented. The results of 70 days of experiments in a 1-m(3) fermenter are then used to validate the model.
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into it as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
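A minimal sketch of the general idea follows: simulated annealing over a single parameter, with a crude Monte Carlo estimate of a property's satisfaction probability standing in for sequential hypothesis testing and statistical model checking. The toy drift-plus-noise dynamics and the 50% target are assumptions for illustration, not the paper's glucose-insulin model:

```python
import math
import random

def simulate(drift, rng, steps=50):
    # toy stochastic trajectory standing in for a biochemical simulator
    x = 0.0
    for _ in range(steps):
        x += drift + rng.gauss(0.0, 1.0)
    return x

def satisfaction_prob(drift, rng, runs=300, threshold=10.0):
    # Monte Carlo estimate of P(final state > threshold); a crude stand-in
    # for sequential hypothesis testing / statistical model checking
    hits = sum(simulate(drift, rng) > threshold for _ in range(runs))
    return hits / runs

def anneal(target=0.5, iters=200, seed=1):
    # simulated annealing on the drift parameter so that the property
    # "final state exceeds the threshold" holds with the target probability
    rng = random.Random(seed)
    drift = 0.0
    cost = abs(satisfaction_prob(drift, rng) - target)
    best_drift, best_cost = drift, cost
    temp = 1.0
    for _ in range(iters):
        cand = drift + rng.gauss(0.0, 0.3) * max(temp, 0.05)
        c = abs(satisfaction_prob(cand, rng) - target)
        # always accept improvements; accept worse moves with Boltzmann prob.
        if c < cost or rng.random() < math.exp((cost - c) / temp):
            drift, cost = cand, c
        if c < best_cost:               # keep the best visited state
            best_drift, best_cost = cand, c
        temp *= 0.98
    return best_drift, best_cost

best_drift, best_cost = anneal()
```

Because each cost evaluation is itself a noisy Monte Carlo estimate, keeping the best visited state (rather than only the final one) is a common robustness measure.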
Propagation channel characterization, parameter estimation, and modeling for wireless communications
Yin, Xuefeng
2016-01-01
Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...
Auxiliary Parameter MCMC for Exponential Random Graph Models
Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro
2016-11-01
Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the development of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
Determining avalanche modelling input parameters using terrestrial laser scanning technology
2013-01-01
In dynamic avalanche modelling, data on the volumes and areas of the snow released, mobilized and deposited are key input parameters, as is the fracture height. The fracture height can sometimes be measured in the field, but the starting zone is often inaccessible due to difficult or dangerous terrain and avalanche hazards. Determining the areas and volumes of snow involved in an avalanche is more complex. Such calculations require high-resolution spa...
Numerical model for thermal parameters in optical materials
Sato, Yoichi; Taira, Takunori
2016-04-01
Thermal parameters of optical materials, such as thermal conductivity, thermal expansion and the temperature coefficient of refractive index, play a decisive role in the thermal design of laser cavities. Accurate temperature-dependent values are therefore quite important for developing high-intensity laser oscillators, in which optical materials generate excessive heat across the mode volumes of both the lasing output and the optical pumping. We have already proposed a novel model of thermal conductivity in various optical materials. Thermal conductivity is the product of the isovolumic specific heat and the thermal diffusivity, and independent modeling of these two quantities is required to clarify their physical meaning. Our numerical model for thermal conductivity requires one material parameter for the specific heat and two parameters for the thermal diffusivity of each optical material. In this work we report thermal conductivities of various optical materials such as Y3Al5O12 (YAG), YVO4 (YVO), GdVO4 (GVO), stoichiometric and congruent LiTaO3, synthetic quartz, YAG ceramics and Y2O3 ceramics. The dependence on Nd3+ doping in the laser gain media YAG, YVO and GVO is also studied; this dependence can be described by only three additional parameters. The temperature dependence of the thermal expansion and of the temperature coefficient of refractive index for YAG, YVO and GVO is also included for convenience. We believe our numerical model is quite useful not only for thermal analysis in laser cavities and optical waveguides but also for evaluating the physical properties of various transparent materials.
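The product relation underlying the model (conductivity = volumetric specific heat × thermal diffusivity) is easy to make concrete. The input values below are illustrative round numbers for undoped YAG near room temperature, assumed for the sketch rather than taken from the paper:

```python
def thermal_conductivity(rho, cp, diffusivity):
    # k = (volumetric specific heat, rho * cp) x (thermal diffusivity)
    return rho * cp * diffusivity

# illustrative round-number inputs for undoped YAG near room temperature
# (assumed values for this sketch, not data from the paper)
k_yag = thermal_conductivity(rho=4550.0,          # density, kg / m^3
                             cp=590.0,            # specific heat, J / (kg K)
                             diffusivity=4.1e-6)  # thermal diffusivity, m^2 / s
# k_yag is in W / (m K)
```

With these inputs the product lands near 11 W/(m K), the right order of magnitude for YAG, which is a quick sanity check on the unit bookkeeping.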
Land Building Models: Uncertainty in and Sensitivity to Input Parameters
2013-08-01
ERDC/CHL CHETN-VI-44, August 2013. Land Building Models: Uncertainty in and Sensitivity to Input Parameters, by Ty V. Wamsley. PURPOSE: The purpose of this Coastal and Hydraulics Engineering Technical Note (CHETN) is to document a
The oblique S parameter in higgsless electroweak models
Rosell, Ignasi
2012-01-01
We present a one-loop calculation of the oblique S parameter within Higgsless models of electroweak symmetry breaking. We have used a general effective Lagrangian with at most two derivatives, implementing the chiral symmetry breaking SU(2)_L x SU(2)_R -> SU(2)_{L+R} with Goldstones, gauge bosons and one multiplet of vector and axial-vector resonances. The estimation is based on the short-distance constraints and the dispersive approach proposed by Peskin and Takeuchi.
A statistical model of proton with no parameter
Zhang, Y; Zhang, Yongjun; Yang, Li-Ming
2001-01-01
In this paper, the proton is treated as an ensemble of Fock states. Using the detailed balance principle and the equal probability principle, the unpolarized parton distributions of the proton are obtained through Monte Carlo simulation without any free parameter. A new origin of the light-flavor sea-quark asymmetry is given here, beside known models such as Pauli blocking, meson cloud, chiral field, chiral soliton and instantons.
Model of the Stochastic Vacuum and QCD Parameters
Ferreira, E; Ferreira, Erasmo; Pereira, Flávio
1997-01-01
Accounting for the two independent correlation functions of the QCD vacuum, we improve the simple and consistent description given by the model of the stochastic vacuum of the high-energy pp and pbar-p data, with a new determination of the parameters of non-perturbative QCD. The increase of the hadronic radii with energy accounts for the energy dependence of the observables.
Is flow velocity a significant parameter in flood damage modelling?
Directory of Open Access Journals (Sweden)
H. Kreibich
2009-10-01
Full Text Available Flow velocity is generally presumed to influence flood damage. However, this influence is hardly quantified and virtually no damage models take it into account. Therefore, the influences of flow velocity, water depth and combinations of these two impact parameters on various types of flood damage were investigated in five communities affected by the Elbe catchment flood in Germany in 2002. 2-D hydraulic models with high to medium spatial resolutions were used to calculate the impact parameters at the sites in which damage occurred. A significant influence of flow velocity on structural damage, particularly on roads, could be shown in contrast to a minor influence on monetary losses and business interruption. Forecasts of structural damage to road infrastructure should be based on flow velocity alone. The energy head is suggested as a suitable flood impact parameter for reliable forecasting of structural damage to residential buildings above a critical impact level of 2 m of energy head or water depth. However, general consideration of flow velocity in flood damage modelling, particularly for estimating monetary loss, cannot be recommended.
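The energy head proposed as a flood impact parameter combines water depth and velocity head. A minimal sketch (the 2 m critical level is the value quoted in the abstract; the function names are, of course, invented for illustration):

```python
G = 9.81  # gravitational acceleration, m/s^2

def energy_head(depth_m, velocity_ms):
    # energy head = water depth + velocity head v^2 / (2g)
    return depth_m + velocity_ms ** 2 / (2.0 * G)

def structural_damage_likely(depth_m, velocity_ms, critical_m=2.0):
    # flag sites above the critical impact level suggested in the study
    return energy_head(depth_m, velocity_ms) >= critical_m
```

For example, 1.5 m of water moving at 3 m/s contributes about 0.46 m of velocity head, for roughly 1.96 m of energy head, just under the 2 m criterion, whereas 2 m of standing water alone already exceeds it.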
A robust approach for the determination of Gurson model parameters
Directory of Open Access Journals (Sweden)
R. Sepe
2016-07-01
Full Text Available Among the most promising models introduced in recent years, the one proposed by Gurson and Tvergaard yields very useful results for understanding the physical phenomena involved in the macroscopic mechanism of crack propagation. It links the propagation of a crack to the nucleation, growth and coalescence of micro-voids, thereby connecting the micromechanical characteristics of the component under examination to crack initiation and propagation up to the macroscopic scale. It must be pointed out that, even though the statistical character of some of the many physical parameters involved in this model has been put in evidence, no serious attempt has been made so far to link the corresponding statistics to experimental and macroscopic results such as crack initiation time, material toughness, and the residual strength of the cracked component (R-curve). In this work, such an analysis was carried out in a twofold way: the former concerned the study of the influence exerted by each of the physical parameters on the material toughness, and the latter concerned the use of the Stochastic Design Improvement (SDI) technique to perform a “robust” numerical calibration of the model, evaluating the nominal values of the physical and correction parameters that fit a particular experimental result even in the presence of their “natural” variability.
Real time measurement of plasma macroscopic parameters on RFX-mod using a limited set of sensors
Kudlacek, Ondrej; Zanca, Paolo; Finotti, Claudio; Marchiori, Giuseppe; Cavazzana, Roberto; Marrelli, Lionello
2015-10-01
A method to estimate the plasma boundary and global parameters such as βp+li/2 and the edge safety factor q95 is described. The method is based on poloidal flux extrapolation in the vacuum region between the plasma and the magnetic measurements, and it is efficient and accurate even if a limited set of sensors is used. The discrepancy between the plasma boundary provided by this method and the boundary computed by the Grad-Shafranov solver MAXFEA is lower than 8 mm in all the considered cases. Moreover, the method is robust against the noise level present in the RFX-mod measurements. The difference between the estimated global parameters and the MAXFEA simulation results is lower than 4%. The method was finally implemented in the RFX-mod shape control system, working at 5 kHz cycle frequency, to provide a reliable set of plasma-wall distances (gaps) used as feedback signals. Experimental results obtained in one year of RFX-mod operation are shown.
Keceli, Feyza; Ayanoglu, Ender
2008-01-01
We describe the station-based unfair access problem between uplink and downlink stations in the IEEE 802.11e infrastructure Basic Service Set (BSS) when the default settings of the Enhanced Distributed Channel Access (EDCA) parameters are used. We discuss how the transport layer protocol characteristics alleviate the unfairness problem. We design a simple, practical, and standard-compliant framework to be employed at the Access Point (AP) for fair and efficient access provisioning. A dynamic measurement-based EDCA parameter adaptation block lies at the core of this framework. The proposed framework is unique in the sense that it considers the characteristic differences of Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) flows and the coexistence of stations with varying bandwidth or Quality-of-Service (QoS) requirements. Via simulations, we show that our solution provides short- and long-term fair access for all stations in the uplink and downlink employing TCP and UDP flows with non-...
Comparison of conventional ERG parameters and high-intensity A-wave analysis in a clinical setting.
Marmor, Michael F; Serrato, Alexandra; Tzekov, Radouil
2003-05-01
Computational analysis of high-intensity a-waves yields direct information about the rod and cone receptor potential. However, it is not clear whether such information adds materially to the diagnostic value of the standard ERG in a routine clinical setting. We recorded both conventional ISCEV standard and computational high-intensity ERG parameters from 38 patients referred to a clinical laboratory for ERG testing, and also from eight normal volunteers. The patients were grouped as: (1) macular dysfunction; (2) diffuse cone dysfunction; (3) diffuse rod-cone dysfunction. The results showed moderate variation in both conventional and computational parameters, but in general a similar pattern of normality or abnormality for both among the disease groups. There were only a few outlying subjects for whom one or the other approach seemed more sensitive. We conclude that a-wave analysis is an important tool for clinical research and the study of special patients, but adding it to the standard ERG protocol does not, at our present state of knowledge, add markedly to clinical evaluations in a routine clinical setting.
Kim, Kyung Yong; Lee, Won-Chan
2017-01-01
This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…
Directory of Open Access Journals (Sweden)
Ana Pilipović
2014-03-01
Full Text Available Additive manufacturing (AM) is increasingly applied in development projects, from the initial idea to the finished product. The reasons are multiple, but what should be emphasised is the possibility of relatively rapid manufacturing of products of complicated geometry based on a computer 3D model of the product. There are numerous limitations, primarily in the number of available materials and their properties, which may be quite different from the properties of the material of the finished product. Therefore, it is necessary to know the properties of the product materials. In AM procedures the mechanical properties of materials are affected by the manufacturing procedure and the production parameters. During SLS procedures it is possible to adjust various manufacturing parameters to improve various mechanical and other properties of the products. The paper establishes a new mathematical model to determine the influence of individual manufacturing parameters on a polymer product made by selective laser sintering. The old mathematical model is checked by a statistical method with a central composite design, and it is established that the old model must be expanded with a new parameter, the beam overlay ratio. Verification of the new mathematical model and optimization of the processing parameters are carried out on an SLS machine.
Nonlocal order parameters for the 1D Hubbard model.
Montorsi, Arianna; Roncaglia, Marco
2012-12-07
We characterize the Mott-insulator and Luther-Emery phases of the 1D Hubbard model through correlators that measure the parity of spin and charge strings along the chain. These nonlocal quantities order in the corresponding gapped phases and vanish at the critical point U(c)=0, thus configuring as hidden order parameters. The Mott insulator consists of bound doublon-holon pairs, which in the Luther-Emery phase turn into electron pairs with opposite spins, both unbinding at U(c). The behavior of the parity correlators is captured by an effective free spinless fermion model.
Surrogate based approaches to parameter inference in ocean models
Knio, Omar
2016-01-06
This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
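A compact sketch of the two-stage workflow described above: a handful of "expensive" model runs are used offline to fit a cheap surrogate, and the Bayesian update is then performed against the surrogate only. The stand-in ocean model, the polynomial surrogate, and the grid-based update are all assumptions for illustration (the talk mentions spectral projections, regression, MCMC and adjoint methods; a dense-grid posterior is used here because the parameter is one-dimensional):

```python
import numpy as np

def expensive_model(c):
    # hypothetical stand-in for a full ocean-model run with drag coefficient c
    return np.tanh(3.0 * c) + 0.1 * c**2

# offline stage: a few "expensive" runs, then a cheap polynomial surrogate
# fitted by ordinary least squares (np.polyfit)
train_c = np.linspace(0.0, 2.0, 12)
coeffs = np.polyfit(train_c, expensive_model(train_c), deg=5)
surrogate = np.poly1d(coeffs)

# online stage: Bayesian update of c from an observation, evaluating only
# the surrogate on a dense grid (cheap compared to rerunning the model)
obs, sigma = expensive_model(0.8), 0.05
grid = np.linspace(0.0, 2.0, 2001)
log_like = -0.5 * ((obs - surrogate(grid)) / sigma) ** 2   # flat prior
post = np.exp(log_like - log_like.max())
post /= post.sum()                  # discrete normalization over the grid
c_map = grid[np.argmax(post)]       # posterior mode for the drag coefficient
```

The same surrogate could equally be handed to an MCMC sampler or an adjoint-based optimizer; the key point is that the expensive model is evaluated only during the offline training stage.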
Estimation of parameters in a distributed precipitation-runoff model for Norway
Directory of Open Access Journals (Sweden)
S. Beldring
2003-01-01
Full Text Available A distributed version of the HBV-model using 1 km2 grid cells and daily time step was used to simulate runoff from the entire land surface of Norway for the period 1961-1990. The model was sensitive to changes in small scale properties of the land surface and the climatic input data, through explicit representation of differences between model elements, and by implicit consideration of sub-grid variations in moisture status. A geographically transferable set of model parameters was determined by a multi-criteria calibration strategy, which simultaneously minimised the residuals between model simulated and observed runoff from 141 Norwegian catchments located in areas with different runoff regimes and landscape characteristics. Model discretisation units with identical landscape classification were assigned similar parameter values. Model performance was evaluated by simulating discharge from 43 independent catchments. Finally, a river routing procedure using a kinematic wave approximation to open channel flow was introduced in the model, and discharges from three additional catchments were calculated and compared with observations. The model was used to produce a map of average annual runoff for Norway for the period 1961-1990. Keywords: distributed model, multi-criteria calibration, global parameters, ungauged catchments.
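The multi-criteria idea, one shared parameter set fitted against residuals from many catchments at once, can be sketched with a deliberately tiny stand-in model. The single-reservoir dynamics, the recharge rule, and all numbers below are assumptions for illustration, not the HBV formulation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simple_runoff(params, precip):
    # toy single-linear-reservoir stand-in for the distributed HBV model
    k, beta = params
    storage, flow = 0.0, []
    for p in precip:
        storage += p ** beta        # hypothetical nonlinear recharge rule
        q = k * storage             # outflow proportional to storage
        storage -= q
        flow.append(q)
    return np.array(flow)

# synthetic "observations" from several catchments that share one
# geographically transferable parameter set
true_params = (0.3, 1.1)
precip_series = [rng.gamma(2.0, 2.0, 120) for _ in range(4)]
observed = [simple_runoff(true_params, p) for p in precip_series]

def multi_criteria_loss(params):
    # aggregate the residuals over all catchments simultaneously
    return sum(np.sum((simple_runoff(params, p) - q) ** 2)
               for p, q in zip(precip_series, observed))

fit = minimize(multi_criteria_loss, x0=(0.2, 1.0), method="Nelder-Mead")
k_fit, beta_fit = fit.x
```

Minimising one aggregate criterion over all catchments, rather than calibrating each catchment separately, is what makes the resulting parameter set transferable to ungauged catchments.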
Directory of Open Access Journals (Sweden)
Shengyu Jiang
2016-02-01
Full Text Available Likert-type rating scales, in which a respondent chooses a response from an ordered set of response options, are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated for different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained with the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as by bias and root-mean-square error. Results indicated that for the vast majority of cases studied, a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items, for which 1,000 examinees were necessary. Increasing the sample size beyond N = 1,000 did not increase the accuracy of the MGRM parameter estimates.
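The graded response model at the heart of such simulation studies is simple to state: each ordered threshold gets a 2PL curve, and category probabilities are differences of adjacent cumulative curves. A unidimensional sketch (the MGRM adds a multidimensional latent trait, omitted here; the parameter values are arbitrary examples):

```python
import math

def grm_category_probs(theta, a, thresholds):
    # Samejima's graded response model: P(X >= k) is a 2PL curve at each
    # ordered threshold b_k; category probabilities are differences of
    # adjacent cumulative probabilities.
    def p_star(b):
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))
    cum = [1.0] + [p_star(b) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# a 4-category item: discrimination a = 1.2, thresholds at -1, 0, 1
probs = grm_category_probs(theta=0.5, a=1.2, thresholds=[-1.0, 0.0, 1.0])
```

Simulation studies like the one above generate item responses by drawing each examinee's category from these probabilities, then check how well the calibration software recovers a and the thresholds.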