WorldWideScience

Sample records for model parameter posterior

  1. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  2. Interrelation of posterior cranial fossa parameters and size characteristics of human skull in different craniotypes

    Directory of Open Access Journals (Sweden)

    Bukreeva E.G.

    2011-03-01

    Full Text Available The aim of this work was to study the correlation between the linear dimensions of the posterior cranial fossa and the linear and angular parameters of the skull, depending on the magnitude of the basilar angle. The material comprised one hundred adult human skulls, divided into three craniotypes. Craniotopometric measurements of these parameters were taken, average values were calculated, and a correlation model was constructed. The results showed that the closest multidirectional correlations between the studied parameters were observed in the platybasilar craniotype; in the flexibasilar craniotype a strong positive dependence was present for the width of the posterior fossa; and in the mediobasilar craniotype the correlations were predominantly moderate and mild. The dimensions of the cerebellar fossae were subject to greater variability.

  3. A Note on the Existence of the Posteriors for One-way Random Effect Probit Models.

    Science.gov (United States)

    Lin, Xiaoyan; Sun, Dongchu

    2010-01-01

    The existence of the posterior distribution for one-way random effect probit models has been investigated when the uniform prior is applied to the overall mean and a class of noninformative priors are applied to the variance parameter. The sufficient conditions to ensure the propriety of the posterior are given for the cases with replicates at some factor levels. It is shown that the posterior distribution is never proper if there is only one observation at each factor level. For this case, however, a class of proper priors for the variance parameter can provide the necessary and sufficient conditions for the propriety of the posterior.

  4. Lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. The following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)

  5. Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction

    Science.gov (United States)

    Cui, Tiangang; Marzouk, Youssef; Willcox, Karen

    2016-06-01

    Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
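
    As a concrete illustration of the parameter-space reduction idea, the sketch below builds a likelihood-informed basis for a linear-Gaussian inverse problem from the eigenvectors of the prior-preconditioned Gauss-Newton Hessian. The names (G, noise_cov, prior_cov, r) and the dense linear algebra are illustrative assumptions; the paper constructs its subspaces from a limited number of forward and adjoint runs rather than from explicit matrices.

    ```python
    # Minimal sketch (not the authors' code): likelihood-informed subspace (LIS)
    # construction for a linear-Gaussian inverse problem y = G m + noise.
    import numpy as np

    def likelihood_informed_subspace(G, noise_cov, prior_cov, r):
        """Return a rank-r parameter basis for the directions where the likelihood
        dominates the prior (leading eigenvectors of the prior-preconditioned
        Gauss-Newton Hessian)."""
        L = np.linalg.cholesky(prior_cov)            # prior_cov = L @ L.T
        H = G.T @ np.linalg.solve(noise_cov, G)      # Gauss-Newton Hessian of the log-likelihood
        H_tilde = L.T @ H @ L                        # prior-preconditioned Hessian
        eigval, eigvec = np.linalg.eigh(H_tilde)
        idx = np.argsort(eigval)[::-1][:r]           # r most data-informed directions
        return L @ eigvec[:, idx]                    # basis in the original parameter space

    # Usage idea: sample the projected posterior on span(basis) with MCMC and draw
    # the complementary directions from the prior, as in the product-form
    # approximation described above.
    ```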

  6. Bayesian analysis for OPC modeling with film stack properties and posterior predictive checking

    Science.gov (United States)

    Burbine, Andrew; Fenger, Germain; Sturtevant, John; Fryer, David

    2016-10-01

    The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and analysis techniques in the data science community have seen great strides in the past two decades which make better use of available information. This paper expands upon Bayesian analysis methods for parameter selection in lithographic models by increasing the parameter set and employing posterior predictive checks. Work continues with a Markov chain Monte Carlo (MCMC) search algorithm to generate posterior distributions of parameters. Models now include wafer film stack refractive indices, n and k, as parameters, recognizing the uncertainties associated with these values. Posterior predictive checks are employed as a method to validate parameter vectors discovered by the analysis, akin to cross validation.

  7. Posterior Circulation Stroke: Animal Models and Mechanism of Disease

    Directory of Open Access Journals (Sweden)

    Tim Lekic

    2012-01-01

    Full Text Available Posterior circulation stroke refers to the vascular occlusion or bleeding, arising from the vertebrobasilar vasculature of the brain. Clinical studies show that individuals who experience posterior circulation stroke will develop significant brain injury, neurologic dysfunction, or death. Yet the therapeutic needs of this patient subpopulation remain largely unknown. Thus understanding the causative factors and the pathogenesis of brain damage is important, if posterior circulation stroke is to be prevented or treated. Appropriate animal models are necessary to achieve this understanding. This paper critically integrates the neurovascular and pathophysiological features gleaned from posterior circulation stroke animal models into clinical correlations.

  8. DREAM(D): an adaptive Markov chain Monte Carlo simulation algorithm to solve discrete, noncontinuous, posterior parameter estimation problems

    Directory of Open Access Journals (Sweden)

    J. A. Vrugt

    2011-04-01

    Full Text Available Formal and informal Bayesian approaches are increasingly being used to treat forcing, model structural, parameter and calibration data uncertainty, and summarize hydrologic prediction uncertainty. This requires posterior sampling methods that approximate the (evolving) posterior distribution. We recently introduced the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, an adaptive Markov Chain Monte Carlo (MCMC) method that is especially designed to solve complex, high-dimensional and multimodal posterior probability density functions. The method runs multiple chains in parallel, and maintains detailed balance and ergodicity. Here, I present the latest algorithmic developments, and introduce a discrete sampling variant of DREAM that samples the parameter space at fixed points. The development of this new code, DREAM(D), has been inspired by the existing class of integer optimization problems, and the emerging class of experimental design problems. Such non-continuous parameter estimation problems are of considerable theoretical and practical interest. The theory developed herein is applicable to DREAM(ZS) (Vrugt et al., 2011) and MT-DREAM(ZS) (Laloy and Vrugt, 2011) as well. Two case studies involving a sudoku puzzle and a rainfall-runoff model calibration problem are used to illustrate DREAM(D).
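
    The following much-simplified sketch is not DREAM(D) itself (it has no multiple chains or differential-evolution proposals), but it illustrates what sampling a posterior "at fixed points" means: a random-walk Metropolis sampler restricted to integer-valued parameters. The log_post callable and the ±1 proposal step are assumptions for illustration.

    ```python
    # Hedged sketch: discrete random-walk Metropolis on an integer parameter grid.
    import numpy as np

    def discrete_metropolis(log_post, x0, lower, upper, n_iter=10_000, seed=None):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=int)
        lp = log_post(x)
        chain = np.empty((n_iter, x.size), dtype=int)
        for i in range(n_iter):
            prop = x + rng.integers(-1, 2, size=x.size)    # symmetric +/-1 move per coordinate
            if np.all(prop >= lower) and np.all(prop <= upper):
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
                    x, lp = prop, lp_prop
            chain[i] = x                                   # out-of-bounds proposals are rejected
        return chain
    ```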

  9. Lumped-parameter models

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Liingaard, Morten

    A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. The lumped-parameter model development has been reported by (Wolf 1991b; Wolf 1991a; Wolf and Paronesso 1991; Wolf and Paronesso 19...

  10. Response model parameter linking

    NARCIS (Netherlands)

    Barrett, Michelle Derbenwick

    2015-01-01

    With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of equating observed scores on different test forms. This thesis argues, however, that the use of item response models does not require

  11. Distributed Parameter Modelling Applications

    DEFF Research Database (Denmark)

    2011-01-01

    Here the issue of distributed parameter models is addressed. Spatial variations as well as time are considered important. Several applications for both steady state and dynamic applications are given. These relate to the processing of oil shale, the granulation of industrial fertilizers and the … sands processing. The fertilizer granulation model considers the dynamics of MAP-DAP (mono and diammonium phosphates) production within an industrial granulator, that involves complex crystallisation, chemical reaction and particle growth, captured through population balances. A final example considers…

  12. ADAPTIVE ANNEALED IMPORTANCE SAMPLING FOR MULTIMODAL POSTERIOR EXPLORATION AND MODEL SELECTION WITH APPLICATION TO EXTRASOLAR PLANET DETECTION

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Bin, E-mail: bins@ieee.org [School of Computer Science and Technology, Nanjing University of Posts and Telecommunications, Nanjing 210023 (China)

    2014-07-01

    We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem to be how to find a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated based on the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute force methods just preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
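
    A minimal sketch of the effective-sample-size diagnostic mentioned above, which measures how well an importance-sampling proposal q resembles the posterior p. The log_p and log_q callables and the draws container are assumed user-supplied; this is not the paper's adaptive mixture code.

    ```python
    # Effective sample size (ESS) of self-normalized importance-sampling weights.
    import numpy as np

    def importance_ess(log_p, log_q, draws):
        log_w = np.array([log_p(x) - log_q(x) for x in draws])  # unnormalized log weights
        log_w -= log_w.max()                                    # stabilize before exponentiating
        w = np.exp(log_w)
        w /= w.sum()                                            # self-normalized weights
        return 1.0 / np.sum(w ** 2)                             # ESS lies between 1 and len(draws)
    ```

    A larger ESS indicates that the proposal resembles the posterior more closely, which is the criterion the adaptive delete/merge/add procedure above seeks to maximize.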

  13. Assessing Fit of Alternative Unidimensional Polytomous IRT Models Using Posterior Predictive Model Checking.

    Science.gov (United States)

    Li, Tongyun; Xie, Chao; Jiao, Hong

    2016-05-30

    This article explored the application of the posterior predictive model checking (PPMC) method in assessing fit for unidimensional polytomous item response theory (IRT) models, specifically the divide-by-total models (e.g., the generalized partial credit model). Previous research has primarily focused on using PPMC in model checking for unidimensional and multidimensional IRT models for dichotomous data, and has paid little attention to polytomous models. A Monte Carlo simulation was conducted to investigate the performance of PPMC in detecting different sources of misfit for the partial credit model family. Results showed that the PPMC method, in combination with appropriate discrepancy measures, had adequate power in detecting different sources of misfit for the partial credit model family. Global odds ratio and item total correlation exhibited specific patterns in detecting the absence of the slope parameter, whereas Yen's Q1 was found to be promising in the detection of misfit caused by the constant category intersection parameter constraint across items.
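
    For readers unfamiliar with PPMC, the hedged sketch below shows the generic recipe: simulate a replicated dataset from each posterior draw, evaluate a discrepancy measure on replicated and observed data, and report a posterior predictive p-value. The simulate_data and discrepancy callables are placeholders, not the specific discrepancy measures studied in the article.

    ```python
    # Generic posterior predictive model check (PPMC), minimal sketch.
    import numpy as np

    def ppmc_pvalue(observed, posterior_draws, simulate_data, discrepancy, seed=None):
        """Posterior predictive p-value: P(D(y_rep) >= D(y_obs)) over posterior draws."""
        rng = np.random.default_rng(seed)
        exceed = 0
        for theta in posterior_draws:                  # one replicated dataset per draw
            y_rep = simulate_data(theta, rng)
            exceed += discrepancy(y_rep, theta) >= discrepancy(observed, theta)
        return exceed / len(posterior_draws)           # values near 0 or 1 flag misfit
    ```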

  14. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  15. An Enhanced Informed Watermarking Scheme Using the Posterior Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Chuntao Wang

    2014-01-01

    Full Text Available Designing a practical watermarking scheme with high robustness, feasible imperceptibility, and large capacity remains one of the most important research topics in robust watermarking. This paper presents a posterior hidden Markov model (HMM)-based informed image watermarking scheme, which well enhances the practicability of the prior-HMM-based informed watermarking with favorable robustness, imperceptibility, and capacity. To make the encoder and decoder use the (nearly) identical posterior HMM, each cover image at the encoder and each received image at the decoder are attacked with JPEG compression at an equivalently small quality factor (QF). The attacked images are then employed to estimate HMM parameter sets for both the encoder and decoder, respectively. Numerical simulations show that a small QF of 5 is an optimum setting for practical use. Based on this posterior HMM, we develop an enhanced posterior-HMM-based informed watermarking scheme. Extensive experimental simulations show that the proposed scheme is comparable to its prior counterpart in which the HMM is estimated with the original image, but it avoids the transmission of the prior HMM from the encoder to the decoder. This thus well enhances the practical application of HMM-based informed watermarking systems. Also, it is demonstrated that the proposed scheme has the robustness comparable to the state-of-the-art with significantly reduced computation time.

  16. An enhanced informed watermarking scheme using the posterior hidden Markov model.

    Science.gov (United States)

    Wang, Chuntao

    2014-01-01

    Designing a practical watermarking scheme with high robustness, feasible imperceptibility, and large capacity remains one of the most important research topics in robust watermarking. This paper presents a posterior hidden Markov model (HMM-) based informed image watermarking scheme, which well enhances the practicability of the prior-HMM-based informed watermarking with favorable robustness, imperceptibility, and capacity. To make the encoder and decoder use the (nearly) identical posterior HMM, each cover image at the encoder and each received image at the decoder are attacked with JPEG compression at an equivalently small quality factor (QF). The attacked images are then employed to estimate HMM parameter sets for both the encoder and decoder, respectively. Numerical simulations show that a small QF of 5 is an optimum setting for practical use. Based on this posterior HMM, we develop an enhanced posterior-HMM-based informed watermarking scheme. Extensive experimental simulations show that the proposed scheme is comparable to its prior counterpart in which the HMM is estimated with the original image, but it avoids the transmission of the prior HMM from the encoder to the decoder. This thus well enhances the practical application of HMM-based informed watermarking systems. Also, it is demonstrated that the proposed scheme has the robustness comparable to the state-of-the-art with significantly reduced computation time.

  17. Adjoint based data assimilation for phase field model using second order information of a posterior distribution

    Science.gov (United States)

    Ito, Shin-Ichi; Nagao, Hiromichi; Yamanaka, Akinori; Tsukada, Yuhki; Koyama, Toshiyuki; Inoue, Junya

    The phase field (PF) method, which phenomenologically describes the dynamics of microstructure evolution during solidification and phase transformation, has progressed in the fields of hydromechanics and materials engineering. How to determine, based on observation data, the initial state and model parameters involved in a PF model is an important issue, since previous estimation methods require too much computational cost. We propose data assimilation (DA), which enables us to estimate the parameters and states by integrating the PF model and observation data on the basis of Bayesian statistics. The adjoint method implemented in DA not only finds an optimum solution by maximizing the posterior distribution but also evaluates the uncertainty in the estimates by utilizing the second-order information of the posterior distribution. We carried out an estimation test using synthetic data generated by the two-dimensional Kobayashi PF model. The proposed method is confirmed to reproduce the true initial state and model parameters assumed in advance, and simultaneously estimates their uncertainties due to the quality and quantity of the data. This result indicates that the proposed method is capable of suggesting the experimental design needed to achieve the required accuracy.
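
    The second-order idea can be illustrated with a Laplace-type approximation: the posterior covariance is approximated by the inverse Hessian of the negative log-posterior at its maximum. The sketch below uses a finite-difference Hessian for clarity, whereas the paper evaluates this second-order information through adjoint computations; neg_log_post and eps are illustrative assumptions.

    ```python
    # Hedged sketch: posterior covariance from the Hessian at the MAP estimate.
    import numpy as np

    def laplace_covariance(neg_log_post, theta_map, eps=1e-5):
        n = theta_map.size
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                e_i, e_j = np.eye(n)[i] * eps, np.eye(n)[j] * eps
                # Central-difference approximation of the mixed second derivative.
                H[i, j] = (neg_log_post(theta_map + e_i + e_j)
                           - neg_log_post(theta_map + e_i - e_j)
                           - neg_log_post(theta_map - e_i + e_j)
                           + neg_log_post(theta_map - e_i - e_j)) / (4 * eps ** 2)
        return np.linalg.inv(H)   # posterior covariance ~ H^{-1}; its diagonal gives variances
    ```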

  18. The femoro-sacral posterior angle: an anatomical sagittal pelvic parameter usable with dome-shaped sacrum.

    Science.gov (United States)

    Legaye, Jean

    2007-02-01

    The sagittal pelvic morphology modulates the individual alignment of the spine. Anatomical angular parameters were described as follows: the "Pelvic Incidence" (PI) and Jackson's angle "Pelvic Lordosis" (PR-S1). Significant chains of relationships were expressed connecting these angles with pelvic and spinal positional parameters. This allows an individual assessment of the harmony of the sagittal spinal balance. But in cases of spondylolysis with high-grade listhesis, the upper plate of the sacrum shows a dome-shaped deformity. The previous anatomical parameters are therefore imprecise: because the anterior part of the sacral plate cannot be located reliably, an exact assessment of these angles becomes impossible. We therefore propose a new angular parameter named the "Femoro-Sacral Posterior Angle" (FSPA): the angle between the posterior wall of the first sacral vertebra, which is always well defined, and the line connecting the posterior part of the sacral plate to the femoral axis. The validation of this parameter was performed and compared with the classical published parameters. It showed good inter-observer reliability, even with a dome-shaped sacral plate. In spite of a lower correlation with the positional parameters than that observed with PI or PR-S1, the FSPA appeared to be reliable and precise for an exact evaluation of the sagittal spino-pelvic balance in cases of spondylolisthesis with a dome-shaped sacral endplate.

  19. Photovoltaic module parameters acquisition model

    Science.gov (United States)

    Cibira, Gabriel; Koščová, Marcela

    2014-09-01

    This paper presents basic procedures for photovoltaic (PV) module parameters acquisition using MATLAB and Simulink modelling. In the first step, a theoretical MATLAB and Simulink model is set up to calculate I-V and P-V characteristics for a PV module based on an equivalent electrical circuit. Then, a limited I-V data string is obtained from the examined PV module using standard measurement equipment at standard irradiation and temperature conditions and stored in a MATLAB data matrix as a reference model. Next, the theoretical model is optimized to match the reference model and to learn its basic parameter relations over the sparse data matrix. Finally, PV module parameters can be acquired for different realistic irradiation and temperature conditions as well as series resistance values. Besides output power characteristics and efficiency calculations for a PV module or system, the proposed model is validated by computing its statistical deviation from the reference model.
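
    The equivalent electrical circuit referred to above is commonly the single-diode model. The sketch below (Python rather than MATLAB/Simulink, with placeholder parameter values that are not those of the examined module) solves its implicit I-V equation and sweeps a voltage range to locate the maximum power point.

    ```python
    # Hedged sketch of the single-diode PV equation; all parameter values are illustrative.
    import numpy as np
    from scipy.optimize import brentq

    def iv_current(v, i_ph=8.0, i_0=1e-9, r_s=0.3, r_sh=200.0, n=1.3,
                   n_cells=60, t_cell=298.15):
        """Solve I = I_ph - I_0*(exp((V+I*Rs)/(n*Ns*Vt)) - 1) - (V+I*Rs)/Rsh for I."""
        vt = 1.380649e-23 * t_cell / 1.602176634e-19          # thermal voltage kT/q
        def f(i):
            return (i_ph - i_0 * (np.exp((v + i * r_s) / (n * n_cells * vt)) - 1.0)
                    - (v + i * r_s) / r_sh - i)
        return brentq(f, -1.0, i_ph + 1.0)                    # bracket the implicit root

    # Example: sweep voltage to build the I-V curve and find the maximum power point.
    volts = np.linspace(0.0, 45.0, 300)
    amps = np.array([iv_current(v) for v in volts])
    print("P_max ~", round(float(np.max(volts * amps)), 1), "W")
    ```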

  20. A dirichlet process covarion mixture model and its assessments using posterior predictive discrepancy tests.

    Science.gov (United States)

    Zhou, Yan; Brinkmann, Henner; Rodrigue, Nicolas; Lartillot, Nicolas; Philippe, Hervé

    2010-02-01

    Heterotachy, the variation of substitution rate at a site across time, is a prevalent phenomenon in nucleotide and amino acid alignments, which may mislead probabilistic-based phylogenetic inferences. The covarion model is a special case of heterotachy, in which sites change between the "ON" state (allowing substitutions according to any particular model of sequence evolution) and the "OFF" state (prohibiting substitutions). In current implementations, the switch rates between ON and OFF states are homogeneous across sites, a hypothesis that has never been tested. In this study, we developed an infinite mixture model, called the covarion mixture (CM) model, which allows the covarion parameters to vary across sites, controlled by a Dirichlet process prior. Moreover, we combine the CM model with other approaches. We use a second independent Dirichlet process that models the heterogeneities of amino acid equilibrium frequencies across sites, known as the CAT model, and general rate-across-site heterogeneity is modeled by a gamma distribution. The application of the CM model to several large alignments demonstrates that the covarion parameters are significantly heterogeneous across sites. We describe posterior predictive discrepancy tests and use these to demonstrate the importance of these different elements of the models.

  1. Mode choice model parameters estimation

    OpenAIRE

    Strnad, Irena

    2010-01-01

    The present work focuses on parameter estimation of two mode choice models: the multinomial logit and the EVA 2 model, where four different modes and five different trip purposes are taken into account. A mode choice model describes the behavioral aspect of mode choice making and enables its application in a traffic model. It includes the trip factors affecting mode choice for each mode and their relative importance to the choice made. When trip factor values are known, it…

  2. Experimental Model of Proximal Junctional Fracture after Multilevel Posterior Spinal Instrumentation

    Science.gov (United States)

    Levasseur, Annie; Parent, Stefan; Petit, Yvan

    2016-01-01

    There is a high risk of proximal junctional fractures (PJF) with multilevel spinal instrumentation, especially in the osteoporotic spine. This problem is associated with significant morbidity and possibly the need for reoperation. Various techniques have been proposed in an attempt to decrease the risk of PJF, but there is no experimental model described for in vitro production of PJF after multilevel instrumentation. The objective of this study is to develop an experimental model of PJF after multilevel posterior instrumentation. Initially, four porcine specimens, each including 4 vertebrae and instrumented at the 3 caudal vertebrae using a pedicle screw construct, were subjected to different loading conditions. Loading conditions involving cyclic loading along the axis of the center vertebral body line, with constrained flexion between 0° and 15° proximally and the specimen fully constrained distally, resulted in a fracture pattern most representative of a PJF seen clinically in humans, so it was decided to proceed with human cadaveric testing under similar loading conditions. Clinically relevant PJF were produced in all 3 human specimens. The experimental model described in this study will allow the evaluation of different parameters influencing the incidence and prevention of PJF after multilevel posterior spinal instrumentation. PMID:27610381

  3. Oracle posterior rates in the White Noise Model

    NARCIS (Netherlands)

    Babenko, A.

    2010-01-01

    All the results about posterior rates obtained until now are related to the optimal (minimax) rates for the estimation problem over the corresponding nonparametric smoothness classes, i.e. of a global nature. In the meantime, a new local approach to optimality has been developed within the estimation…

  4. Bayesian parameter inference and model selection by population annealing in systems biology.

    Science.gov (United States)

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific value of a parameter with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that population annealing can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of representative parameter values, we proposed to run the simulations with the parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm to generate a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and conduct model selection depending on the Bayes factor.

  5. Parameter estimation of hydrologic models using data assimilation

    Science.gov (United States)

    Kaheil, Y. H.

    2005-12-01

    The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper presents a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined. The SVM model has three parameters. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.

  6. Effect of Correlations Between Model Parameters and Nuisance Parameters When Model Parameters are Fit to Data

    CERN Document Server

    Roe, Byron

    2013-01-01

    The effect of correlations between model parameters and nuisance parameters is discussed, in the context of fitting model parameters to data. Modifications to the usual $\\chi^2$ method are required. Fake data studies, as used at present, will not be optimum. Problems will occur for applications of the Maltoni-Schwetz \\cite{ms} theorem. Neutrino oscillations are used as examples, but the problems discussed here are general ones, which are often not addressed.

  7. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
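
    A hedged sketch of the full statistical likelihood model described above: the simulation errors are treated as a Gaussian AR(1) process with autocorrelation rho and innovation standard deviation sigma (the parameter names are illustrative, not the paper's notation). Setting rho to zero recovers the simple likelihood without the auto-regressive part.

    ```python
    # Gaussian AR(1) log-likelihood of simulation residuals, minimal sketch.
    import numpy as np

    def ar1_log_likelihood(residuals, rho, sigma):
        """Residuals e_t = rho * e_{t-1} + innovation; requires |rho| < 1."""
        e = np.asarray(residuals, dtype=float)
        innovations = e[1:] - rho * e[:-1]
        n = innovations.size
        var0 = sigma ** 2 / (1.0 - rho ** 2)              # stationary variance of e_0
        ll = -0.5 * (np.log(2 * np.pi * var0) + e[0] ** 2 / var0)
        ll += -0.5 * (n * np.log(2 * np.pi * sigma ** 2)
                      + np.sum(innovations ** 2) / sigma ** 2)
        return ll
    ```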

  8. The Asymptotic Posterior Normality of the Latent Trait for Polytomous IRT Models.

    Science.gov (United States)

    Chang, Hua-Hua

    1996-01-01

    H. H. Chang and W. F. Stout (1993) presented a derivation of the asymptotic posterior normality of the latent trait given examinee responses under nonrestrictive nonparametric assumptions for dichotomous item response theory (IRT) models. This paper presents an extension of their results to polytomous IRT models and defines a global information…

  9. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    Directory of Open Access Journals (Sweden)

    Y. Sun

    2013-04-01

    Full Text Available This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4. Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, the deterministic least-square fitting and stochastic Markov-Chain Monte-Carlo (MCMC) Bayesian inversion approaches, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the least-square fitting provides little improvements in the model simulations but the sampling-based stochastic inversion approaches are consistent – as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty

  10. A Forward Incremental Prestressing Method with Application to Inverse Parameter Estimations and Eye-Specific Simulations of Posterior Scleral Shells

    Science.gov (United States)

    Downs, J. Crawford

    2012-01-01

    Numerical simulations or inverse numerical analyses of individual eyes or eye segments are often based on an eye-specific geometry obtained from in vivo medical images such as CT scans or from in vitro 3D digitizer scans. These eye-specific geometries are usually measured while the eye is subjected to internal pressure. Due to the nonlinear stiffening of the collagen fibril network in the eye, numerical incorporation of the pre-existing stress/strain state may be essential for realistic eye-specific computational simulations. Existing prestressing methods either compute accurate predictions of the prestressed state or guarantee a unique solution. In this contribution, a forward incremental pre-stressing method is presented that unifies the advantages of the existing approaches by providing accurate and unique predictions of the pre-existing stress/strain state at the true measured geometry. The impact of prestressing is investigated on (i) the inverse constitutive parameter identification of a synthetic sclera inflation test and (ii) an eye-specific simulation that estimates the realistic mechanical response of a preloaded posterior monkey scleral shell. Evaluation of the pre-existing stress/strain state in the inverse analysis had a significant impact on the reproducibility of the constitutive parameters but may be estimated based on an approximative approach. The eye-specific simulation of one monkey eye shows that prestressing is required for accurate displacement and stress/strain predictions. The numerical results revealed an increasing error in displacement, strain and stress predictions with increasing pre-existing pressure load when the pre-stress/strain state is disregarded. Disregarding the prestress may lead to a significant underestimation of the strain/stress environment in the sclera and overestimation in the lamina cribrosa. PMID:22224843

  11. DREAM(D): an adaptive Markov Chain Monte Carlo simulation algorithm to solve discrete, noncontinuous, and combinatorial posterior parameter estimation problems

    Directory of Open Access Journals (Sweden)

    C. J. F. Ter Braak

    2011-12-01

    Full Text Available Formal and informal Bayesian approaches have found widespread implementation and use in environmental modeling to summarize parameter and predictive uncertainty. Successful implementation of these methods relies heavily on the availability of efficient sampling methods that approximate, as closely and consistently as possible, the (evolving) posterior target distribution. Much of this work has focused on continuous variables that can take on any value within their prior defined ranges. Here, we introduce theory and concepts of a discrete sampling method that resolves the parameter space at fixed points. This new code, entitled DREAM(D), uses the recently developed DREAM algorithm (Vrugt et al., 2008, 2009a, b) as its main building block but implements two novel proposal distributions to help solve discrete and combinatorial optimization problems. This novel MCMC sampler maintains detailed balance and ergodicity, and is especially designed to resolve the emerging class of optimal experimental design problems. Three different case studies involving a Sudoku puzzle, soil water retention curve, and rainfall-runoff model calibration problem are used to benchmark the performance of DREAM(D). The theory and concepts developed herein can be easily integrated into other (adaptive) MCMC algorithms.

  12. Model and parameter uncertainty in IDF relationships under climate change

    Science.gov (United States)

    Chandra, Rupa; Saha, Ujjwal; Mujumdar, P. P.

    2015-05-01

    Quantifying the distributional behavior of extreme events is crucial in hydrologic designs. Intensity Duration Frequency (IDF) relationships are used extensively in engineering, especially in urban hydrology, to obtain the return level of an extreme rainfall event for a specified return period and duration. Major sources of uncertainty in IDF relationships are the insufficient quantity and quality of data, which leads to parameter uncertainty in the distribution fitted to the data, and the uncertainty that results from using multiple GCMs. It is important to study these uncertainties and propagate them to the future for an accurate assessment of future return levels. The objective of this study is to quantify the uncertainties arising from the parameters of the distribution fitted to the data and from the multiple GCM models using a Bayesian approach. The posterior distribution of parameters is obtained from Bayes' rule and the parameters are transformed to obtain return levels for a specified return period. The Markov Chain Monte Carlo (MCMC) method using the Metropolis-Hastings algorithm is used to obtain the posterior distribution of parameters. Twenty-six CMIP5 GCMs along with four RCP scenarios are considered for studying the effects of climate change and to obtain projected IDF relationships for the case study of Bangalore city in India. GCM uncertainty due to the use of multiple GCMs is treated using the Reliability Ensemble Averaging (REA) technique along with the parameter uncertainty. Scale invariance theory is employed for obtaining short-duration return levels from daily data. It is observed that the uncertainty in short-duration rainfall return levels is high when compared to the longer durations. Further, it is observed that parameter uncertainty is large compared to the model uncertainty.
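
    Once posterior draws of the distribution parameters are available, each draw can be transformed into a return level. The sketch below does this for a GEV distribution; mu, sigma, xi and the GEV choice itself are illustrative assumptions, not necessarily the distribution used in the study.

    ```python
    # Return level of a GEV distribution for return period T (years), minimal sketch.
    import numpy as np

    def gev_return_level(mu, sigma, xi, T):
        """Return level z_T exceeded with probability 1/T per year."""
        y = -np.log(1.0 - 1.0 / T)              # reduced Gumbel variate
        if np.isclose(xi, 0.0):
            return mu - sigma * np.log(y)       # Gumbel limit as xi -> 0
        return mu + sigma / xi * (y ** (-xi) - 1.0)

    # Applying this to every MCMC draw of (mu, sigma, xi) yields a posterior
    # distribution of the T-year return level, whose spread quantifies the
    # parameter uncertainty in the IDF estimate.
    ```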

  13. Optimal Parameter and Uncertainty Estimation of a Land Surface Model: Sensitivity to Parameter Ranges and Model Complexities

    Institute of Scientific and Technical Information of China (English)

    Youlong XIA; Zong-Liang YANG; Paul L. STOFFA; Mrinal K. SEN

    2005-01-01

    Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges as well as model complexities on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by employing Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted and 50 000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. Use of parameter ranges and model complexities have significant impacts on frequency distribution of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.

  14. PARAMETER ESTIMATION OF ENGINEERING TURBULENCE MODEL

    Institute of Scientific and Technical Information of China (English)

    钱炜祺; 蔡金狮

    2001-01-01

    A parameter estimation algorithm is introduced and used to determine the parameters in the standard k-ε two equation turbulence model (SKE). It can be found from the estimation results that although the parameter estimation method is an effective method to determine model parameters, it is difficult to obtain a set of parameters for SKE to suit all kinds of separated flow and a modification of the turbulence model structure should be considered. So, a new nonlinear k-ε two-equation model (NNKE) is put forward in this paper and the corresponding parameter estimation technique is applied to determine the model parameters. By implementing the NNKE to solve some engineering turbulent flows, it is shown that NNKE is more accurate and versatile than SKE. Thus, the success of NNKE implies that the parameter estimation technique may have a bright prospect in engineering turbulence model research.

  15. A Bayesian posterior predictive framework for weighting ensemble regional climate models

    Science.gov (United States)

    Fan, Yanan; Olson, Roman; Evans, Jason P.

    2017-06-01

    We present a novel Bayesian statistical approach to computing model weights in climate change projection ensembles in order to create probabilistic projections. The weight of each climate model is obtained by weighting the current day observed data under the posterior distribution admitted under competing climate models. We use a linear model to describe the model output and observations. The approach accounts for uncertainty in model bias, trend and internal variability, including error in the observations used. Our framework is general, requires very little problem-specific input, and works well with default priors. We carry out cross-validation checks that confirm that the method produces the correct coverage.
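
    A minimal sketch of the weighting idea: each model's weight is proportional to how well the current-day observations are supported under that model, expressed here through log marginal likelihoods that are assumed to have been computed elsewhere (the paper's linear model for bias, trend, and internal variability is not reproduced).

    ```python
    # Normalized model weights from log marginal likelihoods, minimal sketch.
    import numpy as np

    def model_weights(log_evidences):
        log_w = np.asarray(log_evidences, dtype=float)
        log_w -= log_w.max()                  # avoid overflow when exponentiating
        w = np.exp(log_w)
        return w / w.sum()                    # weights for the probabilistic projection

    # Example with three hypothetical models:
    print(model_weights([-120.3, -118.9, -125.4]))
    ```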

  16. Using an ensemble smoother to evaluate parameter uncertainty of an integrated hydrological model of Yanqi basin

    Science.gov (United States)

    Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang

    2015-10-01

    Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square-root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information. The correlation coefficients among certain parameters increase in each iteration, although they generally
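
    For orientation, one ES-MDA analysis step can be sketched as below: each ensemble member is shifted toward perturbed observations through the ensemble cross-covariance, with an inflation factor alpha whose reciprocals over all assimilation steps sum to one. Array names and shapes are illustrative; the study applies this update through the MIKE SHE/MIKE 11 model rather than this toy linear-algebra form.

    ```python
    # One ES-MDA update step (after Emerick and Reynolds, 2012), minimal sketch.
    import numpy as np

    def esmda_update(params, predictions, obs, obs_cov, alpha, seed=None):
        """params: (n_ens, n_par); predictions: (n_ens, n_obs); obs: (n_obs,)."""
        rng = np.random.default_rng(seed)
        n_ens = params.shape[0]
        dp = params - params.mean(axis=0)
        dd = predictions - predictions.mean(axis=0)
        c_pd = dp.T @ dd / (n_ens - 1)                     # parameter-data covariance
        c_dd = dd.T @ dd / (n_ens - 1)                     # data-data covariance
        kalman = c_pd @ np.linalg.inv(c_dd + alpha * obs_cov)
        # Perturb the observations with inflated noise, one realization per member.
        obs_pert = obs + rng.multivariate_normal(
            np.zeros(obs.size), alpha * obs_cov, size=n_ens)
        return params + (obs_pert - predictions) @ kalman.T
    ```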

  17. Bayesian model selection for incomplete data using the posterior predictive distribution.

    Science.gov (United States)

    Daniels, Michael J; Chatterjee, Arkendu S; Wang, Chenguang

    2012-12-01

    We explore the use of a posterior predictive loss criterion for model selection for incomplete longitudinal data. We begin by identifying a property that most model selection criteria for incomplete data should consider. We then show that a straightforward extension of the Gelfand and Ghosh (1998, Biometrika, 85, 1-11) criterion to incomplete data has two problems. First, it introduces an extra term (in addition to the goodness of fit and penalty terms) that compromises the criterion. Second, it does not satisfy the aforementioned property. We propose an alternative and explore its properties via simulations and on a real dataset and compare it to the deviance information criterion (DIC). In general, the DIC outperforms the posterior predictive criterion, but the latter criterion appears to work well overall and is very easy to compute unlike the DIC in certain classes of models for missing data.

  18. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four

  19. Posterior gut development in Drosophila: a model system for identifying genes controlling epithelial morphogenesis

    Institute of Scientific and Technical Information of China (English)

    Judith A. LENGYEL; Sue Jun LIU

    1998-01-01

    The posterior gut of the Drosophila embryo, consisting of hindgut and Malpighian tubules, provides a simple, well-defined system where it is possible to use a genetic approach to define components essential for epithelial morphogenesis. We review here the advantages of Drosophila as a model genetic organism, the morphogenesis of the epithelial structures of the posterior gut, and what is known about the genetic requirements to form these structures. In overview, primordia are patterned by expression of hierarchies of transcription factors; this leads to localized expression of cell signaling molecules, and finally, to the least understood step: modulation of cell adhesion and cell shape. We describe approaches to identify additional genes that are required for morphogenesis of these simple epithelia, particularly those that might play a structural role by affecting cell adhesion and cell shape.

  20. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power several complex optimization algorithms have emerged, but none of the algorithms gives a unique and very best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
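
    Exact computation of Tukey's half-space depth in several dimensions is involved; the sketch below gives a simple Monte Carlo upper bound based on random projections (an assumption for illustration, not necessarily the algorithm used in the study). Parameter vectors with larger depth are more centrally located within the set of well-performing vectors.

    ```python
    # Approximate Tukey half-space depth via random projections, minimal sketch.
    import numpy as np

    def approx_halfspace_depth(x, points, n_dirs=5000, seed=None):
        """Monte Carlo upper bound on the Tukey depth of x within `points` (n_points, n_params)."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float)
        points = np.asarray(points, dtype=float)
        dirs = rng.standard_normal((n_dirs, x.size))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # random unit directions
        proj_x = dirs @ x                                     # (n_dirs,)
        proj_pts = dirs @ points.T                            # (n_dirs, n_points)
        frac_above = (proj_pts >= proj_x[:, None]).mean(axis=1)
        return frac_above.min()                               # deeper points score higher
    ```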

  1. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    OpenAIRE

    Hadiyanto Hadiyanto; AJB van Boxtel

    2012-01-01

    Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first, heat and mass transfer related parameters, then the parameters related to product transformations and finally pro...

  2. Parameter counting in models with global symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Berger, Joshua [Institute for High Energy Phenomenology, Newman Laboratory of Elementary Particle Physics, Cornell University, Ithaca, NY 14853 (United States)], E-mail: jb454@cornell.edu; Grossman, Yuval [Institute for High Energy Phenomenology, Newman Laboratory of Elementary Particle Physics, Cornell University, Ithaca, NY 14853 (United States)], E-mail: yuvalg@lepp.cornell.edu

    2009-05-18

    We present rules for determining the number of physical parameters in models with exact flavor symmetries. In such models the total number of parameters (physical and unphysical) needed to describe a matrix is less than in a model without the symmetries. Several toy examples are studied in order to demonstrate the rules. The use of global symmetries in studying the minimally supersymmetric standard model (MSSM) is examined.

  3. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian… The method is based on a modified version of the EM algorithm. Experimental results for a deformable template used for textile inspection are presented…

  4. Cosmological models with constant deceleration parameter

    Energy Technology Data Exchange (ETDEWEB)

    Berman, M.S.; de Mello Gomide, F.

    1988-02-01

    Berman presented elsewhere a law of variation for Hubble's parameter that yields constant deceleration parameter models of the universe. By analyzing Einstein, Pryce-Hoyle and Brans-Dicke cosmologies, we derive here the necessary relations in each model, considering a perfect fluid.

  5. Trait Characteristics of Diffusion Model Parameters

    Directory of Open Access Journals (Sweden)

    Anna-Lena Schubert

    2016-07-01

    Full Text Available Cognitive modeling of response time distributions has seen a huge rise in popularity in individual differences research. In particular, several studies have shown that individual differences in the drift rate parameter of the diffusion model, which reflects the speed of information uptake, are substantially related to individual differences in intelligence. However, if diffusion model parameters are to reflect trait-like properties of cognitive processes, they have to qualify as trait-like variables themselves, i.e., they have to be stable across time and consistent over different situations. To assess their trait characteristics, we conducted a latent state-trait analysis of diffusion model parameters estimated from three response time tasks that 114 participants completed at two laboratory sessions eight months apart. Drift rate, boundary separation, and non-decision time parameters showed a great temporal stability over a period of eight months. However, the coefficients of consistency and reliability were only low to moderate and highest for drift rate parameters. These results show that the consistent variance of diffusion model parameters across tasks can be regarded as temporally stable ability parameters. Moreover, they illustrate the need for using broader batteries of response time tasks in future studies on the relationship between diffusion model parameters and intelligence.

  6. Parameter identification in the logistic STAR model

    DEFF Research Database (Denmark)

    Ekner, Line Elvstrøm; Nejstgaard, Emil

    We propose a new and simple parametrization of the so-called speed of transition parameter of the logistic smooth transition autoregressive (LSTAR) model. The new parametrization highlights that a consequence of the well-known identification problem of the speed of transition parameter is that...

  7. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    Science.gov (United States)

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
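
    For orientation, the sketch below shows a generic, single-time-scale Hamiltonian Monte Carlo sampler on an arbitrary log-posterior. It does not reproduce the polymer reinterpretation or the multiple-time-scale integrator of the paper; the target density, step size, and trajectory length are toy assumptions.

```python
import numpy as np

def hmc_sample(log_post, grad_log_post, theta0, n_samples=1000,
               step=0.05, n_leap=20, rng=None):
    """Basic HMC: unit-mass momenta, leapfrog integrator, Metropolis accept step."""
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.normal(size=theta.shape)
        theta_new, p_new = theta.copy(), p.copy()
        # leapfrog integration of the Hamiltonian dynamics
        p_new += 0.5 * step * grad_log_post(theta_new)
        for _ in range(n_leap - 1):
            theta_new += step * p_new
            p_new += step * grad_log_post(theta_new)
        theta_new += step * p_new
        p_new += 0.5 * step * grad_log_post(theta_new)
        # Metropolis correction on the total "energy"
        h_old = -log_post(theta) + 0.5 * p @ p
        h_new = -log_post(theta_new) + 0.5 * p_new @ p_new
        if np.log(rng.uniform()) < h_old - h_new:
            theta = theta_new
        samples.append(theta.copy())
    return np.array(samples)

# toy target: a standard 2-D Gaussian posterior
log_post = lambda th: -0.5 * th @ th
grad_log_post = lambda th: -th
draws = hmc_sample(log_post, grad_log_post, np.zeros(2))
```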

  8. Application of lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded into a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil. Subsequently, the assembly of the dynamic stiffness matrix for the foundation is considered, and the solution for obtaining the steady-state response when using lumped-parameter models is given. (au)

  9. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    Directory of Open Access Journals (Sweden)

    Hadiyanto Hadiyanto

    2012-05-01

    Full Text Available Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure, i.e., first the heat and mass transfer related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was a fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted the behavior well under dynamic convective operation and under combined convective and microwave operation. It was expected that the suitability between model and baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels. Abstract: PARAMETER ESTIMATION IN A BREAD BAKING PROCESS MODEL. Bread product quality is highly dependent on the baking process used. A model developed with qualitative and quantitative methods was calibrated by experiments at a temperature of 200°C and in combination with microwave at 100 W. The model parameters were estimated in a stepwise procedure, i.e., first the heat and mass transfer parameters, then the parameters of the transformation model, and

  10. Statefinder parameters in two dark energy models

    CERN Document Server

    Panotopoulos, Grigoris

    2007-01-01

    The statefinder parameters ($r,s$) in two dark energy models are studied. In the first, we discuss in four-dimensional General Relativity a two fluid model, in which dark energy and dark matter are allowed to interact with each other. In the second model, we consider the DGP brane model generalized by taking a possible energy exchange between the brane and the bulk into account. We determine the values of the statefinder parameters that correspond to the unique attractor of the system at hand. Furthermore, we produce plots in which we show $s,r$ as functions of red-shift, and the ($s-r$) plane for each model.

  11. Parameter Symmetry of the Interacting Boson Model

    CERN Document Server

    Shirokov, A M; Smirnov, Yu F; Shirokov, Andrey M.; Smirnov, Yu. F.

    1998-01-01

    We discuss the symmetry of the parameter space of the interacting boson model (IBM). It is shown that for any set of the IBM Hamiltonian parameters (with the only exception of the U(5) dynamical symmetry limit) one can always find another set that generates the equivalent spectrum. We discuss the origin of the symmetry and its relevance for physical applications.

  12. Wind Farm Decentralized Dynamic Modeling With Parameters

    DEFF Research Database (Denmark)

    Soltani, Mohsen; Shakeri, Sayyed Mojtaba; Grunnet, Jacob Deleuran;

    2010-01-01

    Development of dynamic wind flow models for wind farms is part of the research in the European FP7 research project AEOLUS. The objective of this report is to provide decentralized dynamic wind flow models with parameters. The report presents a structure for decentralized flow models with inputs from...

  13. Setting Parameters for Biological Models With ANIMO

    NARCIS (Netherlands)

    Schivo, Stefano; Scholma, Jetse; Karperien, Hermanus Bernardus Johannes; Post, Janine Nicole; van de Pol, Jan Cornelis; Langerak, Romanus; André, Étienne; Frehse, Goran

    2014-01-01

    ANIMO (Analysis of Networks with Interactive MOdeling) is software for modeling biological networks, such as signaling, metabolic, or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions

  14. A Bayesian Approach for Parameter Estimation and Prediction using a Computationally Intensive Model

    CERN Document Server

    Higdon, Dave; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2014-01-01

    Bayesian methods have been very successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where $\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \eta(\theta) + \epsilon$, where $\epsilon$ accounts for measurement and possibly other error sources. When non-linearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and non-standard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. While quite generally applicable, MCMC requires thousands, or even millions, of evaluations of the physics model $\eta(\cdot)$. This is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we pr...
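
    The sketch below shows the surrogate-accelerated idea schematically: a cheap stand-in for the expensive model $\eta(\theta)$ is evaluated inside a random-walk Metropolis sampler. The "emulator" here is just a placeholder function, not a trained Gaussian-process surrogate, and the data, prior, and proposal scale are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def emulator(theta):
    # stand-in for a cheap surrogate of the expensive physics model eta(theta)
    return np.array([theta[0] + theta[1] ** 2, theta[0] * theta[1]])

y_obs = np.array([1.2, 0.3])     # hypothetical measurements
sigma = 0.1                      # assumed measurement error standard deviation

def log_post(theta):
    resid = y_obs - emulator(theta)
    log_lik = -0.5 * np.sum((resid / sigma) ** 2)
    log_prior = -0.5 * np.sum(theta ** 2)     # weak Gaussian prior
    return log_lik + log_prior

# random-walk Metropolis using only emulator evaluations
theta = np.zeros(2)
chain = []
for _ in range(20000):
    prop = theta + 0.05 * rng.normal(size=2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
chain = np.array(chain)
```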

  15. Delineating Parameter Unidentifiabilities in Complex Models

    CERN Document Server

    Raman, Dhruva V; Papachristodoulou, Antonis

    2016-01-01

    Scientists use mathematical modelling to understand and predict the properties of complex physical systems. In highly parameterised models there often exist relationships between parameters over which model predictions are identical, or nearly so. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, and the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast timescale subsystems, as well as the regimes in which such approximations are valid. We base our algorithm on a novel quantification of regional parametric sensitivity: multiscale sloppiness. Traditional...

  16. Evaluating the Adequacy of Molecular Clock Models Using Posterior Predictive Simulations.

    Science.gov (United States)

    Duchêne, David A; Duchêne, Sebastian; Holmes, Edward C; Ho, Simon Y W

    2015-11-01

    Molecular clock models are commonly used to estimate evolutionary rates and timescales from nucleotide sequences. The goal of these models is to account for rate variation among lineages, such that they are assumed to be adequate descriptions of the processes that generated the data. A common approach for selecting a clock model for a data set of interest is to examine a set of candidates and to select the model that provides the best statistical fit. However, this can lead to unreliable estimates if all the candidate models are actually inadequate. For this reason, a method of evaluating absolute model performance is critical. We describe a method that uses posterior predictive simulations to assess the adequacy of clock models. We test the power of this approach using simulated data and find that the method is sensitive to bias in the estimates of branch lengths, which tends to occur when using underparameterized clock models. We also compare the performance of the multinomial test statistic, originally developed to assess the adequacy of substitution models, but find that it has low power in identifying the adequacy of clock models. We illustrate the performance of our method using empirical data sets from coronaviruses, simian immunodeficiency virus, killer whales, and marine turtles. Our results indicate that methods of investigating model adequacy, including the one proposed here, should be routinely used in combination with traditional model selection in evolutionary studies. This will reveal whether a broader range of clock models needs to be considered in phylogenetic analysis.
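
    The general posterior predictive logic can be sketched independently of clock models: draw parameters from the posterior, simulate replicate data sets, and compare a test statistic against its observed value. The toy Poisson example below is only a placeholder; the clock-model simulator and the specific test statistics of the paper are not reproduced.

```python
import numpy as np

def posterior_predictive_pvalue(posterior_draws, simulate, statistic, data, rng=None):
    """Generic posterior predictive check: simulate one replicate data set per
    posterior draw and compare a test statistic against the observed value."""
    rng = np.random.default_rng(rng)
    obs_stat = statistic(data)
    rep_stats = np.array([statistic(simulate(theta, rng)) for theta in posterior_draws])
    # tail probability of the observed statistic under the predictive distribution
    return np.mean(rep_stats >= obs_stat), rep_stats

# toy example: were the data plausibly generated by a Poisson model?
rng = np.random.default_rng(0)
data = rng.poisson(4.0, size=50)
# conjugate Gamma posterior for the Poisson rate under a Gamma(1, 1) prior
posterior_draws = rng.gamma(data.sum() + 1, 1.0 / (len(data) + 1), size=500)
simulate = lambda lam, r: r.poisson(lam, size=len(data))
pval, _ = posterior_predictive_pvalue(posterior_draws, simulate, np.var, data)
```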

  17. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution

    Science.gov (United States)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
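
    As a point of reference for the learning dynamics discussed above, the sketch below implements the plain steepest-ascent baseline for a pairwise Ising (maximum entropy) model, with model moments estimated by Gibbs sampling. It does not implement the rectified dynamics of the paper; the data, learning rate, and sampler settings are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sample(h, J, n_samples=2000, n_burn=200):
    """Gibbs sampler for a pairwise Ising model with spins in {-1, +1}."""
    n = len(h)
    s = rng.choice([-1, 1], size=n)
    out = []
    for t in range(n_samples + n_burn):
        for i in range(n):
            field = h[i] + J[i] @ s - J[i, i] * s[i]
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
            s[i] = 1 if rng.uniform() < p_up else -1
        if t >= n_burn:
            out.append(s.copy())
    return np.array(out)

# toy "recorded" binary data and its empirical moments
data = rng.choice([-1, 1], size=(5000, 8))
m_data = data.mean(axis=0)
C_data = data.T @ data / len(data)

# plain moment-matching gradient ascent (not the rectified dynamics of the paper)
h = np.zeros(8)
J = np.zeros((8, 8))
for it in range(50):
    samp = gibbs_sample(h, J)
    m_model = samp.mean(axis=0)
    C_model = samp.T @ samp / len(samp)
    h += 0.1 * (m_data - m_model)
    J += 0.1 * (C_data - C_model)
    np.fill_diagonal(J, 0.0)
```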

  18. Parameter Estimation, Model Reduction and Quantum Filtering

    CERN Document Server

    Chase, Bradley A

    2009-01-01

    This dissertation explores the topics of parameter estimation and model reduction in the context of quantum filtering. Chapters 2 and 3 provide a review of classical and quantum probability theory, stochastic calculus and filtering. Chapter 4 studies the problem of quantum parameter estimation and introduces the quantum particle filter as a practical computational method for parameter estimation via continuous measurement. Chapter 5 applies these techniques in magnetometry and studies the estimator's uncertainty scalings in a double-pass atomic magnetometer. Chapter 6 presents an efficient feedback controller for continuous-time quantum error correction. Chapter 7 presents an exact model of symmetric processes of collective qubit systems.

  19. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of the comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, i.e., when in expectation each comparison set of that cardinality occurs the same number of times, the mean squared error decreases with the cardinality of comparison sets for a broad class of Thurstone choice models, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report an empirical evaluation of some claims and key parameters revealed by the theory, using both synthetic and real-world input data from popular sport competitions and online labor platforms.
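
    For the Bradley-Terry special case mentioned above (pair comparisons), maximum likelihood estimation of the strength parameters reduces to a simple concave optimization. The sketch below fits it by gradient ascent on synthetic comparisons; the true strengths, sample size, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic pair comparisons generated under known item strengths
true_theta = np.array([1.5, 0.5, 0.0, -0.5, -1.5])
n_items = len(true_theta)
pairs = rng.integers(0, n_items, size=(5000, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
p_win = 1 / (1 + np.exp(-(true_theta[pairs[:, 0]] - true_theta[pairs[:, 1]])))
win_first = rng.uniform(size=len(pairs)) < p_win
winners = np.where(win_first, pairs[:, 0], pairs[:, 1])
losers = np.where(win_first, pairs[:, 1], pairs[:, 0])

# gradient ascent on the Bradley-Terry log-likelihood
theta = np.zeros(n_items)
for _ in range(500):
    p = 1 / (1 + np.exp(-(theta[winners] - theta[losers])))
    grad = np.zeros(n_items)
    np.add.at(grad, winners, 1 - p)
    np.add.at(grad, losers, -(1 - p))
    theta += 0.01 * grad
    theta -= theta.mean()   # strengths are identified only up to a common shift
```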

  20. Delineating parameter unidentifiabilities in complex models

    Science.gov (United States)

    Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis

    2017-03-01

    Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
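
    The local Fisher-information analysis that the article argues is insufficient on its own is still the usual starting point, and it can be sketched in a few lines: build a finite-difference Jacobian of the model outputs with respect to the parameters and inspect the eigen-spectrum of J'J. The toy model below, which depends only on a product of two rate constants, is hypothetical.

```python
import numpy as np

def fisher_information(model, theta, sigma=1.0, eps=1e-6):
    """Local sensitivity analysis: finite-difference Jacobian of the model
    outputs with respect to the parameters and the resulting Fisher matrix."""
    theta = np.asarray(theta, dtype=float)
    y0 = model(theta)
    J = np.zeros((len(y0), len(theta)))
    for j in range(len(theta)):
        d = np.zeros_like(theta)
        d[j] = eps
        J[:, j] = (model(theta + d) - y0) / eps
    return J.T @ J / sigma**2

# toy two-parameter model where only the product k1*k2 matters -> one near-zero eigenvalue
model = lambda th: np.exp(-th[0] * th[1] * np.linspace(0, 1, 20))
F = fisher_information(model, [1.0, 2.0])
eigval, eigvec = np.linalg.eigh(F)
# near-zero eigenvalues flag locally unidentifiable parameter combinations;
# the corresponding eigenvectors give the direction of the redundancy
```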

  1. Systematic parameter inference in stochastic mesoscopic modeling

    Science.gov (United States)

    Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy-conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
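
    A minimal version of the response-surface step can be sketched with a tensor-product Legendre basis and a sparse regression fit; here an off-the-shelf Lasso stands in for the compressive-sensing solver, and the target property, parameters, and sample counts are synthetic, not taken from the DPD study.

```python
import numpy as np
from numpy.polynomial import legendre
from itertools import product
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def legendre_basis(X, degree):
    """Tensor-product Legendre basis for parameters scaled to [-1, 1]."""
    cols = []
    for orders in product(range(degree + 1), repeat=X.shape[1]):
        if sum(orders) <= degree:
            col = np.ones(len(X))
            for j, d in enumerate(orders):
                c = np.zeros(d + 1)
                c[d] = 1.0
                col = col * legendre.legval(X[:, j], c)
            cols.append(col)
    return np.column_stack(cols)

# a handful of "simulation" samples of a target property versus 3 model parameters
X = rng.uniform(-1, 1, size=(40, 3))
y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + 0.01 * rng.normal(size=40)

Phi = legendre_basis(X, degree=3)
surrogate = Lasso(alpha=1e-3, max_iter=50000).fit(Phi, y)   # sparse gPC coefficients
# the fitted surrogate can then stand in for the expensive simulation when
# searching parameter space for values that reproduce a desired target property
```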

  2. Application of lumped-parameter models

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Liingaard, Morten

    This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded into a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil (section 1.1). Subse...

  3. Models and parameters for environmental radiological assessments

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C W [ed.

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)

  4. Estimation of Model Parameters for Steerable Needles

    Science.gov (United States)

    Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.

    2010-01-01

    Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%. PMID:21643451

  5. Estimation of Model Parameters for Steerable Needles.

    Science.gov (United States)

    Park, Wooram; Reed, Kyle B; Okamura, Allison M; Chirikjian, Gregory S

    2010-01-01

    Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%.

  6. An Optimization Model of Tunnel Support Parameters

    Directory of Open Access Journals (Sweden)

    Su Lijuan

    2015-05-01

    Full Text Available An optimization model was developed to obtain the ideal values of the primary support parameters of tunnels, which are wide-ranging in high-speed railway design codes when the surrounding rock is at levels III, IV, and V. First, several sets of experiments were designed and simulated using the FLAC3D software under an orthogonal experimental design. Six factors, namely, level of surrounding rock, buried depth of tunnel, lateral pressure coefficient, anchor spacing, anchor length, and shotcrete thickness, were considered. Second, a regression equation was generated by conducting a multiple linear regression analysis following the analysis of the simulation results. Finally, the optimization model of support parameters was obtained by solving the regression equation using the least squares method. In practical projects, the optimized values of support parameters could be obtained by integrating known parameters into the proposed model. In this work, the proposed model was verified on the basis of the Liuyang River Tunnel Project. Results show that the optimization model significantly reduces related costs. The proposed model can also be used as a reliable reference for other high-speed railway tunnels.
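
    The regression step can be sketched as fitting a multiple linear model to designed-simulation results by least squares and then evaluating it for new parameter combinations. All factor values and responses below are invented placeholders, not results from the FLAC3D study.

```python
import numpy as np

# rows: simulated cases from an orthogonal design (illustrative values only)
# columns: rock level, depth (m), lateral pressure coeff., anchor spacing (m),
#          anchor length (m), shotcrete thickness (m)
X = np.array([
    [3, 100, 0.8, 1.0, 3.0, 0.20],
    [3, 300, 1.2, 1.2, 3.5, 0.25],
    [3, 500, 1.0, 1.4, 4.0, 0.30],
    [4, 100, 1.2, 1.4, 3.0, 0.25],
    [4, 300, 1.0, 1.0, 4.0, 0.30],
    [4, 500, 0.8, 1.2, 3.5, 0.20],
    [5, 100, 1.0, 1.2, 4.0, 0.20],
    [5, 300, 0.8, 1.4, 3.5, 0.30],
])
y = np.array([12.1, 18.4, 22.0, 20.3, 25.2, 28.7, 30.8, 35.5])  # e.g., simulated crown settlement (mm)

# multiple linear regression fitted by least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(case):
    """Evaluate the fitted regression for a new combination of known parameters."""
    return coef[0] + coef[1:] @ np.asarray(case)
```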

  7. Analysis of Modeling Parameters on Threaded Screws.

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Miquela S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brake, Matthew Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vangoethem, Douglas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-01

    Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics, and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to appropriately model these bolted joints. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper will explore different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influences of these parameters and will aid in finding the optimal method to model bolted connections.

  8. Biomechanical comparison of anterior Caspar plate and three-level posterior fixation techniques in a human cadaveric model.

    Science.gov (United States)

    Traynelis, V C; Donaher, P A; Roach, R M; Kojimoto, H; Goel, V K

    1993-07-01

    Traumatic cervical spine injuries have been successfully stabilized with plates applied to the anterior vertebral bodies. Previous biomechanical studies suggest, however, that these devices may not provide adequate stability if the posterior ligaments are disrupted. To study this problem, the authors simulated a C-5 teardrop fracture with posterior ligamentous instability in human cadaveric spines. This model was used to compare the immediate biomechanical stability of anterior cervical plating, from C-4 to C-6, to that provided by a posterior wiring construct over the same levels. Stability was tested in six modes of motion: flexion, extension, right and left lateral bending, and right and left axial rotation. The injured/plate-stabilized spines were more stable than the intact specimens in all modes of testing. The injured/posterior-wired specimens were more stable than the intact spines in axial rotation and flexion. They were not as stable as the intact specimens in the lateral bending or extension testing modes. The data were normalized with respect to the motion of the uninjured spine and compared using repeated-measures analysis of variance, the results of which indicate that anterior plating provides significantly more stability in extension and lateral bending than does posterior wiring. The plate was more stable than the posterior construct in flexion loading; however, the difference was not statistically significant. The two constructs provide similar stability in axial rotation. This study provides biomechanical support for the continued use of bicortical anterior plate fixation in the setting of traumatic cervical spine instability.

  9. The Lund Model at Nonzero Impact Parameter

    CERN Document Server

    Janik, R A; Janik, Romuald A.; Peschanski, Robi

    2003-01-01

    We extend the formulation of the longitudinal 1+1 dimensional Lund model to nonzero impact parameter using the minimal area assumption. Complete formulae for the string breaking probability and the momenta of the produced mesons are derived using the string worldsheet Minkowskian helicoid geometry. For strings stretched into the transverse dimension, we find a probability distribution with a slope linear in m_T, similar to the statistical models but without any thermalization assumptions.

  10. IMPROVEMENT OF FLUID PIPE LUMPED PARAMETER MODEL

    Institute of Scientific and Technical Information of China (English)

    Kong Xiaowu; Wei Jianhua; Qiu Minxiu; Wu Genmao

    2004-01-01

    The traditional lumped-parameter model of a fluid pipe is introduced and its drawbacks are pointed out. Furthermore, two suggestions are put forward to remove these drawbacks. Firstly, the structure of the equivalent circuit is modified; then the evaluation of the equivalent fluid resistance is changed to take frequency-dependent friction into account. Both simulation and experiment prove that this model precisely characterizes the dynamic behavior of fluid in a pipe.

  11. Consistent Stochastic Modelling of Meteocean Design Parameters

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Sterndorff, M. J.

    2000-01-01

    Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...... velocity, and water level is presented. The stochastic model includes statistical uncertainty and dependency between the four stochastic variables. Further, a new stochastic model for annual maximum directional significant wave heights is presented. The model includes dependency between the maximum wave...... height from neighboring directional sectors. Numerical examples are presented where the models are calibrated using the Maximum Likelihood method to data from the central part of the North Sea. The calibration of the directional distributions is made such that the stochastic model for the omnidirectional...

  12. Order Parameters of the Dilute A Models

    CERN Document Server

    Warnaar, S O; Seaton, K A; Nienhuis, B

    1993-01-01

    The free energy and local height probabilities of the dilute A models with broken $\mathbb{Z}_2$ symmetry are calculated analytically using inversion and corner transfer matrix methods. These models possess four critical branches. The first two branches provide new realisations of the unitary minimal series and the other two branches give a direct product of this series with an Ising model. We identify the integrable perturbations which move the dilute A models away from the critical limit. Generalised order parameters are defined and their critical exponents extracted. The associated conformal weights are found to occur on the diagonal of the relevant Kac table. In an appropriate regime the dilute A$_3$ model lies in the universality class of the Ising model in a magnetic field. In this case we obtain the magnetic exponent $\delta=15$ directly, without the use of scaling relations.

  13. Testing Linear Models for Ability Parameters in Item Response Models

    NARCIS (Netherlands)

    Glas, Cees A.W.; Hendrawan, Irene

    2005-01-01

    Methods for testing hypotheses concerning the regression parameters in linear models for the latent person parameters in item response models are presented. Three tests are outlined: a likelihood ratio test, a Lagrange multiplier test, and a Wald test. The tests are derived in a marginal maximum likelihood...

  14. Modelling spin Hamiltonian parameters of molecular nanomagnets.

    Science.gov (United States)

    Gupta, Tulika; Rajaraman, Gopalan

    2016-07-12

    Molecular nanomagnets encompass a wide range of coordination complexes possessing several potential applications. A formidable challenge in realizing these potential applications lies in controlling the magnetic properties of these clusters. Microscopic spin Hamiltonian (SH) parameters describe the magnetic properties of these clusters, and viable ways to control these SH parameters are highly desirable. Computational tools play a proactive role in this area, where SH parameters such as isotropic exchange interaction (J), anisotropic exchange interaction (Jx, Jy, Jz), double exchange interaction (B), zero-field splitting parameters (D, E) and g-tensors can be computed reliably using X-ray structures. In this feature article, we have attempted to provide a holistic view of the modelling of these SH parameters of molecular magnets. The determination of J includes various classes of molecules, from di- and polynuclear Mn complexes to the {3d-Gd}, {Gd-Gd} and {Gd-2p} classes of complexes. The estimation of anisotropic exchange coupling includes the exchange between an isotropic metal ion and an orbitally degenerate 3d/4d/5d metal ion. The double-exchange section contains some illustrative examples of mixed-valence systems, and the section on the estimation of zfs parameters covers some mononuclear transition metal complexes possessing very large axial zfs parameters. The section on the computation of g-anisotropy exclusively covers studies on mononuclear Dy(III) and Er(III) single-ion magnets. The examples depicted in this article clearly illustrate that computational tools not only aid in interpreting and rationalizing the observed magnetic properties but possess the potential to predict new-generation MNMs.

  15. Systematic parameter inference in stochastic mesoscopic modeling

    CERN Document Server

    Lei, Huan; Li, Zhen; Karniadakis, George

    2016-01-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are sparse. The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples especially for systems with high dimensional parametric space....

  16. Modeling the Effects of Spaceflight on the Posterior Eye in VIIP

    Science.gov (United States)

    Ethier, C. R.; Feola, A. J.; Raykin, J.; Mulugeta, L.; Gleason, R.; Myers, J. G.; Nelson, E. S.; Samuels, B.

    2015-01-01

    Purpose: Visual Impairment and Intracranial Pressure (VIIP) syndrome is a new and significant health concern for long-duration space missions. Its etiology is unknown, but it is thought to involve elevated intracranial pressure (ICP) that induces connective tissue changes and remodeling in the posterior eye (Alexander et al. 2012). Here we study the acute biomechanical response of the lamina cribrosa (LC) and optic nerve to elevations in ICP utilizing finite element (FE) modeling. Methods: Using the geometry of the posterior eye from previous axisymmetric FE models (Sigal et al. 2004), we added an elongated optic nerve and optic nerve sheath, including the pia and dura. Tissues were modeled as linear elastic solids. Intraocular pressure and central retinal vessel pressures were set at 15 mmHg and 55 mmHg, respectively. ICP varied from 0 mmHg (suitable for standing on Earth) to 30 mmHg (representing severe intracranial hypertension, thought to occur in space flight). We focused on strains and deformations in the LC and optic nerve (within 1 mm of the LC) since we hypothesize that they may contribute to vision loss in VIIP. Results: Elevating ICP from 0 to 30 mmHg significantly altered the strain distributions in both the LC and optic nerve (Figure), notably leading to more extreme strain values in both tension and compression. Specifically, the extreme (95th percentile) tensile strains in the LC and optic nerve increased by 2.7- and 3.8-fold, respectively. Similarly, elevation of ICP led to a 2.5- and 3.3-fold increase in extreme (5th percentile) compressive strains in the LC and optic nerve, respectively. Conclusions: The elevated ICP thought to occur during spaceflight leads to large acute changes in the biomechanical environment of the LC and optic nerve, and we hypothesize that such changes can activate mechanosensitive cells and invoke tissue remodeling. These simulations provide a foundation for more comprehensive studies of microgravity effects on human vision, e

  17. Efficient posterior exploration of a high-dimensional groundwater model from two-stage MCMC simulation and polynomial chaos expansion

    NARCIS (Netherlands)

    Laloy, E.; Rogiers, B.; Vrugt, J.A.; Mallants, D.; Jacques, D.

    2013-01-01

    This study reports on two strategies for accelerating posterior inference of a highly parameterized and CPU-demanding groundwater flow model. Our method builds on previous stochastic collocation approaches, e.g., Marzouk and Xiu (2009) and Marzouk and Najm (2009), and uses generalized polynomial

  18. Modelling tourists arrival using time varying parameter

    Science.gov (United States)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

    The importance of tourism and its related sectors in supporting economic development and poverty reduction in many countries has increased researchers' attention to studying and modeling tourist arrivals. This work aims to demonstrate the time-varying parameter (TVP) technique for modeling the arrival of Korean tourists in Bali. The number of Korean tourists visiting Bali in the period January 2010 to December 2015 was used to model the number of Korean tourists to Bali (KOR) as the dependent variable. The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP and its parameters were approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
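
    A generic TVP regression with random-walk coefficients can be estimated with the Kalman filter recursions sketched below. The state and observation noise variances, initial covariance, and synthetic predictor series are assumptions for illustration, not the study's data or settings.

```python
import numpy as np

def tvp_kalman(y, X, q=1e-4, r=1.0):
    """Time-varying-parameter regression y_t = x_t' beta_t + e_t with
    beta_t = beta_{t-1} + w_t, estimated by a Kalman filter."""
    n, k = X.shape
    beta = np.zeros(k)                 # state estimate (regression coefficients)
    P = np.eye(k) * 1e2                # diffuse initial state covariance
    Q, R = np.eye(k) * q, r
    betas = np.zeros((n, k))
    for t in range(n):
        # predict (random-walk state): mean unchanged, covariance inflated
        P = P + Q
        x = X[t]
        # update with observation y_t
        S = x @ P @ x + R
        K = P @ x / S
        beta = beta + K * (y[t] - x @ beta)
        P = P - np.outer(K, x) @ P
        betas[t] = beta
    return betas

# illustrative use with synthetic predictors (e.g., exchange rate and inflation series)
rng = np.random.default_rng(0)
T = 72
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
true_beta = np.cumsum(rng.normal(scale=0.05, size=(T, 3)), axis=0) + np.array([2.0, 0.5, -0.3])
y = np.einsum("tk,tk->t", X, true_beta) + rng.normal(scale=0.5, size=T)
beta_hat = tvp_kalman(y, X)
```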

  19. Estimation of Model and Parameter Uncertainty For A Distributed Rainfall-runoff Model

    Science.gov (United States)

    Engeland, K.

    The distributed rainfall-runoff model Ecomag is applied as a regional model for nine catchments in the NOPEX area in Sweden. Ecomag calculates streamflow on a daily time resolution. The posterior distribution of the model parameters is conditioned on the observed streamflow in all nine catchments, and calculated using Bayesian statistics. The distribution is estimated by Markov chain Monte Carlo (MCMC). The Bayesian method requires a definition of the likelihood of the parameters. Two alternative formulations are used. The first formulation is a subjectively chosen objective function describing the goodness of fit between the simulated and observed streamflow as it is used in the GLUE framework. The second formulation is to use a more statistically correct likelihood function that describes the simulation errors. The simulation error is defined as the difference between log-transformed observed and simulated streamflows. A statistical model for the simulation errors is constructed. Some parameters are dependent on the catchment, while others depend on climate. The statistical and the hydrological parameters are estimated simultaneously. Confidence intervals, due to the uncertainty of the Ecomag parameters, for the simulated streamflow are compared for the two likelihood functions. Confidence intervals based on the statistical model for the simulation errors are also calculated. The results indicate that the parameter uncertainty depends on the formulation of the likelihood function. The subjectively chosen likelihood function gives relatively wide confidence intervals whereas the 'statistical' likelihood function gives more narrow confidence intervals. The statistical model for the simulation errors indicates that the structural errors of the model are at least as important as the parameter uncertainty.
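
    The two likelihood notions contrasted above can be written down compactly: an informal GLUE-style efficiency score and a Gaussian log-likelihood on log-transformed residuals. The constant error variance and the exact error model are simplifying assumptions, not the Ecomag study's full formulation; flows are assumed strictly positive.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, the kind of goodness-of-fit score often used
    as an informal likelihood measure in the GLUE framework."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def log_likelihood_logspace(obs, sim, sigma):
    """Gaussian log-likelihood of simulation errors defined as the difference
    between log-transformed observed and simulated streamflow."""
    e = np.log(obs) - np.log(sim)
    return -0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (e / sigma) ** 2)
```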

  20. Parameter estimation, model reduction and quantum filtering

    Science.gov (United States)

    Chase, Bradley A.

    This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The last is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and therefore must be filtered in order to extract information about the target system. This is analogous to the classical filtering problem in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter when given continuous measurements of the probe system to which it couples. For cases when the probe takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. The technique involves double-passing a probe laser field through the atomic system, giving

  1. Using Bayesian hierarchical parameter estimation to assess the generalizability of cognitive models of choice.

    Science.gov (United States)

    Scheibehenne, Benjamin; Pachur, Thorsten

    2015-04-01

    To be useful, cognitive models with fitted parameters should show generalizability across time and allow accurate predictions of future observations. It has been proposed that hierarchical procedures yield better estimates of model parameters than do nonhierarchical, independent approaches, because the former's estimates for individuals within a group can mutually inform each other. Here, we examine Bayesian hierarchical approaches to evaluating model generalizability in the context of two prominent models of risky choice: cumulative prospect theory (Tversky & Kahneman, 1992) and the transfer-of-attention-exchange model (Birnbaum & Chavez, 1997). Using empirical data of risky choices collected for each individual at two time points, we compared the use of hierarchical versus independent, nonhierarchical Bayesian estimation techniques to assess two aspects of model generalizability: parameter stability (across time) and predictive accuracy. The relative performance of hierarchical versus independent estimation varied across the different measures of generalizability. The hierarchical approach improved parameter stability (in terms of a lower absolute discrepancy of parameter values across time) and predictive accuracy (in terms of deviance; i.e., likelihood). With respect to test-retest correlations and posterior predictive accuracy, however, the hierarchical approach did not outperform the independent approach. Further analyses suggested that this was due to strong correlations between some parameters within both models. Such intercorrelations make it difficult to identify and interpret single parameters and can induce high degrees of shrinkage in hierarchical models. Similar findings may also occur in the context of other cognitive models of choice.

  2. Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification

    Science.gov (United States)

    Behmanesh, Iman; Moaveni, Babak

    2016-07-01

    This paper presents a hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the considered updating structural parameter, with its mean and variance modeled as functions of temperature and excitation amplitude. The identified modal parameters over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies with those identified from measured data after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and that accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.

  3. Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean

    Science.gov (United States)

    Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.

    2011-12-01

    Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM) varying parameters that affect climate sensitivity, vertical ocean mixing, and effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use Markov Chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling

  4. Parameter optimization in S-system models

    Directory of Open Access Journals (Sweden)

    Vasconcelos Ana

    2008-04-01

    Full Text Available Background: The inverse problem of identifying the topology of biological networks from their time series responses is a cornerstone challenge in systems biology. We tackle this challenge here through the parameterization of S-system models. It was previously shown that parameter identification can be performed as an optimization based on the decoupling of the differential S-system equations, which results in a set of algebraic equations. Results: A novel parameterization solution is proposed for the identification of S-system models from time series when no information about the network topology is known. The method is based on eigenvector optimization of a matrix formed from multiple regression equations of the linearized decoupled S-system. Furthermore, the algorithm is extended to the optimization of network topologies with constraints on metabolites and fluxes. These constraints rejoin the system in cases where it had been fragmented by decoupling. We demonstrate with synthetic time series why the algorithm can be expected to converge in most cases. Conclusion: A procedure was developed that facilitates automated reverse engineering tasks for biological networks using S-systems. The proposed method of eigenvector optimization constitutes an advancement over S-system parameter identification from time series using a recent method called Alternating Regression. The proposed method overcomes convergence issues encountered in alternating regression by identifying nonlinear constraints that restrict the search space to computationally feasible solutions. Because the parameter identification is still performed for each metabolite separately, the modularity and linear time characteristics of the alternating regression method are preserved. Simulation studies illustrate how the proposed algorithm identifies the correct network topology out of a collection of models which all fit the dynamical time series essentially equally well.

  5. Modeling of Parameters of Subcritical Assembly SAD

    CERN Document Server

    Petrochenkov, S; Puzynin, I

    2005-01-01

    The accepted conceptual design of the experimental Subcritical Assembly in Dubna (SAD) is based on the MOX core with a nominal unit capacity of 25 kW (thermal). This corresponds to the multiplication coefficient $k_{\rm eff} = 0.95$ and an accelerator beam power of 1 kW. A subcritical assembly driven with the existing 660 MeV proton accelerator at the Joint Institute for Nuclear Research has been modelled in order to choose the optimal parameters for the future experiments. The Monte Carlo method was used to simulate neutron spectra and to perform energy deposition and dose calculations. Some of the calculation results are presented in the paper.

  6. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models

    OpenAIRE

    Baker Syed; Poskar C; Junker Björn

    2011-01-01

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. Wh...

  7. Probabilities of exoplanet signals from posterior samplings

    CERN Document Server

    Tuomi, Mikko

    2011-01-01

    Estimating the marginal likelihoods is an essential feature of model selection in the Bayesian context. It is especially crucial to have good estimates when assessing the number of planets orbiting stars when the models explain the noisy data with different numbers of Keplerian signals. We introduce a simple method for approximating the marginal likelihoods in practice when a statistically representative sample from the parameter posterior density is available. We use our truncated posterior mixture estimate to receive accurate model probabilities for models with differing number of Keplerian signals in radial velocity data. We test this estimate in simple scenarios to assess its accuracy and rate of convergence in practice when the corresponding estimates calculated using deviance information criterion can be applied to receive trustworthy results for reliable comparison. As a test case, we determine the posterior probability of a planet orbiting HD 3651 given Lick and Keck radial velocity data. The posterio...

  8. Moose models with vanishing $S$ parameter

    CERN Document Server

    Casalbuoni, R; Dominici, Daniele

    2004-01-01

    In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the $S$ parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on $K$ SU(2) gauge groups, $K+1$ chiral fields and electroweak groups $SU(2)_L$ and $U(1)_Y$ at the ends of the chain of the moose. $S$ vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical non local field connecting the two ends of the moose. Then the model acquires an additional custodial symmetry which protects this result. We examine also the possibility of a strong suppression of $S$ through an exponential behavior of the link couplings as suggested by Randall Sundrum metric.

  9. Model parameters for simulation of physiological lipids

    Science.gov (United States)

    McGlinchey, Nicholas

    2016-01-01

    Coarse grain simulation of proteins in their physiological membrane environment can offer insight across timescales, but requires a comprehensive force field. Parameters are explored for multicomponent bilayers composed of unsaturated lipids DOPC and DOPE, mixed‐chain saturation POPC and POPE, and anionic lipids found in bacteria: POPG and cardiolipin. A nonbond representation obtained from multiscale force matching is adapted for these lipids and combined with an improved bonding description of cholesterol. Equilibrating the area per lipid yields robust bilayer simulations and properties for common lipid mixtures with the exception of pure DOPE, which has a known tendency to form nonlamellar phase. The models maintain consistency with an existing lipid–protein interaction model, making the force field of general utility for studying membrane proteins in physiologically representative bilayers. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26864972

  10. Model surgery technique for Le Fort I osteotomy--alteration in occlusal plane associated with upward transposition of posterior maxilla.

    Science.gov (United States)

    Yosano, Akira; Yamamoto, Masae; Shouno, Takahiro; Shiiki, Sayaka; Hamase, Maki; Kasahara, Kiyohiro; Takaki, Takashi; Takano, Nobuo; Uchiyama, Takeshi; Shibahara, Takahiko

    2005-08-01

    It is difficult to translate analytical values into accurate model surgery by traditional methods, especially when moving the posterior maxilla. This is because the information on movement of the posterior nasal spine (PNS) generated by cephalometric radiographic analysis cannot be recreated in model surgery. Therefore, we propose a method that accurately reflects such analysis and simulation of movement using Quick Ceph 2000 (Orthodontic Processing Corporation, USA). This will allow the enrichment of model surgery prior to actual surgery in cases where upward movement of the posterior maxilla is involved. All patients who participated in this study had skeletal mandibular prognathism characterized by a small occlusal plane angle with respect to the S-N plane. Cephalometric radiographs were taken and analyzed with the Quick Ceph 2000. Pre- and post-surgical evaluations were performed using Sassouni arc analysis and Ricketts analysis. Prior to transposition, we prepared an anterior occlusal bite record on a model mounted on an articulator. This bite was then used as a reference when the molar parts were to be transposed upwards. The use of an occlusal bite permitted an accurate translation of the preoperative computer simulation into model surgery, thus facilitating favorable surgical results.

  11. Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates

    CERN Document Server

    Moore, Christopher J

    2014-01-01

    Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this work a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalised over using a prior distribution constructed by applying Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform...
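
    The ingredient that makes the marginalisation above tractable is a Gaussian-process interpolant of the waveform difference over parameter space. The sketch below illustrates that step only, using scikit-learn and a made-up one-dimensional parameter and difference function; the kernel choice, training grid and the stand-in waveform_difference are assumptions for illustration, not the paper's actual templates.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      # Hypothetical "expensive" waveform-difference measure, evaluated at a few
      # training points in a one-dimensional parameter space (say, a mass ratio q).
      def waveform_difference(q):
          return 0.05 * np.sin(3.0 * q) * np.exp(-q)   # stand-in, not a real waveform

      q_train = np.linspace(0.1, 2.0, 8).reshape(-1, 1)
      d_train = waveform_difference(q_train).ravel()

      gp = GaussianProcessRegressor(
          kernel=1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-5),
          normalize_y=True)
      gp.fit(q_train, d_train)

      # Interpolated difference and its uncertainty anywhere in parameter space;
      # this is the quantity that would feed the marginalisation prior.
      q_test = np.linspace(0.1, 2.0, 200).reshape(-1, 1)
      d_mean, d_std = gp.predict(q_test, return_std=True)
      print(d_mean[:3], d_std[:3])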

  12. Uncertainty Quantification for Optical Model Parameters

    CERN Document Server

    Lovell, A E; Sarich, J; Wild, S M

    2016-01-01

    Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quantified. The purpose of this work is to investigate the uncertainties in nuclear reactions that result from fitting a given model to elastic-scattering data, as well as to study how these uncertainties propagate to the inelastic and transfer channels. We use statistical methods to determine a best fit and create corresponding 95\\% confidence bands. A simple model of the process is fit to elastic-scattering data and used to predict either inelastic or transfer cross sections. In this initial work, we assume that our model is correct, and the only uncertainties come from the variation of the fit parameters. We study a number of reactions involving neutron and deuteron p...
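
    A generic version of the fit-then-propagate workflow described above can be sketched with scipy: fit a parametric curve to noisy data, then turn the parameter covariance into approximate 95% confidence bands. The exponential toy model and noise levels below are assumptions standing in for a real optical-model calculation of a cross section.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(2)

      # Toy stand-in for a cross section vs. angle; not a real optical-potential model.
      def model(theta_deg, a, b):
          return a * np.exp(-b * np.deg2rad(theta_deg))

      theta = np.linspace(10, 160, 25)
      y_true = model(theta, 50.0, 1.2)
      y_obs = y_true * (1 + 0.05 * rng.standard_normal(theta.size))
      sigma = 0.05 * y_true

      popt, pcov = curve_fit(model, theta, y_obs, p0=[40.0, 1.0],
                             sigma=sigma, absolute_sigma=True)

      # Propagate the parameter covariance into a 95% confidence band by sampling
      # parameter sets from the fitted multivariate normal.
      par_draws = rng.multivariate_normal(popt, pcov, size=2000)
      preds = np.array([model(theta, *p) for p in par_draws])
      lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)
      print(popt)
      print(lo[:3], hi[:3])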

  13. Numerical modeling of partial discharges parameters

    Directory of Open Access Journals (Sweden)

    Kartalović Nenad M.

    2016-01-01

    Full Text Available Testing of partial discharges, and their use for diagnosing the insulation condition of high-voltage generators, transformers, cables and other high-voltage equipment, is developing rapidly. This is a result of advances in electronics as well as of growing knowledge about the processes underlying partial discharges. The aim of this paper is to contribute to a better understanding of the phenomenon of partial discharges by considering the relevant physical processes in insulation materials and insulation systems. Particular attention is paid to pre-breakdown processes, their development at the local level, and their impact on specific insulation materials. This approach to the phenomenon of partial discharges makes it possible to take the relevant discharge parameters into account more accurately and to build a better numerical model of partial discharges.

  14. Posterior capsular opacification and intraocular lens decentration. Part I: Comparison of various posterior chamber lens designs implanted in the rabbit model.

    Science.gov (United States)

    Hansen, S O; Solomon, K D; McKnight, G T; Wilbrandt, T H; Gwin, T D; O'Morchoe, D J; Tetz, M R; Apple, D J

    1988-11-01

    Experimental phacoemulsification procedures were performed in 54 Rex rabbits. In 96 eyes, posterior chamber intraocular lenses (IOLs) were implanted in the capsular sac, and 12 eyes served as controls with no lens implantation. The IOLs were divided into eight groups consisting of both one-piece and three-piece styles with various optic designs. Each lens was evaluated for the relative effect on posterior capsular opacification (PCO) and optic decentration, two of the most common complications of modern cataract surgery and IOL implantation. Optics with a convex-anterior, plano-posterior design (the type of IOL optic most frequently implanted today) had the highest incidence of PCO. With capsular fixated IOLs, the features that have a statistically significant impact on reducing PCO include (1) one-piece, all-polymethylmethacrylate (PMMA) IOL styles, (2) a biconvex or posterior convex optic design, and (3) angulated loops. Lens decentration was not affected by the optic design, but statistical analysis showed that one-piece, all-PMMA IOL construction provided the most consistent centration.

  15. HIV model parameter estimates from interruption trial data including drug efficacy and reservoir dynamics.

    Directory of Open Access Journals (Sweden)

    Rutao Luo

    Full Text Available Mathematical models based on ordinary differential equations (ODE have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3-5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients.
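
    The estimation workflow described above (nonlinear least squares for starting values, then MCMC for posterior distributions) can be sketched on a much simpler stand-in model. The biphasic viral-decay curve, noise level, priors and step sizes below are all assumptions for illustration; the actual study uses a full ODE model with interruption phases, drug efficacy and reservoir terms.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(3)

      # Simplified biphasic decay stand-in for the viral-load model (log10 copies/mL).
      def log10_viral_load(t, p):
          A, d1, B, d2 = p
          return np.log10(A * np.exp(-d1 * t) + B * np.exp(-d2 * t))

      t_obs = np.linspace(0, 28, 30)
      p_true = np.array([1e5, 0.5, 1e3, 0.05])
      y_obs = log10_viral_load(t_obs, p_true) + 0.15 * rng.standard_normal(t_obs.size)

      # Step 1: nonlinear least squares for initial estimates (log-parameterised
      # so that all rates and amplitudes stay positive).
      def resid(logp):
          return log10_viral_load(t_obs, np.exp(logp)) - y_obs

      fit = least_squares(resid, x0=np.log([5e4, 0.3, 5e2, 0.02]))
      logp = fit.x

      # Step 2: random-walk Metropolis around the least-squares solution
      # (flat priors on the log-parameters, known observation noise).
      def log_post(lp_vec):
          r = resid(lp_vec)
          return -0.5 * np.sum(r**2) / 0.15**2

      lp = log_post(logp)
      chain = []
      for _ in range(20000):
          prop = logp + 0.02 * rng.standard_normal(4)
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              logp, lp = prop, lp_prop
          chain.append(logp.copy())

      chain = np.exp(np.array(chain)[5000:])   # discard burn-in, back-transform
      print(chain.mean(axis=0), chain.std(axis=0))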

  16. Parameter Optimisation for the Behaviour of Elastic Models over Time

    DEFF Research Database (Denmark)

    Mosegaard, Jesper

    2004-01-01

    Optimisation of parameters for elastic models is essential for comparison or finding equivalent behaviour of elastic models when parameters cannot simply be transferred or converted. This is the case with a large range of commonly used elastic models. In this paper we present a general method...... that will optimise parameters based on the behaviour of the elastic models over time....

  17. Model Identification of Linear Parameter Varying Aircraft Systems

    OpenAIRE

    Fujimore, Atsushi; Ljung, Lennart

    2007-01-01

    This article presents a parameter estimation of continuous-time polytopic models for a linear parameter varying (LPV) system. The prediction error method of linear time invariant (LTI) models is modified for polytopic models. The modified prediction error method is applied to an LPV aircraft system whose varying parameter is the flight velocity and model parameters are the stability and control derivatives (SCDs). In an identification simulation, the polytopic model is more suitable for expre...

  18. Restoration of anterior-posterior rotator cuff force balance improves shoulder function in a rat model of chronic massive tears.

    Science.gov (United States)

    Hsu, Jason E; Reuther, Katherine E; Sarver, Joseph J; Lee, Chang Soo; Thomas, Stephen J; Glaser, David L; Soslowsky, Louis J

    2011-07-01

    The rotator cuff musculature imparts dynamic stability to the glenohumeral joint. In particular, the balance between the subscapularis anteriorly and the infraspinatus posteriorly, often referred to as the rotator cuff "force couple," is critical for concavity compression and concentric rotation of the humeral head. Restoration of this anterior-posterior force balance after chronic, massive rotator cuff tears may allow for deltoid compensation, but no in vivo studies have quantitatively demonstrated an improvement in shoulder function. Our goal was to determine if restoring this balance of forces improves shoulder function after two-tendon rotator cuff tears in a rat model. Forty-eight rats underwent detachment of the supraspinatus and infraspinatus. After four weeks, rats were randomly assigned to three groups: no repair, infraspinatus repair, and two-tendon repair. Quantitative ambulatory measures including medial/lateral forces, braking, propulsion, and step width were significantly different between the infraspinatus and no repair group and similar between the infraspinatus and two-tendon repair groups at almost all time points. These results suggest that repairing the infraspinatus back to its insertion site without repair of the supraspinatus can improve shoulder function to a level similar to repairing both the infraspinatus and supraspinatus tendons. Clinically, a partial repair of the posterior cuff after a two-tendon tear may be sufficient to restore adequate function. An in vivo model system for two-tendon repair of massive rotator cuff tears is presented. Copyright © 2011 Orthopaedic Research Society.

  19. Restoration of Anterior-Posterior Rotator Cuff Force Balance Improves Shoulder Function in a Rat Model of Chronic Massive Tears

    Science.gov (United States)

    Hsu, Jason E.; Reuther, Katherine E.; Sarver, Joseph J.; Lee, Chang Soo; Thomas, Stephen J.; Glaser, David L.; Soslowsky, Louis J.

    2011-01-01

    The rotator cuff musculature imparts dynamic stability to the glenohumeral joint. In particular, the balance between the subscapularis anteriorly and the infraspinatus posteriorly, often referred to as the rotator cuff “force couple,” is critical for concavity compression and concentric rotation of the humeral head. Restoration of this anterior-posterior force balance after chronic, massive rotator cuff tears may allow for deltoid compensation, but no in vivo studies have quantitatively demonstrated an improvement in shoulder function. Our goal was to determine if restoring this balance of forces improves shoulder function after two-tendon rotator cuff tears in a rat model. Forty-eight rats underwent detachment of the supraspinatus and infraspinatus. After four weeks, rats were randomly assigned to three groups: no repair, infraspinatus repair, and two-tendon repair. Quantitative ambulatory measures including medial/lateral forces, braking, propulsion, and step width were significantly different between the infraspinatus and no repair group and similar between the infraspinatus and two-tendon repair groups at almost all time points. These results suggest that repairing the infraspinatus back to its insertion site without repair of the supraspinatus can improve shoulder function to a level similar to repairing both the infraspinatus and supraspinatus tendons. Clinically, a partial repair of the posterior cuff after a two tendon tear may be sufficient to restore adequate function. An in vivo model system for two-tendon repair of massive rotator cuff tears is presented. PMID:21308755

  20. Probabilistic Fatigue Life Prediction of Turbine Disc Considering Model Parameter Uncertainty

    Science.gov (United States)

    He, Liping; Yu, Le; Zhu, Shun-Peng; Ding, Liangliang; Huang, Hong-Zhong

    2016-06-01

    Aiming to improve the predictive ability of the Walker model for fatigue life prediction, and taking the turbine disc alloy GH4133 as the application example, this paper investigates a new approach for probabilistic fatigue life prediction that considers the parameter uncertainty inherent in the life prediction model. Firstly, experimental data are used to update the model parameters using Bayes' theorem, so as to obtain the posterior probability distribution functions of the two parameters of the Walker model, as well as to establish the probabilistic life prediction model for the turbine disc. During the updating process, the Markov Chain Monte Carlo (MCMC) technique is used to generate samples of the given distribution and to estimate the parameters distinctly. After that, the turbine disc life is predicted using the probabilistic Walker model based on the Monte Carlo simulation technique. The experimental results indicate that: (1) after using the small sample of test data obtained from the turbine disc, the parameter uncertainty of the Walker model can be quantified and the corresponding probabilistic model for fatigue life prediction can be established using Bayes' theorem; (2) there exists obvious dispersion of the life data for the turbine disc when predicting fatigue life in practical engineering applications.
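
    A stripped-down version of the update-then-predict procedure can be sketched as follows. The life model here is a simple log-linear (Basquin-type) stand-in for the Walker model with two regression parameters plus a noise term, the fatigue data are invented, and the priors and Metropolis settings are assumptions; only the overall structure (Bayesian updating by MCMC followed by Monte Carlo life prediction) mirrors the record.

      import numpy as np

      rng = np.random.default_rng(4)

      # Invented small-sample fatigue data: stress amplitude (MPa) vs cycles to failure.
      S = np.array([400., 450., 500., 550., 600., 650.])
      N = np.array([2.1e6, 9.0e5, 3.2e5, 1.5e5, 6.1e4, 2.8e4])
      x = np.log10(S) - np.log10(S).mean()      # centred regressor for better mixing
      y = np.log10(N)

      # Stand-in life model: log10 N = c0 + c1 * x, with observation noise sigma.
      def log_post(theta):
          c0, c1, log_sig = theta
          sig = np.exp(log_sig)
          r = y - (c0 + c1 * x)
          loglik = -0.5 * np.sum((r / sig) ** 2) - y.size * np.log(sig)
          logprior = (-0.5 * (c0 / 50) ** 2 - 0.5 * (c1 / 50) ** 2
                      - 0.5 * (log_sig / 5) ** 2)        # weakly informative priors
          return loglik + logprior

      # Random-walk Metropolis updating (the MCMC step of the record).
      theta = np.array([y.mean(), -8.0, np.log(0.2)])
      lp = log_post(theta)
      draws = []
      for _ in range(30000):
          prop = theta + rng.normal(0.0, [0.05, 0.5, 0.2], size=3)
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          draws.append(theta.copy())
      draws = np.array(draws)[10000:]

      # Probabilistic life prediction at a new stress level via Monte Carlo simulation.
      x_new = np.log10(520.0) - np.log10(S).mean()
      life = 10 ** (draws[:, 0] + draws[:, 1] * x_new
                    + np.exp(draws[:, 2]) * rng.standard_normal(len(draws)))
      print(np.percentile(life, [5, 50, 95]))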

  1. [Calculation of parameters in forest evapotranspiration model].

    Science.gov (United States)

    Wang, Anzhi; Pei, Tiefan

    2003-12-01

    Forest evapotranspiration is an important component not only of the water balance, but also of the energy balance. Accurate simulation of forest evapotranspiration is in great demand for the development of forest hydrology and forest meteorology, and it also provides a theoretical basis for the management and utilization of water resources and forest ecosystems. Taking the broadleaved Korean pine forest on Changbai Mountain as an example, this paper constructed a mechanistic model for estimating forest evapotranspiration, based on aerodynamic principles and the energy balance equation. Using the data measured by the Routine Meteorological Measurement System and the Open-Path Eddy Covariance Measurement System mounted on the tower in the broadleaved Korean pine forest, the parameters displacement height d, stability function for momentum phi m, and stability function for heat phi h were determined. The displacement height of the study site was equal to 17.8 m, close to the mean canopy height, and the functions of phi m and phi h changing with the gradient Richardson number R i were constructed.

  2. A Bayesian-based multilevel factorial analysis method for analyzing parameter uncertainty of hydrological model

    Science.gov (United States)

    Liu, Y. R.; Li, Y. P.; Huang, G. H.; Zhang, J. L.; Fan, Y. R.

    2017-10-01

    In this study, a Bayesian-based multilevel factorial analysis (BMFA) method is developed to assess parameter uncertainties and their effects on hydrological model responses. In BMFA, the Differential Evolution Adaptive Metropolis (DREAM) algorithm is employed to approximate the posterior distributions of model parameters with Bayesian inference; the factorial analysis (FA) technique is used for measuring the specific variations of hydrological responses in terms of posterior distributions to investigate the individual and interactive effects of parameters on model outputs. BMFA is then applied to a case study of the Jinghe River watershed in the Loess Plateau of China to demonstrate its validity and applicability. The uncertainties of four sensitive parameters, including the soil conservation service runoff curve number for moisture condition II (CN2), soil hydraulic conductivity (SOL_K), plant available water capacity (SOL_AWC), and soil depth (SOL_Z), are investigated. Results reveal that (i) CN2 has a positive effect on peak flow, implying that the concentrated rainfall during the rainy season can cause infiltration-excess surface flow, which is a considerable contributor to peak flow in this watershed; (ii) SOL_K has a positive effect on average flow, implying that the widely distributed cambisols can lead to medium percolation capacity; (iii) the interaction between SOL_AWC and SOL_Z has a noticeable effect on the peak flow and their effects are dependent upon each other, which discloses that soil depth can significantly influence the processes of plant uptake of soil water in this watershed. Based on the above findings, the significant parameters and the relationships among uncertain parameters can be specified, such that the hydrological model's capability for simulating/predicting water resources of the Jinghe River watershed can be improved.
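
    The coupling of a posterior sample with a two-level factorial analysis can be illustrated on a toy scale. In the sketch below the posterior draws are generated directly (standing in for DREAM output), the response function is an invented surrogate for a simulated peak flow, and the factor levels are posterior quartiles; parameter names are borrowed from the record only for readability.

      import numpy as np
      from itertools import product

      rng = np.random.default_rng(5)

      # Stand-in posterior samples for two parameters (e.g. CN2 and SOL_K);
      # in the paper these would come from DREAM applied to the hydrological model.
      cn2 = rng.normal(75, 3, 2000)
      sol_k = rng.normal(12, 2, 2000)

      # Toy response surface standing in for a simulated peak flow.
      def peak_flow(c, k):
          return 0.8 * c + 0.3 * k + 0.02 * c * k

      # Two-level factorial analysis: levels are the low/high posterior quartiles.
      levels = {"CN2": np.percentile(cn2, [25, 75]),
                "SOL_K": np.percentile(sol_k, [25, 75])}
      runs = []
      for i, j in product([0, 1], repeat=2):
          y = peak_flow(levels["CN2"][i], levels["SOL_K"][j])
          runs.append(((-1) ** (1 - i), (-1) ** (1 - j), y))   # coded -1/+1 levels

      a = np.array(runs)   # columns: coded CN2, coded SOL_K, response
      main_cn2 = a[a[:, 0] == 1, 2].mean() - a[a[:, 0] == -1, 2].mean()
      main_solk = a[a[:, 1] == 1, 2].mean() - a[a[:, 1] == -1, 2].mean()
      interaction = (a[a[:, 0] * a[:, 1] == 1, 2].mean()
                     - a[a[:, 0] * a[:, 1] == -1, 2].mean())
      print(main_cn2, main_solk, interaction)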

  3. Investigating spatial differentiation of model parameters in a carbon cycle data assimilation system

    Science.gov (United States)

    Ziehn, T.; Knorr, W.; Scholze, M.

    2011-06-01

    Better estimates of the net exchange of CO2 between the atmosphere and the terrestrial biosphere are urgently needed to improve predictions of future CO2 levels in the atmosphere. The carbon cycle data assimilation system (CCDAS) offers the capability of inversion, while it is at the same time based on a process model that can be used independent of observational data. CCDAS allows the assimilation of atmospheric CO2 concentrations into the terrestrial biosphere model BETHY, constraining its process parameters via an adjoint approach. Here, we investigate the effect of spatial differentiation of a universal carbon balance parameter of BETHY on posterior net CO2 fluxes and their uncertainties. The parameter, β, determines the characteristics of the slowly decomposing soil carbon pool and represents processes that are difficult to model explicitly. Two cases are studied with an assimilation period of 1979 to 2003. In the base case, there is a separate β for each plant functional type (PFT). In the regionalization case, β is differentiated not only by PFT, but also according to each of 11 large continental regions as used by the TransCom project. We find that the choice of spatial differentiation has a profound impact not only on the posterior (optimized) fluxes and their uncertainties, but even more so on the spatial covariance of the uncertainties. Differences are most pronounced in tropical regions, where observations are sparse. While regionalization leads to an improved fit to the observations by about 20% compared to the base case, we notice large spatial variations in the posterior net CO2 flux on a grid cell level. The results illustrate the need for universal process formulations in global-scale atmospheric CO2 inversion studies, at least as long as the observational network is too sparse to resolve spatial fluctuations at the regional scale.

  4. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty.

    Science.gov (United States)

    Schiavazzi, Daniele E; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L

    2017-03-01

    Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second application to a cohort of four single-ventricle patients with Norwood physiology. Copyright © 2016 John Wiley & Sons, Ltd.
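
    One piece of the pipeline described above, the maximum a posteriori step solved with a Nelder-Mead simplex, can be sketched on a drastically reduced model. The two-element Windkessel-style pressure decay, the noise level and the log-normal priors below are assumptions; notably, in this toy the data constrain only the product R*C, so it is the priors that make the individual parameters identifiable, which is exactly the kind of situation where prior updating and identifiability analysis matter.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(6)

      # Two-element Windkessel-style stand-in: diastolic decay P(t) = P0*exp(-t/(R*C)).
      t = np.linspace(0.0, 0.6, 40)
      R_true, C_true, P0 = 1.2, 1.1, 90.0
      p_obs = P0 * np.exp(-t / (R_true * C_true)) + 1.0 * rng.standard_normal(t.size)

      def neg_log_post(theta):
          logR, logC = theta
          R, C = np.exp(logR), np.exp(logC)
          pred = P0 * np.exp(-t / (R * C))
          nll = 0.5 * np.sum((p_obs - pred) ** 2) / 1.0 ** 2   # Gaussian noise, sd 1 mmHg
          # log-normal priors (assumed) encoding physiological plausibility
          nlp = (0.5 * ((logR - np.log(1.0)) / 0.5) ** 2
                 + 0.5 * ((logC - np.log(1.0)) / 0.5) ** 2)
          return nll + nlp

      res = minimize(neg_log_post, x0=[0.0, 0.0], method="Nelder-Mead")
      print(np.exp(res.x))   # MAP estimates of R and C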

  5. Transfer function modeling of damping mechanisms in distributed parameter models

    Science.gov (United States)

    Slater, J. C.; Inman, D. J.

    1994-01-01

    This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.

  6. On the modeling of internal parameters in hyperelastic biological materials

    CERN Document Server

    Giantesio, Giulia

    2016-01-01

    This paper concerns the behavior of hyperelastic energies depending on an internal parameter. First, the situation in which the internal parameter is a function of the gradient of the deformation is presented. Second, two models where the parameter describes the activation of skeletal muscle tissue are analyzed. In those models, the activation parameter depends on the strain and it is important to consider the derivative of the parameter with respect to the strain in order to capture the proper behavior of the stress.

  7. Determining extreme parameter correlation in ground water models

    DEFF Research Database (Denmark)

    Hill, Mary Cole; Østerby, Ole

    2003-01-01

    In ground water flow system models with hydraulic-head observations but without significant imposed or observed flows, extreme parameter correlation generally exists. As a result, hydraulic conductivity and recharge parameters cannot be uniquely estimated. In complicated problems, such correlation...... correlation coefficients, but it required sensitivities that were one to two significant digits less accurate than those that required using parameter correlation coefficients; and (3) both the SVD and parameter correlation coefficients identified extremely correlated parameters better when the parameters...

  8. Model comparisons and genetic and environmental parameter ...

    African Journals Online (AJOL)

    arc

    South African Journal of Animal Science 2005, 35 (1) ... Genetic and environmental parameters were estimated for pre- and post-weaning average daily gain ..... and BWT (and medium maternal genetic correlations) indicates that these traits ...

  9. NEW DOCTORAL DEGREE Parameter estimation problem in the Weibull model

    OpenAIRE

    Marković, Darija

    2009-01-01

    In this dissertation we consider the problem of the existence of best parameters in the Weibull model, one of the most widely used statistical models in reliability theory and life data theory. Particular attention is given to a 3-parameter Weibull model. We have listed some of the many applications of this model. We have described some of the classical methods for estimating parameters of the Weibull model, two graphical methods (Weibull probability plot and hazard plot), and two analyt...
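
    As a minimal illustration of classical parameter estimation for the 3-parameter Weibull model (not the dissertation's own analysis), scipy's weibull_min distribution exposes shape, location and scale and fits them by maximum likelihood; the simulated lifetimes below are assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)

      # Simulated lifetimes from a 3-parameter Weibull (shape, location, scale).
      data = stats.weibull_min.rvs(c=1.8, loc=10.0, scale=50.0, size=200,
                                   random_state=rng)

      # Maximum likelihood fit of all three parameters.
      shape, loc, scale = stats.weibull_min.fit(data)
      print(shape, loc, scale)

      # Fixing the location at zero recovers the usual 2-parameter model.
      shape2, _, scale2 = stats.weibull_min.fit(data, floc=0.0)
      print(shape2, scale2)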

  10. Parameter optimization model in electrical discharge machining process

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Electrical discharge machining (EDM) is still largely an experience-based process, in which the selected parameters are often far from the optimum, and selecting optimization parameters is costly and time consuming. In this paper, an artificial neural network (ANN) and a genetic algorithm (GA) are used together to establish the parameter optimization model. An ANN model using the Levenberg-Marquardt algorithm has been set up to represent the relationship between the material removal rate (MRR) and the input parameters, and the GA is used to optimize the parameters, so that optimization results are obtained. The model is shown to be effective, and MRR is improved using the optimized machining parameters.
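
    The ANN-plus-GA structure can be sketched as follows: a small neural network is trained as a surrogate for MRR on invented process data, and a simple genetic algorithm then searches the machining-parameter space for the surrogate's maximum. The variable ranges, the stand-in process function and the GA settings are assumptions, and sklearn's MLPRegressor replaces the Levenberg-Marquardt network of the record.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(8)

      # Hypothetical EDM data: [current (A), pulse-on time (us), voltage (V)] -> MRR.
      X = rng.uniform([2, 10, 30], [20, 200, 80], size=(200, 3))
      def true_mrr(x):   # unknown process; used here only to generate training data
          return 0.05 * x[:, 0] * np.log(x[:, 1]) - 0.001 * (x[:, 2] - 50) ** 2
      y = true_mrr(X) + 0.05 * rng.standard_normal(len(X))

      ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                         random_state=0).fit(X, y)

      # Simple genetic algorithm over machining parameters, maximising the surrogate.
      lo, hi = np.array([2, 10, 30]), np.array([20, 200, 80])
      pop = rng.uniform(lo, hi, size=(60, 3))
      for _ in range(100):
          fitness = ann.predict(pop)
          parents = pop[np.argsort(fitness)[-30:]]                 # selection
          kids = parents[rng.integers(0, 30, 60)]                  # reproduction
          alpha = rng.random((60, 1))
          kids = alpha * kids + (1 - alpha) * parents[rng.integers(0, 30, 60)]  # crossover
          kids += rng.normal(0, 0.02, kids.shape) * (hi - lo)      # mutation
          pop = np.clip(kids, lo, hi)

      best = pop[np.argmax(ann.predict(pop))]
      print(best, ann.predict(best[None]))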

  11. Computational modeling of drug distribution in the posterior segment of the eye: effects of device variables and positions.

    Science.gov (United States)

    Jooybar, Elaheh; Abdekhodaie, Mohammad J; Farhadi, Fatolla; Cheng, Yu-Ling

    2014-09-01

    A computational model was developed to simulate drug distribution in the posterior segment of the eye after intravitreal injection and ocular implantation. The effects of important factors in intravitreal injection such as injection time, needle gauge and needle angle on the ocular drug distribution were studied. Also, the influences of the position and the type of implant on the concentration profile in the posterior segment were investigated. Computational Fluid Dynamics (CFD) calculations were conducted to describe the 3D convective-diffusive transport. The geometrical model was constructed based on human eye dimensions. To simulate intravitreal injection, unlike previous studies which considered the initial shape of the injected drug solution as a sphere or cylinder, a more accurate shape was obtained by the level-set method in COMSOL. The results showed that in intravitreal injection the drug concentration profile and its maximum value depended on the injection time, needle gauge and penetration angle of the needle. Considering the actual shape of the injected solution was found necessary to obtain the real concentration profile. In implant insertion, the vitreous cavity received more drug after intraocular implantation, but this method was more invasive compared with periocular delivery. Locating the implant in posterior or anterior regions had a significant effect on local drug concentrations. Also, the shape of the implant influenced the concentration profile inside the eye. The presented model is useful for optimizing the administration variables to ensure optimum therapeutic benefits. Predicting and quantifying different factors helps to reduce the possibility of tissue toxicity and to improve the treatment efficiency.

  12. Sensitivity of a Shallow-Water Model to Parameters

    CERN Document Server

    Kazantsev, Eugene

    2011-01-01

    An adjoint based technique is applied to a shallow water model in order to estimate the influence of the model's parameters on the solution. Among the parameters considered are the bottom topography, initial conditions, boundary conditions on rigid boundaries, viscosity coefficients, the Coriolis parameter and the amplitude of the wind stress. Their influence is analyzed from three points of view: 1. flexibility of the model with respect to a parameter, which is related to the lowest value of the cost function that can be obtained in the data assimilation experiment that controls this parameter; 2. possibility to improve the model by the parameter's control, i.e. whether the solution with the optimal parameter remains close to observations after the end of control; 3. sensitivity of the model solution to the parameter in a classical sense. That implies the analysis of the sensitivity estimates and their comparison with each other and with the local Lyapunov exponents that characterize the sensitivity of the mode...

  13. Assessment of structural model and parameter uncertainty with a multi-model system for soil water balance models

    Science.gov (United States)

    Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz

    2016-04-01

    Water for agriculture is strongly limited in arid and semi-arid regions and often of low quality in terms of salinity. The application of saline waters for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. to wash out salts by additional irrigation. Dynamic simulation models are helpful tools to calculate the root zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on structural and parameter uncertainty for simulations regarding the water and salt balance of saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze the structural and parameter uncertainty by using the Global Likelihood and Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake) which results in a broad range of different model structures. The simulations were evaluated against soil water and salinity content observations. The posterior distribution of the GLUE analysis gives behavioral parameters sets and reveals uncertainty intervals for parameter uncertainty. Throughout all of the model sets, most parameters accounting for the soil water balance show a low uncertainty, only one or two out of five to six parameters in each model set displays a high uncertainty (e.g. pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for deep percolation fluxes between the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm) that are more than twice as high for the latter. The model sets show a high variation in uncertainty intervals for deep percolation as well, with an interquartile range (IQR) of
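
    The GLUE mechanics referred to above (Monte Carlo parameter sampling, an informal likelihood, a behavioural threshold and likelihood-weighted prediction bounds) can be sketched on a toy bucket model standing in for the four soil water balance models; all parameter ranges, the threshold of 0.6 and the synthetic observations are assumptions.

      import numpy as np

      rng = np.random.default_rng(9)

      # Toy daily soil-water bucket with two parameters: storage capacity (mm) and a
      # drainage coefficient.  It stands in for SWAP/Hydrus-1D/AquaCrop/RZWQM here.
      rain = rng.gamma(0.6, 8.0, 120)
      et = 2.5

      def bucket(capacity, k_drain):
          s, out = 60.0, []
          for p in rain:
              s = min(s + p - et, capacity)
              s = max(s - k_drain * s, 0.0)
              out.append(s)
          return np.array(out)

      obs = bucket(110.0, 0.04) + rng.normal(0, 3, 120)

      # GLUE: Monte Carlo sampling, informal likelihood (Nash-Sutcliffe efficiency),
      # behavioural threshold, likelihood-weighted bounds.
      n = 5000
      cap = rng.uniform(50, 200, n)
      kd = rng.uniform(0.005, 0.15, n)
      sims = np.array([bucket(c, k) for c, k in zip(cap, kd)])
      nse = 1 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

      keep = nse > 0.6
      w = nse[keep] - 0.6
      w /= w.sum()
      print("behavioural parameter sets:", keep.sum())

      # Weighted 5-95% uncertainty bounds at each time step.
      order = np.argsort(sims[keep], axis=0)
      sorted_sims = np.take_along_axis(sims[keep], order, axis=0)
      cum_w = np.cumsum(w[order], axis=0)
      cols = np.arange(sims.shape[1])
      lower = sorted_sims[np.argmax(cum_w >= 0.05, axis=0), cols]
      upper = sorted_sims[np.argmax(cum_w >= 0.95, axis=0), cols]
      print(lower[:3], upper[:3])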

  14. Estimation of shape model parameters for 3D surfaces

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen;

    2008-01-01

    Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D s...

  15. Knee model sensitivity to cruciate ligaments parameters: a stability simulation study for a living subject.

    Science.gov (United States)

    Bertozzi, Luigi; Stagni, Rita; Fantozzi, Silvia; Cappello, Angelo

    2007-01-01

    Assessing the biomechanical function of the different anatomical sub-structures of the knee joint under physiological conditions is only possible through a modelling approach. Subject-specific geometries and kinematic data, acquired from the same living subject, were the foundations of the 3D quasi-static knee model developed. Each cruciate ligament was modelled by means of 25 elastic springs, paying attention to the anatomical twisting of the fibres. The sensitivity of the model to the cross-sectional area was assessed during anterior/posterior tibial translations, and the sensitivity to all the cruciate ligament parameters was assessed during internal/external rotations. The model reproduced very well the mechanical behaviour reported in the literature during anterior/posterior translations, in particular when considering 30% of the mean insertional area. During the internal/external tibial rotations, similar behaviour of the axial torques was obtained in the three sensitivity analyses. The overlapping of the ligaments was assessed at about 25 degrees of internal axial rotation. The presented model featured a good level of accuracy in combination with a low computational weight, and it could provide an in vivo estimation of the role of the cruciate ligaments during the execution of daily living activities.

  16. Compositional modelling of distributed-parameter systems

    NARCIS (Netherlands)

    Maschke, Bernhard; Schaft, van der Arjan; Lamnabhi-Lagarrigue, F.; Loría, A.; Panteley, E.

    2005-01-01

    The Hamiltonian formulation of distributed-parameter systems has been a challenging research area for quite some time. (A nice introduction, especially with respect to systems stemming from fluid dynamics, can be found in [26], where also a historical account is provided.) The identification of the

  17. Parameter Estimation and Experimental Design in Groundwater Modeling

    Institute of Scientific and Technical Information of China (English)

    SUN Ne-zheng

    2004-01-01

    This paper reviews the latest developments on parameter estimation and experimental design in the field of groundwater modeling. Special considerations are given when the structure of the identified parameter is complex and unknown. A new methodology for constructing useful groundwater models is described, which is based on the quantitative relationships among the complexity of model structure, the identifiability of parameter, the sufficiency of data, and the reliability of model application.

  18. Bayesian approach to decompression sickness model parameter estimation.

    Science.gov (United States)

    Howle, L E; Weber, P W; Nichols, J M

    2017-03-01

    We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
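
    The contrast drawn above can be made concrete with a toy dose-response style risk model (not an actual decompression sickness model): maximum likelihood yields a single best estimate of the parameters, while a gridded posterior under flat priors yields the full parameter distributions and credible intervals. The functional form, dose range and sample size below are assumptions.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import logsumexp

      rng = np.random.default_rng(10)

      # Toy risk model: P(event) = 1 - exp(-exp(a + b * dose)), binary outcomes.
      dose = rng.uniform(0.5, 3.0, 300)
      a_true, b_true = -5.0, 1.5
      p_true = 1 - np.exp(-np.exp(a_true + b_true * dose))
      event = rng.random(300) < p_true

      def loglik(a, b):
          p = np.clip(1 - np.exp(-np.exp(a + b * dose)), 1e-12, 1 - 1e-12)
          return np.sum(np.where(event, np.log(p), np.log(1 - p)))

      # Maximum likelihood: a single best estimate of (a, b).
      mle = minimize(lambda th: -loglik(*th), x0=[-4.0, 1.0], method="Nelder-Mead")
      print("MLE:", mle.x)

      # Bayesian: evaluate the posterior on a grid (flat priors) and summarise it.
      a_grid = np.linspace(-8, -2, 200)
      b_grid = np.linspace(0.2, 3.0, 200)
      logpost = np.array([[loglik(a, b) for b in b_grid] for a in a_grid])
      post = np.exp(logpost - logsumexp(logpost))

      b_marg = post.sum(axis=0)
      b_cdf = np.cumsum(b_marg) / b_marg.sum()
      ci = (b_grid[np.searchsorted(b_cdf, 0.025)], b_grid[np.searchsorted(b_cdf, 0.975)])
      print("95% credible interval for b:", ci)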

  19. Proton Treatment Techniques for Posterior Fossa Tumors: Consequences for Linear Energy Transfer and Dose-Volume Parameters for the Brainstem and Organs at Risk.

    Science.gov (United States)

    Giantsoudi, Drosoula; Adams, Judith; MacDonald, Shannon M; Paganetti, Harald

    2017-02-01

    In proton therapy of posterior fossa tumors, at least partial inclusion of the brainstem in the target is necessary because of its proximity to the tumor and required margins. Additionally, the preferred beam geometry results in directing the field distal edge toward this critical structure, raising concerns for brainstem toxicity. Some treatment techniques place the beam's distal edge within the brainstem (dose-sparing techniques), and others avoid elevated linear energy transfer (LET) of the proton field by placing the distal edge beyond it (LET-sparing techniques). Hybrid approaches are also being used. We examine the dosimetric efficacy of these techniques, accounting for LET-dependent and dose-dependent variable relative biologic effectiveness (RBE) distributions. Six techniques were applied in ependymoma cases: (a) 3-field dose-sparing; (b) 3-field LET-sparing; (c) 2-field dose-sparing, wide angles; (d) 2-field LET-sparing, wide angles; (e) 2-field LET-sparing, steep angles; and (f) 2-field LET-sparing with feathered distal end. Monte Carlo calculated dose, LET, and RBE-weighted dose distributions were compared. Decreased LET values in the brainstem achieved by LET-sparing techniques were accompanied by higher, though not statistically significant, median doses: 53.6 Gy(RBE), 53.4 Gy(RBE), and 54.3 Gy(RBE) for techniques (b), (d), and (e) versus 52.1 Gy(RBE) for technique (a). Accounting for variable RBE distributions, the brainstem volume receiving at least 55 Gy(RBE) increased from 72.5% for technique (a) to 80.3% for (b), and for technique (c) to 77.6% for (d), for the LET-sparing techniques compared with the corresponding dose-sparing techniques (P=.03 and .004). Extending the proton range beyond the brainstem to reduce LET results in clinically comparable maximum radiobiologic effective dose to this sensitive structure. However, this method significantly increases the brainstem volume receiving RBE-weighted dose higher than 55 Gy(RBE), with possible consequences based on known dose-volume parameters

  20. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    Science.gov (United States)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

    data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices we study the nature of the inverse problems (solution existence, uniqueness and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and rapidity of this approach allow us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.

  1. Parameter and Uncertainty Estimation in Groundwater Modelling

    DEFF Research Database (Denmark)

    Jensen, Jacob Birk

    The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions and if these are to be made on solid grounds, the uncertainty attached to model results must...... be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models.Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study.The following two chapters concern calibration...... and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology...

  2. Parameter redundancy in discrete state‐space and integrated models

    Science.gov (United States)

    McCrea, Rachel S.

    2016-01-01

    Discrete state‐space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state‐space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state‐space models using discrete analogues of methods for continuous state‐space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. PMID:27362826

  3. Parameter redundancy in discrete state-space and integrated models.

    Science.gov (United States)

    Cole, Diana J; McCrea, Rachel S

    2016-09-01

    Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-11-01

    simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.

  5. Ternary interaction parameters in calphad solution models

    Energy Technology Data Exchange (ETDEWEB)

    Eleno, Luiz T.F., E-mail: luizeleno@usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Instituto de Fisica; Schön, Claudio G., E-mail: schoen@usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Computational Materials Science Laboratory. Department of Metallurgical and Materials Engineering

    2014-07-01

    For random, diluted, multicomponent solutions, the excess chemical potentials can be expanded in power series of the composition, with coefficients that are pressure- and temperature-dependent. For a binary system, this approach is equivalent to using polynomial truncated expansions, such as the Redlich-Kister series for describing integral thermodynamic quantities. For ternary systems, an equivalent expansion of the excess chemical potentials clearly justifies the inclusion of ternary interaction parameters, which arise naturally in the form of correction terms in higher-order power expansions. To demonstrate this, we carry out truncated polynomial expansions of the excess chemical potential up to the sixth power of the composition variables. (author)
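
    The structure described above, binary Redlich-Kister expansions supplemented by a ternary correction term, can be written down directly. The sketch below evaluates an excess Gibbs energy for a hypothetical A-B-C solution with invented interaction parameters, simply to show where the ternary parameter enters as a higher-order term.

      import numpy as np

      # Binary Redlich-Kister contribution: x_i * x_j * sum_k L_k * (x_i - x_j)^k
      def rk_binary(xi, xj, L):
          return xi * xj * sum(Lk * (xi - xj) ** k for k, Lk in enumerate(L))

      # Hypothetical interaction parameters (J/mol) for a ternary A-B-C solution.
      L_AB = [-12000.0, 3000.0]        # 0th and 1st order binary parameters
      L_AC = [-8000.0]
      L_BC = [5000.0, -1000.0]
      L_ABC = -4000.0                  # ternary interaction parameter

      def g_excess(xa, xb, xc):
          g = (rk_binary(xa, xb, L_AB)
               + rk_binary(xa, xc, L_AC)
               + rk_binary(xb, xc, L_BC))
          # Ternary correction arising at higher order in the composition expansion.
          g += xa * xb * xc * L_ABC
          return g

      print(g_excess(0.2, 0.3, 0.5))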

  6. Towards a Bayesian total error analysis of conceptual rainfall-runoff models: Characterising model error using storm-dependent parameters

    Science.gov (United States)

    Kuczera, George; Kavetski, Dmitri; Franks, Stewart; Thyer, Mark

    2006-11-01

    SummaryCalibration and prediction in conceptual rainfall-runoff (CRR) modelling is affected by the uncertainty in the observed forcing/response data and the structural error in the model. This study works towards the goal of developing a robust framework for dealing with these sources of error and focuses on model error. The characterisation of model error in CRR modelling has been thwarted by the convenient but indefensible treatment of CRR models as deterministic descriptions of catchment dynamics. This paper argues that the fluxes in CRR models should be treated as stochastic quantities because their estimation involves spatial and temporal averaging. Acceptance that CRR models are intrinsically stochastic paves the way for a more rational characterisation of model error. The hypothesis advanced in this paper is that CRR model error can be characterised by storm-dependent random variation of one or more CRR model parameters. A simple sensitivity analysis is used to identify the parameters most likely to behave stochastically, with variation in these parameters yielding the largest changes in model predictions as measured by the Nash-Sutcliffe criterion. A Bayesian hierarchical model is then formulated to explicitly differentiate between forcing, response and model error. It provides a very general framework for calibration and prediction, as well as for testing hypotheses regarding model structure and data uncertainty. A case study calibrating a six-parameter CRR model to daily data from the Abercrombie catchment (Australia) demonstrates the considerable potential of this approach. Allowing storm-dependent variation in just two model parameters (with one of the parameters characterising model error and the other reflecting input uncertainty) yields a substantially improved model fit raising the Nash-Sutcliffe statistic from 0.74 to 0.94. Of particular significance is the use of posterior diagnostics to test the key assumptions about the data and model errors

  7. Parameter estimation and error analysis in environmental modeling and computation

    Science.gov (United States)

    Kalmaz, E. E.

    1986-01-01

    A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.

  8. Modeling of mouse eye and errors in ocular parameters affecting refractive state

    Science.gov (United States)

    Bawa, Gurinder

    Rodent eyes are widely used to study the refractive state of the eye and the development of refractive errors. The genetic organization of rodents is similar to that of humans, which makes them interesting candidates for research. Within the rodent family, mouse models are favored over rats because of the availability of genetically engineered models. Despite the extensive work that has been performed on mouse and rat models, no one has yet been able to quantify an optical model, due to the variability in the reported ocular parameters. In this Dissertation, we have extracted ocular parameters and generated schematics of the eye from raw data provided by the School of Medicine, Detroit. In order to see how rays travel through the eye and to examine the defects associated with it, ray tracing has been performed using the ocular parameters. Finally, we have systematically evaluated the contribution of various ocular parameters, such as the radii of curvature of the ocular surfaces, the thicknesses of the ocular components, and the refractive indices of the ocular refractive media, using variational analysis and a computational model of the rodent eye. The variational analysis revealed that variation in all of the ocular parameters affects the refractive status of the eye, but depending upon the magnitude of the impact these parameters are classified as critical or non-critical. Variation in the depth of the vitreous chamber, the thickness of the lens, the radius of the anterior surface of the cornea, the radius of the anterior surface of the lens, as well as the refractive indices of the lens and vitreous, appears to have the largest impact on the refractive error, and thus these are categorized as critical ocular parameters. The radii of the posterior surfaces of the cornea and lens have much smaller contributions to the refractive state, while the radii of the anterior and posterior surfaces of the retina have no effect on the refractive error. These data provide the framework for further refinement of the optical models of the rat and mouse

  9. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    Directory of Open Access Journals (Sweden)

    Jonathan R Karr

    2015-05-01

    Full Text Available Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  10. GIS-Based Hydrogeological-Parameter Modeling

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A regression model is proposed to relate the variation of water well depth to topographic properties (area and slope), the variation of hydraulic conductivity, and the vertical decay factor. The implementation of this model in a GIS environment (ARC/INFO), based on known water well data and a DEM, is used to estimate the variation of hydraulic conductivity and the decay factor of different lithology units in a watershed context.
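
    A bare-bones version of the proposed regression, outside of any GIS, is a least-squares fit of well depth against DEM-derived attributes; the synthetic attributes, coefficients and noise below are assumptions, and in practice the fit would be repeated per lithology unit within ARC/INFO.

      import numpy as np

      rng = np.random.default_rng(11)

      # Hypothetical per-well attributes extracted from a DEM: upslope area (m^2),
      # surface slope (-), and observed depth to water (m).
      area = rng.lognormal(10, 1, 150)
      slope = rng.uniform(0.01, 0.3, 150)
      depth = 5 + 2.0 * np.log(area) - 12.0 * slope + rng.normal(0, 1.5, 150)

      # Linear regression depth ~ log(area) + slope; the fitted coefficients are what
      # would be interpreted in terms of hydraulic conductivity and the decay factor.
      X = np.column_stack([np.ones_like(area), np.log(area), slope])
      coef, *_ = np.linalg.lstsq(X, depth, rcond=None)
      print(coef)

      # Per-lithology fits would repeat this with the wells grouped by geological unit.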

  11. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens;

    2016-01-01

    A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests, or p...

  12. Mirror symmetry for two parameter models, 2

    CERN Document Server

    Candelas, Philip; Katz, S; Morrison, Douglas Robert Ogston; Philip Candelas; Anamaria Font; Sheldon Katz; David R Morrison

    1994-01-01

    We describe in detail the space of the two K\\"ahler parameters of the Calabi--Yau manifold \\P_4^{(1,1,1,6,9)}[18] by exploiting mirror symmetry. The large complex structure limit of the mirror, which corresponds to the classical large radius limit, is found by studying the monodromy of the periods about the discriminant locus, the boundary of the moduli space corresponding to singular Calabi--Yau manifolds. A symplectic basis of periods is found and the action of the Sp(6,\\Z) generators of the modular group is determined. From the mirror map we compute the instanton expansion of the Yukawa couplings and the generalized N=2 index, arriving at the numbers of instantons of genus zero and genus one of each degree. We also investigate an SL(2,\\Z) symmetry that acts on a boundary of the moduli space.

  13. Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.

    Science.gov (United States)

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…

  14. On linear models and parameter identifiability in experimental biological systems.

    Science.gov (United States)

    Lamberton, Timothy O; Condon, Nicholas D; Stow, Jennifer L; Hamilton, Nicholas A

    2014-10-07

    A key problem in the biological sciences is to be able to reliably estimate model parameters from experimental data. This is the well-known problem of parameter identifiability. Here, methods are developed for biologists and other modelers to design optimal experiments to ensure parameter identifiability at a structural level. The main results of the paper are to provide a general methodology for extracting parameters of linear models from an experimentally measured scalar function - the transfer function - and a framework for the identifiability analysis of complex model structures using linked models. Linked models are composed by letting the output of one model become the input to another model which is then experimentally measured. The linked model framework is shown to be applicable to designing experiments to identify the measured sub-model and recover the input from the unmeasured sub-model, even in cases that the unmeasured sub-model is not identifiable. Applications for a set of common model features are demonstrated, and the results combined in an example application to a real-world experimental system. These applications emphasize the insight into answering "where to measure" and "which experimental scheme" questions provided by both the parameter extraction methodology and the linked model framework. The aim is to demonstrate the tools' usefulness in guiding experimental design to maximize parameter information obtained, based on the model structure.

  15. A Computational Model for Spatial Navigation Based on Reference Frames in the Hippocampus, Retrosplenial Cortex, and Posterior Parietal Cortex

    Science.gov (United States)

    Oess, Timo; Krichmar, Jeffrey L.; Röhrbein, Florian

    2017-01-01

    Behavioral studies for humans, monkeys, and rats have shown that, while traversing an environment, these mammals tend to use different frames of reference and frequently switch between them. These frames represent allocentric, egocentric, or route-centric views of the environment. However, combinations of either of them are often deployed. Neurophysiological studies on rats have indicated that the hippocampus, the retrosplenial cortex, and the posterior parietal cortex contribute to the formation of these frames and mediate the transformation between them. In this paper, we construct a computational model of the posterior parietal cortex and the retrosplenial cortex for spatial navigation. We demonstrate how the transformation of reference frames could be realized in the brain and suggest how different brain areas might use these reference frames to form navigational strategies and predict under what conditions an animal might use a specific type of reference frame. Our simulated navigation experiments demonstrate that the model's results closely resemble behavioral findings in humans and rats. These results suggest that navigation strategies may depend on the animal's reliance on a particular reference frame and show how low confidence in a reference frame can lead to fluid adaptation and deployment of alternative navigation strategies. Because of its flexibility, our biologically inspired navigation system may be applied to autonomous robots. PMID:28223931

  16. Kisspeptin mRNA expression is increased in the posterior hypothalamus in the rat model of polycystic ovary syndrome.

    Science.gov (United States)

    Matsuzaki, Toshiya; Tungalagsuvd, Altankhuu; Iwasa, Takeshi; Munkhzaya, Munkhsaikhan; Yanagihara, Rie; Tokui, Takako; Yano, Kiyohito; Mayila, Yiliyasi; Kato, Takeshi; Kuwahara, Akira; Matsui, Sumika; Irahara, Minoru

    2017-01-30

    Hypersecretion of luteinizing hormone (LH) is a common endocrinological finding of polycystic ovary syndrome (PCOS). This derangement might have a close relationship with hypothalamic kisspeptin expression, which is thought to be a key regulator of gonadotropin-releasing hormone (GnRH). We evaluated the relationship between the hypothalamic-pituitary-gonadal axis (HPG axis) and kisspeptin using a rat model of PCOS induced by letrozole. Letrozole pellets (0.4 mg/day) and control pellets were placed subcutaneously onto the backs of 3-week-old female Wistar rats. Body weight, vaginal opening and vaginal smear were checked daily. Blood and tissues of ovary, uterus and brain were collected at 12 weeks of age. A hypothalamic block was cut into anterior and posterior blocks, which included the anteroventral periventricular nucleus (AVPV) and the arcuate nucleus (ARC), respectively, in order to estimate hypothalamic kisspeptin expression in each area. The letrozole group showed a similar phenotype to human PCOS, such as heavier body weight, heavier ovary, persistent anovulatory state, multiple enlarged follicles with no corpus luteum, and higher LH and testosterone (T) levels compared to the control group. Kisspeptin mRNA expression in the posterior hypothalamic block including ARC was higher in the letrozole group than in the control group, although its expression in the anterior hypothalamic block was similar between groups. These results suggest that enhanced KNDy neuron activity in ARC contributes to hypersecretion of LH in PCOS and might be a therapeutic target to rescue ovulatory disorder of PCOS in the future.

  17. CHAMP: Changepoint Detection Using Approximate Model Parameters

    Science.gov (United States)

    2014-06-01

    The changepoint positions are modeled as a Markov chain in which the transition probabilities are defined by the time since the last changepoint: p(τ_{i+1} = t | τ_i = s) = g(t − s) (Eq. 1). The results are experimentally verified using artificially generated data and are compared to those of Fearnhead and Liu [5]. Related work discussed includes Hidden Markov Models (HMMs). The algorithm takes as input a length parameter α and a maximum number of particles M, and outputs the Viterbi path of changepoint times and models.
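
    A minimal sketch of the transition relation quoted above, not the CHAMP implementation: the probability of the next changepoint depends only on the elapsed time since the last one. The geometric segment-length distribution `g_geometric` and the value of its parameter are illustrative assumptions.

```python
import numpy as np

def g_geometric(length, p=0.05):
    """Probability that a segment lasts exactly `length` steps (length >= 1)."""
    return p * (1.0 - p) ** (length - 1)

def transition_prob(t, s, p=0.05):
    """p(next changepoint at time t | previous changepoint at time s) = g(t - s)."""
    if t <= s:
        return 0.0
    return g_geometric(t - s, p)

# Example: probability of the next changepoint at each of the next 10 steps after s = 0.
probs = np.array([transition_prob(t, 0) for t in range(1, 11)])
print(probs, probs.sum())
```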

  18. Abnormal connection between lateral and posterior semicircular canal revealed by a new modeling process: origin and physiological consequences.

    Science.gov (United States)

    Rousie, Dominique Louise; Deroubaix, Jean Paul; Joly, Olivier; Baudrillard, Jean Claude; Berthoz, Alain

    2009-05-01

    We developed a modeling procedure using CT scans or MRI data for exploring the bony and lymphatic canals of vestibular patients. We submitted 445 patients with instability and spatial disorientation to this procedure. Out of the 445 patients, 95 had scoliosis; some of them, because malformations were suspected, also had CT-scan modeling and functional tests. We focused on a never-described abnormal connection between the lymphatic lateral and posterior canal (LPCC) with a frequency of 67/445 (15%). In the scoliosis subgroup, the frequency was 52/95 (55%). Three scoliotic patients had CT scans. For each of them, the modeling revealed that the LPCC was present on the bony canals. LPCC has pathognomonic signs: no rotatory vertigo but frequent instability, transport sickness, head tilt on the side of the anomaly, and spatial disorientation in new environments. We evaluated the functional impact of LPCC by testing the vestibulo-ocular reflex (VOR) in the horizontal and vertical planes and found reproducible abnormal responses: in the case of left LPCC, during a counterclockwise horizontal rotation or after a clockwise horizontal rotation, added to the expected horizontal nystagmus, we found an unexpected upbeat nystagmus induced by the ampullofugal displacement of the fluid in the posterior canal. As LPCC was found in CT scan and MRI modeling for the same subject, we suggest that it could be a congenital abnormal process of ossification of the canals. The responses to the vestibular tests highlighting constant unexpected nystagmus underline the potential functional consequences of LPCC on vestibular perception and scoliosis.

  19. WINKLER'S SINGLE-PARAMETER SUBGRADE MODEL FROM ...

    African Journals Online (AJOL)

    [3, 9]. However, mainly due to the simplicity of Winkler's model in practical applications and .... this case, the coefficient B takes the dimension of a ... In plane-strain problems, the assumption of ... loaded circular region; s is the radial coordinate.

  20. Improved Methodology for Parameter Inference in Nonlinear, Hydrologic Regression Models

    Science.gov (United States)

    Bates, Bryson C.

    1992-01-01

    A new method is developed for the construction of reliable marginal confidence intervals and joint confidence regions for the parameters of nonlinear, hydrologic regression models. A parameter power transformation is combined with measures of the asymptotic bias and asymptotic skewness of maximum likelihood estimators to determine the transformation constants which cause the bias or skewness to vanish. These optimized constants are used to construct confidence intervals and regions for the transformed model parameters using linear regression theory. The resulting confidence intervals and regions can be easily mapped into the original parameter space to give close approximations to likelihood method confidence intervals and regions for the model parameters. Unlike many other approaches to parameter transformation, the procedure does not use a grid search to find the optimal transformation constants. An example involving the fitting of the Michaelis-Menten model to velocity-discharge data from an Australian gauging station is used to illustrate the usefulness of the methodology.
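
    As a concrete illustration of the kind of nonlinear regression described above, the hedged sketch below fits the Michaelis-Menten model to velocity-discharge data with SciPy. The data values and the parameter names (vmax, k) are illustrative stand-ins, not figures from the Australian gauging-station example.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(q, vmax, k):
    """Velocity as a saturating function of discharge q."""
    return vmax * q / (k + q)

# Synthetic velocity-discharge observations (illustrative only).
q_obs = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
v_obs = np.array([0.35, 0.62, 0.85, 1.05, 1.18, 1.27])

popt, pcov = curve_fit(michaelis_menten, q_obs, v_obs, p0=[1.5, 10.0])
perr = np.sqrt(np.diag(pcov))   # asymptotic standard errors of the estimates
print("vmax, k =", popt, "+/-", perr)
```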

  1. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
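
    The following sketch mirrors the simulate-then-estimate loop described above: concentration data are generated from a simple instantaneous-release diffusion model, Gaussian "sensor" noise is added, and the parameters are recovered by batch least squares. The one-dimensional Gaussian-puff form and all numerical values are assumptions made for illustration; they are not the paper's two-dimensional shear-diffusion model.

```python
import numpy as np
from scipy.optimize import least_squares

def concentration(x, t, mass, diff):
    """1-D instantaneous release at x = 0, t = 0 with diffusivity `diff`."""
    return mass / np.sqrt(4 * np.pi * diff * t) * np.exp(-x**2 / (4 * diff * t))

rng = np.random.default_rng(0)
x = np.linspace(-50, 50, 41)          # sensor locations
t = 100.0                             # observation time
true = dict(mass=10.0, diff=2.5)
data = concentration(x, t, **true) + rng.normal(0, 0.01, x.size)  # noisy "remote-sensed" data

residuals = lambda p: concentration(x, t, p[0], p[1]) - data
fit = least_squares(residuals, x0=[5.0, 1.0])
print("estimated mass, diffusivity:", fit.x)
```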

  2. On retrial queueing model with fuzzy parameters

    Science.gov (United States)

    Ke, Jau-Chuan; Huang, Hsin-I.; Lin, Chuen-Horng

    2007-01-01

    This work constructs the membership functions of the system characteristics of a retrial queueing model with fuzzy customer arrival, retrial and service rates. The α-cut approach is used to transform a fuzzy retrial-queue into a family of conventional crisp retrial queues in this context. By means of the membership functions of the system characteristics, a set of parametric non-linear programs is developed to describe the family of crisp retrial queues. A numerical example is solved successfully to illustrate the validity of the proposed approach. Because the system characteristics are expressed and governed by the membership functions, more information is provided for use by management. By extending this model to the fuzzy environment, fuzzy retrial-queue is represented more accurately and analytic results are more useful for system designers and practitioners.

  3. Solar parameters for modeling interplanetary background

    CERN Document Server

    Bzowski, M; Tokumaru, M; Fujiki, K; Quemerais, E; Lallement, R; Ferron, S; Bochsler, P; McComas, D J

    2011-01-01

    The goal of the Fully Online Datacenter of Ultraviolet Emissions (FONDUE) Working Team of the International Space Science Institute in Bern, Switzerland, was to establish a common calibration of various UV and EUV heliospheric observations, both spectroscopic and photometric. Realization of this goal required an up-to-date model of spatial distribution of neutral interstellar hydrogen in the heliosphere, and to that end, a credible model of the radiation pressure and ionization processes was needed. This chapter describes the solar factors shaping the distribution of neutral interstellar H in the heliosphere. Presented are the solar Lyman-alpha flux and the solar Lyman-alpha resonant radiation pressure force acting on neutral H atoms in the heliosphere, solar EUV radiation and the photoionization of heliospheric hydrogen, and their evolution in time and the still hypothetical variation with heliolatitude. Further, solar wind and its evolution with solar activity is presented in the context of the charge excha...

  4. Linear Sigma Models With Strongly Coupled Phases -- One Parameter Models

    CERN Document Server

    Hori, Kentaro

    2013-01-01

    We systematically construct a class of two-dimensional $(2,2)$ supersymmetric gauged linear sigma models with phases in which a continuous subgroup of the gauge group is totally unbroken. We study some of their properties by employing a recently developed technique. The focus of the present work is on models with one K\\"ahler parameter. The models include those corresponding to Calabi-Yau threefolds, extending three examples found earlier by a few more, as well as Calabi-Yau manifolds of other dimensions and non-Calabi-Yau manifolds. The construction leads to predictions of equivalences of D-brane categories, systematically extending earlier examples. There is another type of surprise. Two distinct superconformal field theories corresponding to Calabi-Yau threefolds with different Hodge numbers, $h^{2,1}=23$ versus $h^{2,1}=59$, have exactly the same quantum K\\"ahler moduli space. The strong-weak duality plays a crucial r\\^ole in confirming this, and also is useful in the actual computation of the metric on t...

  5. Parameter identification in tidal models with uncertain boundaries

    NARCIS (Netherlands)

    Bagchi, Arunabha; ten Brummelhuis, P.G.J.; ten Brummelhuis, Paul

    1994-01-01

    In this paper we consider a simultaneous state and parameter estimation procedure for tidal models with random inputs, which is formulated as a minimization problem. It is assumed that some model parameters are unknown and that the random noise inputs only act upon the open boundaries. The

  6. Exploring the interdependencies between parameters in a material model.

    Energy Technology Data Exchange (ETDEWEB)

    Silling, Stewart Andrew; Fermen-Coker, Muge

    2014-01-01

    A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.

  7. An Alternative Three-Parameter Logistic Item Response Model.

    Science.gov (United States)

    Pashley, Peter J.

    Birnbaum's three-parameter logistic function has become a common basis for item response theory modeling, especially within situations where significant guessing behavior is evident. This model is formed through a linear transformation of the two-parameter logistic function in order to facilitate a lower asymptote. This paper discusses an…

  8. Parameter identification in tidal models with uncertain boundaries

    NARCIS (Netherlands)

    Bagchi, Arunabha; Brummelhuis, ten Paul

    1994-01-01

    In this paper we consider a simultaneous state and parameter estimation procedure for tidal models with random inputs, which is formulated as a minimization problem. It is assumed that some model parameters are unknown and that the random noise inputs only act upon the open boundaries. The hyperboli

  9. A compact cyclic plasticity model with parameter evolution

    DEFF Research Database (Denmark)

    Krenk, Steen; Tidemann, L.

    2017-01-01

    , and it is demonstrated that this simple formulation enables very accurate representation of experimental results. An extension of the theory to account for model parameter evolution effects, e.g. in the form of changing yield level, is included in the form of extended evolution equations for the model parameters...

  10. Regionalization of SWAT Model Parameters for Use in Ungauged Watersheds

    Directory of Open Access Journals (Sweden)

    Indrajeet Chaubey

    2010-11-01

    Full Text Available There has been a steady shift towards modeling and model-based approaches as primary methods of assessing watershed response to hydrologic inputs and land management, and of quantifying watershed-wide best management practice (BMP) effectiveness. Watershed models often require some degree of calibration and validation to achieve adequate watershed and therefore BMP representation. This is, however, only possible for gauged watersheds. There are many watersheds for which very little or no monitoring data are available, which raises the question of whether model parameters obtained through calibration of gauged watersheds can be extended and/or generalized to ungauged watersheds within the same region. This study explored the possibility of developing regionalized model parameter sets for use in ungauged watersheds. The study evaluated two regionalization methods, global averaging and regression-based parameters, on the SWAT model using data from priority watersheds in Arkansas. Resulting parameters were tested and model performance determined on three gauged watersheds. Nash-Sutcliffe efficiencies (NS) for stream flow obtained using regression-based parameters (0.53–0.83) compared well with corresponding values obtained through model calibration (0.45–0.90). Model performance obtained using global averaged parameter values was also generally acceptable (0.4 ≤ NS ≤ 0.75). Results from this study indicate that regionalized parameter sets for the SWAT model can be obtained and used for making satisfactory hydrologic response predictions in ungauged watersheds.
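
    For reference, a hedged sketch of the Nash-Sutcliffe efficiency used above to score the regionalized parameter sets; values near 1 indicate close agreement with observed streamflow. The example flows are invented and are not from the Arkansas watersheds.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Illustrative observed and simulated flows.
obs = [12.0, 30.5, 22.1, 8.7, 15.2]
sim = [10.8, 28.0, 24.3, 9.5, 14.1]
print(round(nash_sutcliffe(obs, sim), 3))
```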

  11. NWP model forecast skill optimization via closure parameter variations

    Science.gov (United States)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to manually specify the numerical parameter values, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of the NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
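
    A schematic sketch of the two EPPES steps described above, not the operational code: (i) draw one parameter vector per ensemble member from a Gaussian proposal, and (ii) re-weight the proposal using each member's likelihood against verifying observations. The toy log-likelihood, the two-parameter setup, and all numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
mean, cov = np.array([1.0, 0.5]), np.diag([0.2, 0.1])   # proposal for two closure parameters

def log_likelihood(theta):
    # stand-in skill score: penalize distance from an optimum unknown to the scheme
    target = np.array([1.4, 0.3])
    return -0.5 * np.sum((theta - target) ** 2 / 0.05)

for cycle in range(20):                                        # successive forecast cycles
    ensemble = rng.multivariate_normal(mean, cov, size=50)     # step (i): perturbed parameters
    w = np.exp([log_likelihood(th) for th in ensemble])
    w /= w.sum()
    mean = w @ ensemble                                        # step (ii): feed back relative merits
    diff = ensemble - mean
    cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(2)      # keep the proposal non-degenerate

print("updated proposal mean:", mean.round(3))
```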

  12. A multidimensional item response model : Constrained latent class analysis using the Gibbs sampler and posterior predictive checks

    NARCIS (Netherlands)

    Hoijtink, H; Molenaar, IW

    1997-01-01

    In this paper it will be shown that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. The parameters of this latent class model will be estimated using an application of the Gibbs sampler. It will be illust

  13. Some tests for parameter constancy in cointegrated VAR-models

    DEFF Research Database (Denmark)

    Hansen, Henrik; Johansen, Søren

    1999-01-01

    Some methods for the evaluation of parameter constancy in vector autoregressive (VAR) models are discussed. Two different ways of re-estimating the VAR model are proposed; one in which all parameters are estimated recursively based upon the likelihood function for the first observations, and anot...... be applied to test the constancy of the long-run parameters in the cointegrated VAR-model. All results are illustrated using a model for the term structure of interest rates on US Treasury securities. ...

  14. Spatio-temporal modeling of nonlinear distributed parameter systems

    CERN Document Server

    Li, Han-Xiong

    2011-01-01

    The purpose of this volume is to provide a brief review of the previous work on model reduction and identification of distributed parameter systems (DPS), and develop new spatio-temporal models and their relevant identification approaches. In this book, a systematic overview and classification on the modeling of DPS is presented first, which includes model reduction, parameter estimation and system identification. Next, a class of block-oriented nonlinear systems in traditional lumped parameter systems (LPS) is extended to DPS, which results in the spatio-temporal Wiener and Hammerstein s

  15. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models

    Directory of Open Access Journals (Sweden)

    Baker Syed

    2011-01-01

    Full Text Available Abstract In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF, rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison.

  16. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models.

    Science.gov (United States)

    Baker, Syed Murtuza; Poskar, C Hart; Junker, Björn H

    2011-10-11

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison.
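
    A simplified sketch of an orthogonal-projection identifiability screen of the kind referred to above; it is not the authors' UKF-integrated implementation. Columns of a sensitivity matrix S (d output / d parameter) are ranked, and a parameter is kept only if its column has a sufficiently large component orthogonal to the columns already kept. The toy matrix and tolerance are assumptions.

```python
import numpy as np

def orthogonal_identifiable(S, tol=1e-3):
    """Greedy selection of practically identifiable parameter indices from sensitivities S."""
    S = np.asarray(S, dtype=float)
    remaining = list(range(S.shape[1]))
    selected, basis = [], np.zeros((S.shape[0], 0))
    while remaining:
        # projector onto the subspace spanned by the already-selected sensitivity columns
        proj = basis @ np.linalg.pinv(basis) if basis.size else np.zeros((S.shape[0], S.shape[0]))
        residual_norms = [np.linalg.norm(S[:, j] - proj @ S[:, j]) for j in remaining]
        best = int(np.argmax(residual_norms))
        if residual_norms[best] < tol:
            break                       # remaining parameters are practically non-identifiable
        j = remaining.pop(best)
        selected.append(j)
        basis = np.column_stack([basis, S[:, j]]) if basis.size else S[:, [j]]
    return selected

S = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [0.0, 0.0, 1e-6]])  # toy sensitivities
print("identifiable parameter indices:", orthogonal_identifiable(S))
```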

  17. Femoral Graft-Tunnel Angles in Posterior Cruciate Ligament Reconstruction: Analysis with 3-Dimensional Models and Cadaveric Experiments

    Science.gov (United States)

    Kim, Sung-Jae; Chun, Yong-Min; Moon, Hong-Kyo; Jang, Jae-Won

    2013-01-01

    Purpose The purpose of this study was to compare four graft-tunnel angles (GTA), the femoral GTA formed by three different femoral tunneling techniques (the outside-in, a modified inside-out technique in the posterior sag position with knee hyperflexion, and the conventional inside-out technique) and the tibia GTA in 3-dimensional (3D) knee flexion models, as well as to examine the influence of femoral tunneling techniques on the contact pressure between the intra-articular aperture of the femoral tunnel and the graft. Materials and Methods Twelve cadaveric knees were tested. Computed tomography scans were performed at different knee flexion angles (0°, 45°, 90°, and 120°). Femoral and tibial GTAs were measured at different knee flexion angles on the 3D knee models. Using pressure sensitive films, stress on the graft of the angulation of the femoral tunnel aperture was measured in posterior cruciate ligament reconstructed cadaveric knees. Results Between 45° and 120° of knee flexion, there were no significant differences between the outside-in and modified inside-out techniques. However, the femoral GTA for the conventional inside-out technique was significantly less than that for the other two techniques (p<0.001). In cadaveric experiments using pressure-sensitive film, the maximum contact pressure for the modified inside-out and outside-in technique was significantly lower than that for the conventional inside-out technique (p=0.024 and p=0.017). Conclusion The conventional inside-out technique results in a significantly lesser GTA and higher stress at the intra-articular aperture of the femoral tunnel than the outside-in technique. However, the results for the modified inside-out technique are similar to those for the outside-in technique. PMID:23709438

  18. Relationship between Cole-Cole model parameters and spectral decomposition parameters derived from SIP data

    Science.gov (United States)

    Weigand, M.; Kemna, A.

    2016-06-01

    Spectral induced polarization (SIP) data are commonly analysed using phenomenological models. Among these models the Cole-Cole (CC) model is the most popular choice to describe the strength and frequency dependence of distinct polarization peaks in the data. More flexibility regarding the shape of the spectrum is provided by decomposition schemes. Here the spectral response is decomposed into individual responses of a chosen elementary relaxation model, mathematically acting as kernel in the involved integral, based on a broad range of relaxation times. A frequently used kernel function is the Debye model, but the CC model with some other a priori specified frequency dispersion (e.g. the Warburg model) has also been proposed as kernel in the decomposition. The different decomposition approaches in use, also including conductivity and resistivity formulations, raise the question of the degree to which the integral spectral parameters typically derived from the obtained relaxation time distribution are biased by the approach itself. Based on synthetic SIP data sampled from an ideal CC response, we here investigate how the two most important integral output parameters deviate from the corresponding CC input parameters. We find that the total chargeability may be underestimated by up to 80 per cent and the mean relaxation time may be off by up to three orders of magnitude relative to the original values, depending on the frequency dispersion of the analysed spectrum and the proximity of its peak to the frequency range limits considered in the decomposition. We conclude that a quantitative comparison of SIP parameters across different studies, or the adoption of parameter relationships from other studies, for example when transferring laboratory results to the field, is only possible on the basis of a consistent spectral analysis procedure. This is particularly important when comparing effective CC parameters with spectral parameters derived from decomposition results.
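
    For orientation, a hedged sketch evaluating a Cole-Cole complex resistivity spectrum in the Pelton form, the model whose parameters (chargeability m, relaxation time tau, dispersion exponent c) are discussed above. The parameter values are arbitrary examples, not data from the study.

```python
import numpy as np

def cole_cole(freq, rho0=100.0, m=0.1, tau=0.01, c=0.5):
    """Complex resistivity rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + (i*omega*tau)**c)))."""
    omega = 2 * np.pi * np.asarray(freq)
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

freqs = np.logspace(-2, 4, 7)                      # 0.01 Hz to 10 kHz
rho = cole_cole(freqs)
phase_mrad = 1e3 * np.angle(rho)                   # phase in milliradians
print(np.column_stack([freqs, np.abs(rho), phase_mrad]))
```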

  19. Identification of hydrological model parameter variation using ensemble Kalman filter

    Science.gov (United States)

    Deng, Chao; Liu, Pan; Guo, Shenglian; Li, Zejun; Wang, Dingbao

    2016-12-01

    Hydrological model parameters play an important role in a model's predictive ability. In a stationary context, parameters of hydrological models are treated as constants; however, model parameters may vary with time under climate change and anthropogenic activities. The technique of the ensemble Kalman filter (EnKF) is proposed to identify the temporal variation of parameters for a two-parameter monthly water balance model (TWBM) by assimilating the runoff observations. Through a synthetic experiment, the proposed method is evaluated with time-invariant (i.e., constant) parameters and different types of parameter variations, including trend, abrupt change and periodicity. Various levels of observation uncertainty are designed to examine the performance of the EnKF. The results show that the EnKF can successfully capture the temporal variations of the model parameters. The application to the Wudinghe basin shows that the water storage capacity (SC) of the TWBM model has an apparent increasing trend during the period from 1958 to 2000. The identified temporal variation of SC is explained by land use and land cover changes due to soil and water conservation measures. In contrast, the application to the Tongtianhe basin shows that the estimated SC has no significant variation during the simulation period of 1982-2013, corresponding to the relatively stationary catchment properties. The evapotranspiration parameter (C) has temporal variations while no obvious change patterns exist. The proposed method provides an effective tool for quantifying the temporal variations of the model parameters, thereby improving the accuracy and reliability of model simulations and forecasts.
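
    A compact sketch of the state-augmentation idea underlying such EnKF parameter tracking: the uncertain parameter is appended to the state, an ensemble is propagated, and both are updated with a gain computed from ensemble covariances. The scalar toy dynamics, noise levels, and jitter are assumptions for illustration; this is not the TWBM water balance model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens, n_steps = 100, 60
true_a = 0.8                                     # "true" model parameter to be tracked

# augmented ensemble: column 0 = state x, column 1 = parameter a
ens = np.column_stack([rng.normal(0, 1, n_ens), rng.normal(0.5, 0.2, n_ens)])
x_true, obs_err = 1.0, 0.1

for k in range(n_steps):
    x_true = true_a * x_true + 1.0                                        # truth run
    y = x_true + rng.normal(0, obs_err)                                   # noisy observation
    ens[:, 0] = ens[:, 1] * ens[:, 0] + 1.0 + rng.normal(0, 0.05, n_ens)  # ensemble forecast
    ens[:, 1] += rng.normal(0, 0.01, n_ens)                               # small parameter jitter
    # Kalman update of the augmented state from ensemble statistics
    anom = ens - ens.mean(axis=0)
    P_xy = anom.T @ anom[:, 0] / (n_ens - 1)              # cov(augmented state, predicted obs)
    P_yy = anom[:, 0] @ anom[:, 0] / (n_ens - 1) + obs_err**2
    gain = P_xy / P_yy
    ens += np.outer(y + rng.normal(0, obs_err, n_ens) - ens[:, 0], gain)  # stochastic EnKF update

print("estimated parameter:", ens[:, 1].mean().round(3), "truth:", true_a)
```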

  20. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model is the representation of the relationship between independent variables and a dependent variable. When the dependent variable is categorical, a logistic regression model is used to calculate the odds for each category; when those categories are ordered, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine population values based on a sample. The purpose of this research is parameter estimation of the GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units used are 144 villages in Semarang City. The results of the research give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  1. Universally sloppy parameter sensitivities in systems biology models.

    Directory of Open Access Journals (Sweden)

    Ryan N Gutenkunst

    2007-10-01

    Full Text Available Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
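
    A minimal sketch of a "sloppiness" check in the spirit of the study above: form the Fisher-information-like matrix J^T J from the Jacobian of model outputs with respect to parameters and inspect how its eigenvalues spread over many decades. The randomly generated Jacobian is a stand-in for a real model's sensitivities.

```python
import numpy as np

rng = np.random.default_rng(3)
# columns (parameter directions) of wildly differing scale, mimicking a sloppy model
J = rng.normal(size=(200, 8)) @ np.diag(10.0 ** -np.arange(8))
H = J.T @ J                                       # approximate Hessian of a least-squares cost
eigvals = np.sort(np.linalg.eigvalsh(H))[::-1]

print("eigenvalue range (decades):", np.log10(eigvals[0] / eigvals[-1]).round(1))
print((eigvals / eigvals[0]).round(12))           # roughly evenly spread on a log scale
```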

  2. Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics

    Directory of Open Access Journals (Sweden)

    Guanqun eZhang

    2011-11-01

    Full Text Available A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications.

  3. Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics

    Science.gov (United States)

    Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna

    2011-01-01

    A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157

  4. Identifiability of parameters and behaviour of MCMC chains: a case study using the reaction norm model.

    Science.gov (United States)

    Shariati, M M; Korsgaard, I R; Sorensen, D

    2009-04-01

    Markov chain Monte Carlo (MCMC) enables fitting complex hierarchical models that may adequately reflect the process of data generation. Some of these models may contain more parameters than can be uniquely inferred from the distribution of the data, causing non-identifiability. The reaction norm model with unknown covariates (RNUC) is a model in which unknown environmental effects can be inferred jointly with the remaining parameters. The problem of identifiability of parameters at the level of the likelihood and the associated behaviour of MCMC chains were discussed using the RNUC as an example. It was shown theoretically that when environmental effects (covariates) are considered as random effects, estimable functions of the fixed effects, (co)variance components and genetic effects are identifiable as well as the environmental effects. When the environmental effects are treated as fixed and there are other fixed factors in the model, the contrasts involving environmental effects, the variance of environmental sensitivities (genetic slopes) and the residual variance are the only identifiable parameters. These different identifiability scenarios were generated by changing the formulation of the model and the structure of the data and the models were then implemented via MCMC. The output of MCMC sampling schemes was interpreted in the light of the theoretical findings. The erratic behaviour of the MCMC chains was shown to be associated with identifiability problems in the likelihood, despite propriety of posterior distributions, achieved by arbitrarily chosen uniform (bounded) priors. In some cases, very long chains were needed before the pattern of behaviour of the chain may signal the existence of problems. The paper serves as a warning concerning the implementation of complex models where identifiability problems can be difficult to detect a priori. We conclude that it would be good practice to experiment with a proposed model and to understand its features

  5. Parameter estimation and investigation of a bolted joint model

    Science.gov (United States)

    Shiryayev, O. V.; Page, S. M.; Pettit, C. L.; Slater, J. C.

    2007-11-01

    Mechanical joints are a primary source of variability in the dynamics of built-up structures. Physical phenomena in the joint are quite complex and therefore too impractical to model at the micro-scale. This motivates the development of lumped parameter joint models with discrete interfaces so that they can be easily implemented in finite element codes. Among the most important considerations in choosing a model for dynamically excited systems is its ability to model energy dissipation. This translates into the need for accurate and reliable methods to measure model parameters and estimate their inherent variability from experiments. The adjusted Iwan model was identified as a promising candidate for representing joint dynamics. Recent research focused on this model has exclusively employed impulse excitation in conjunction with neural networks to identify the model parameters. This paper presents an investigation of an alternative parameter estimation approach for the adjusted Iwan model, which employs data from oscillatory forcing. This approach is shown to produce parameter estimates with precision similar to the impulse excitation method for a range of model parameters.

  6. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    Full Text Available The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were registered in a weather station located in the Fraternidad Campus at ITM. Wind speed data were applied to a reference model programmed with PSIM software. From that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, but the estimated model offers a higher flexibility than the model programmed in PSIM software.

  7. Parameter estimation of hidden periodic model in random fields

    Institute of Scientific and Technical Information of China (English)

    何书元

    1999-01-01

    The two-dimensional hidden periodic model is an important model in random fields. The model is used in the field of two-dimensional signal processing, prediction and spectral analysis. A method of estimating the parameters of the model is designed. The strong consistency of the estimators is proved.

  8. Novel method for incorporating model uncertainties into gravitational wave parameter estimates.

    Science.gov (United States)

    Moore, Christopher J; Gair, Jonathan R

    2014-12-19

    Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications, these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this Letter, a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalized over using a prior distribution constructed by Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
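
    A toy sketch of the interpolation step described above: fit a Gaussian process to a small "training set" of model differences and use its predictive mean and variance elsewhere in parameter space. It uses scikit-learn's generic GP regressor rather than the authors' code; the one-dimensional parameter and the difference function are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

theta_train = np.linspace(0.0, 1.0, 6).reshape(-1, 1)     # a few "accurate template" locations
diff_train = np.sin(3 * theta_train).ravel() * 0.01       # stand-in waveform difference at those points

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-8)
gp.fit(theta_train, diff_train)

theta_new = np.array([[0.37], [0.81]])
mean, std = gp.predict(theta_new, return_std=True)        # interpolated difference plus its uncertainty
print(mean, std)   # this predictive distribution is what would be marginalized over in the likelihood
```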

  9. Identification of parameters of discrete-continuous models

    Energy Technology Data Exchange (ETDEWEB)

    Cekus, Dawid, E-mail: cekus@imipkm.pcz.pl; Warys, Pawel, E-mail: warys@imipkm.pcz.pl [Institute of Mechanics and Machine Design Foundations, Czestochowa University of Technology, Dabrowskiego 73, 42-201 Czestochowa (Poland)

    2015-03-10

    In the paper, the parameters of a discrete-continuous model have been identified on the basis of experimental investigations and the formulation of an optimization problem. The discrete-continuous model represents a cantilever stepped Timoshenko beam. The mathematical model has been formulated and solved according to the Lagrange multiplier formalism. Optimization has been based on a genetic algorithm. The presented stages of the procedure make it possible to identify any parameters of discrete-continuous systems.

  10. Estimating parameters for generalized mass action models with connectivity information

    Directory of Open Access Journals (Sweden)

    Voit Eberhard O

    2009-05-01

    Full Text Available Abstract Background Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems. Results In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters. Conclusion The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out
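
    A schematic version of the constrained-estimation idea above: minimize the misfit to transient data subject to an equality constraint encoding steady-state information about the parameters. The toy exponential model, the constraint, and all values are assumptions for illustration, not the biochemical pathway models of the study.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 5, 30)
true_p = np.array([1.2, 0.8])
data = true_p[0] * np.exp(-true_p[1] * t) + np.random.default_rng(4).normal(0, 0.02, t.size)

def misfit(p):
    """Sum of squared residuals against the transient measurements."""
    return np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)

# steady-state information: suppose a flux relationship fixes the product p0 * p1
constraint = {"type": "eq", "fun": lambda p: p[0] * p[1] - 0.96}

res = minimize(misfit, x0=[1.0, 1.0], method="SLSQP", constraints=[constraint])
print("constrained estimate:", res.x.round(3))
```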

  11. Eliciting hyperparameters of prior distributions for the parameters of paired comparison models

    Directory of Open Access Journals (Sweden)

    Nasir Abbas

    2013-02-01

    Full Text Available In the study of paired comparisons (PC), items may be ranked or issues may be prioritized through subjective assessment by certain judges. PC models are developed and then used to serve the purpose of ranking. The PC models may be studied through the classical or the Bayesian approach. Bayesian inference is a modern statistical technique used to draw conclusions about population parameters. Its strength lies in incorporating prior information about the parameters into the analysis in addition to current information (i.e. data). The prior and current information are formally combined to yield a posterior distribution for the population parameters, which is the workbench of the Bayesian statistician. However, the problems the Bayesians face concern the selection and formal utilization of the prior distribution. Once the type of prior distribution to be used is decided, the problem of estimating the parameters of the prior distribution (i.e. elicitation) still persists. Different methods are devised to serve the purpose. In this study an attempt is made to use Minimum Chi-square (henceforth MCS) for the elicitation purpose. Though it is a classical estimation technique, it is used here for elicitation. The entire elicitation procedure is illustrated through a numerical data set.

  12. Posterior Tibial Tendon Dysfunction

    Science.gov (United States)

    Posterior tibial tendon dysfunction is one of the most common problems of the foot and ankle. It occurs when the posterior tibial tendon becomes inflamed or torn. As a result, the ...

  13. Towards predictive food process models: A protocol for parameter estimation.

    Science.gov (United States)

    Vilas, Carlos; Arias-Méndez, Ana; Garcia, Miriam R; Alonso, Antonio A; Balsa-Canto, E

    2016-05-31

    Mathematical models, in particular, physics-based models, are essential tools to food product and process design, optimization and control. The success of mathematical models relies on their predictive capabilities. However, describing physical, chemical and biological changes in food processing requires the values of some, typically unknown, parameters. Therefore, parameter estimation from experimental data is critical to achieving desired model predictive properties. This work takes a new look into the parameter estimation (or identification) problem in food process modeling. First, we examine common pitfalls such as lack of identifiability and multimodality. Second, we present the theoretical background of a parameter identification protocol intended to deal with those challenges. And, to finish, we illustrate the performance of the proposed protocol with an example related to the thermal processing of packaged foods.

  14. Influence of the calcaneus shape on the risk of posterior heel ulcer using 3D patient-specific biomechanical modeling.

    Science.gov (United States)

    Luboz, V; Perrier, A; Bucki, M; Diot, B; Cannard, F; Vuillerme, N; Payan, Y

    2015-02-01

    Most posterior heel ulcers are the consequence of inactivity and prolonged time lying down on the back. They appear when pressures applied on the heel create high internal strains and the soft tissues are compressed by the calcaneus. It is therefore important to monitor those strains to prevent heel pressure ulcers. Using a biomechanical lower leg model, we propose to estimate the influence of the patient-specific calcaneus shape on the strains within the foot and to determine if the risk of pressure ulceration is related to the variability of this shape. The biomechanical model is discretized using a 3D Finite Element mesh representing the soft tissues, separated into four domains implementing Neo-Hookean materials with different elasticities: skin, fat, Achilles' tendon, and muscles. Bones are modelled as rigid bodies attached to the tissues. Simulations show that the shape of the calcaneus has an influence on the formation of pressure ulcers, with a mean variation of the maximum strain over 6.0 percentage points across 18 distinct morphologies. Furthermore, the models confirm the influence of the cushion on which the leg is resting: since a softer cushion leads to lower strains, it is less likely to cause a pressure ulcer. The methodology used for patient-specific strain estimation could be used for the prevention of heel ulcer when coupled with a pressure sensor.

  15. The contribution of the lateral posterior and anteroventral thalamic nuclei on spontaneous recurrent seizures in the pilocarpine model of epilepsy

    Directory of Open Access Journals (Sweden)

    Scorza Fulvio Alexandre

    2002-01-01

    Full Text Available The pilocarpine model of epilepsy in rats is characterised by the occurrence of spontaneous recurrent seizures (SRSs) during the chronic period, which recur 2-3 times per week throughout the animal's life. In a previous study on brain metabolism during the chronic period of the pilocarpine model it was possible to observe that, among several brain structures, the lateral posterior thalamic nuclei (LP) showed a strikingly increased metabolism. Some evidence suggests that the LP can participate in an inhibitory control system involved in the propagation of the seizures. The aim of the present study was to verify the role of the LP in the expression and frequency of spontaneous seizures observed in the pilocarpine model. Ten adult male rats presenting SRSs were monitored for behavioural events by a video system one month before and one month after LP ibotenic acid lesion. Another group of chronic epileptic rats (n=10) had the anteroventral thalamic nuclei (AV) lesioned by ibotenic acid. After the surgical procedure, the animals were sacrificed and the brains were processed for histological analysis by the Nissl method. The LP group seizure frequency was 3.1±1.9 per week before ibotenic acid injection and increased to 16.3±7.2 per week after LP lesion. No changes in SRS frequency were observed in the AV group after ibotenic lesion of these nuclei. These results seem to suggest that the LP plays a role in the seizure circuitry, inhibiting the expression of spontaneous seizures in the pilocarpine model.

  16. Change in the Pathologic Supraspinatus: A Three-Dimensional Model of Fiber Bundle Architecture within Anterior and Posterior Regions

    Directory of Open Access Journals (Sweden)

    Soo Y. Kim

    2015-01-01

    Full Text Available Supraspinatus tendon tears are common and lead to changes in the muscle architecture. To date, these changes have not been investigated for the distinct regions and parts of the pathologic supraspinatus. The purpose of this study was to create a novel three-dimensional (3D model of the muscle architecture throughout the supraspinatus and to compare the architecture between muscle regions and parts in relation to tear severity. Twelve cadaveric specimens with varying degrees of tendon tears were used. Three-dimensional coordinates of fiber bundles were collected in situ using serial dissection and digitization. Data were reconstructed and modeled in 3D using Maya. Fiber bundle length (FBL and pennation angle (PA were computed and analyzed. FBL was significantly shorter in specimens with large retracted tears compared to smaller tears, with the deeper fibers being significantly shorter than other parts in the anterior region. PA was significantly greater in specimens with large retracted tears, with the superficial fibers often demonstrating the largest PA. The posterior region was absent in two specimens with extensive tears. Architectural changes associated with tendon tears affect the regions and varying depths of supraspinatus differently. The results provide important insights on residual function of the pathologic muscle, and the 3D model includes detailed data that can be used in future modeling studies.

  17. Estimation of the input parameters in the Feller neuronal model

    Science.gov (United States)

    Ditlevsen, Susanne; Lansky, Petr

    2006-06-01

    The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.

  18. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-05-01

    Full Text Available Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
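
    A hedged illustration of the final downhill simplex step in the three-step procedure described above, using SciPy's Nelder-Mead on a stand-in objective. In a real application the toy `skill_metric` would be replaced by the comprehensive evaluation metric computed from GCM simulations; the parameter names and starting values here are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def skill_metric(params):
    # toy surrogate: distance of two sensitive cloud/convection parameters from an optimum
    return np.sum((np.asarray(params) - np.array([0.7, 2.0])) ** 2)

x0 = [0.5, 1.0]          # "optimum initial values of sensitive parameters" from step two
res = minimize(skill_metric, x0, method="Nelder-Mead", options={"xatol": 1e-4, "fatol": 1e-4})
print("tuned parameters:", res.x.round(3), "metric:", round(res.fun, 6))
```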

  19. Optimal parameters for the FFA-Beddoes dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerck, A.; Mert, M. [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H.A. [Risoe National Lab., Roskilde (Denmark)

    1999-03-01

    Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. Models incorporated in aero-elastic programs are of a semi-empirical nature. Resulting aerodynamic forces therefore depend on the values used for the semi-empirical parameters. In this paper a study of finding appropriate parameters to use with the Beddoes-Leishman model is discussed. Minimisation of the 'tracking error' between results from 2D wind tunnel tests and simulation with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)

  20. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogenous or stratified soil. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines and other models applied to fast evaluation of structural response during excitation and the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling between horizontal sliding and rocking is discussed.

  1. A New Approach for Parameter Optimization in Land Surface Model

    Institute of Scientific and Technical Information of China (English)

    LI Hongqi; GUO Weidong; SUN Guodong; ZHANG Yaocun; FU Congbin

    2011-01-01

    In this study, a new parameter optimization method was used to investigate the expansion of conditional nonlinear optimal perturbation (CNOP) in a land surface model (LSM) using long-term enhanced field observations at Tongyu station in Jilin Province, China, combined with a sophisticated LSM (Common Land Model, CoLM). Tongyu station is a reference site of the international Coordinated Energy and Water Cycle Observations Project (CEOP) that has studied semiarid regions that have undergone desertification, salination, and degradation since the late 1960s. In this study, three key land-surface parameters, namely soil color, proportion of sand or clay in soil, and leaf-area index, were chosen as the parameters to be optimized. Our study comprised three experiments: the first performed a single-parameter optimization, while the second and third performed triple- and six-parameter optimizations, respectively. Notable improvements in simulating sensible heat flux (SH), latent heat flux (LH), soil temperature (TS), and moisture (MS) at shallow layers were achieved using the optimized parameters. The multiple-parameter optimization experiments performed better than the single-parameter experiment. All results demonstrate that the CNOP method can be used to optimize expanded parameters in an LSM. Moreover, clear mathematical meaning, simple design structure, and rapid computability give this method great potential for further application to parameter optimization in LSMs.

  2. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    . Second, it permits incorporation of prior information on parameter values. Third, it can be applied in the absence of copious data. Finally, it supplies measures of the capacity of the model to reproduce the historical record and the statistical significance of parameter estimates. The method is applied...

  3. Estimating winter wheat phenological parameters: Implications for crop modeling

    Science.gov (United States)

    Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...

  4. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single code/multiple physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions because: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  5. Retrospective forecast of ETAS model with daily parameters estimate

    Science.gov (United States)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type of Aftershock Sequence) model based on the daily updating of free parameters during the background, the learning and the test phase of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecasted events, due to the model parameters being kept fixed during the test. Moreover, the absence in the learning catalog of an event similar in magnitude to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters not suitable to describe the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The achievement of the model with daily updated parameters is compared with that of the same model where the parameters remain fixed during the test period.

  6. Parameter Estimates in Differential Equation Models for Population Growth

    Science.gov (United States)

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
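
    The article offers Mathematica code for a gradient search; the following is a hedged Python analogue using least-squares curve fitting, with made-up data in the spirit of the microbial growth experiments.

```python
# Fit the logistic growth parameters r (rate), K (carrying capacity) and N0
# (initial population) to a time series of counts.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, r, K, N0):
    return K / (1 + (K / N0 - 1) * np.exp(-r * t))

# Illustrative data (values invented for the sketch).
t = np.array([0, 6, 12, 18, 24, 30, 36, 48], dtype=float)
N = np.array([2.0, 10.0, 45.0, 120.0, 200.0, 240.0, 255.0, 260.0])

popt, pcov = curve_fit(logistic, t, N, p0=[0.2, 260.0, 2.0])
r, K, N0 = popt
print(f"r = {r:.3f} per hour, K = {K:.1f}, N0 = {N0:.2f}")
```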

  7. Dynamic Modeling and Parameter Identification of Power Systems

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The generator, the excitation system, the steam turbine and speed governor, and the load are the so-called four key models of power systems. Mathematical modeling and parameter identification for the four key models are of great importance as the basis for designing, operating, and analyzing power systems.

  8. Dynamic Load Model using PSO-Based Parameter Estimation

    Science.gov (United States)

    Taoka, Hisao; Matsuki, Junya; Tomoda, Michiya; Hayashi, Yasuhiro; Yamagishi, Yoshio; Kanao, Norikazu

    This paper presents a new method for estimating the unknown parameters of a dynamic load model represented as a parallel composite of a constant impedance load and an induction motor behind a series constant reactance. An adequate dynamic load model is essential for evaluating power system stability, and this model can represent the behavior of an actual load when appropriate parameters are used. However, the drawback of this model is that many parameters are necessary and they are not easy to estimate. We propose an estimation method based on Particle Swarm Optimization (PSO), a non-linear optimization method, using data on voltage, active power and reactive power measured during a voltage sag.
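
    A bare-bones PSO loop of the kind described above might look as follows; the two-parameter load_response function and the measured data are illustrative assumptions, not the composite impedance/induction-motor model of the paper.

```python
# Minimal particle swarm optimisation (PSO) sketch for fitting unknown
# load-model parameters to measured voltage-sag responses.
import numpy as np

rng = np.random.default_rng(0)

def load_response(params, v):
    # Placeholder: active power as a function of per-unit voltage.
    z_share, motor_gain = params
    return z_share * v ** 2 + (1 - z_share) * motor_gain * v

v_meas = np.linspace(0.6, 1.0, 50)                    # voltage during a sag
p_meas = load_response((0.4, 1.2), v_meas) + 0.005 * rng.standard_normal(50)

def cost(params):
    return np.mean((load_response(params, v_meas) - p_meas) ** 2)

# PSO with standard inertia / cognitive / social weights.
n_particles, n_iter = 30, 200
lb, ub = np.array([0.0, 0.5]), np.array([1.0, 2.0])
x = rng.uniform(lb, ub, size=(n_particles, 2))
vel = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + vel, lb, ub)
    c = np.array([cost(p) for p in x])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], c[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("estimated parameters:", gbest)
```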

  9. Parameter Estimation for the Thurstone Case III Model.

    Science.gov (United States)

    Mackay, David B.; Chaiy, Seoil

    1982-01-01

    The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)

  10. Comparing spatial and temporal transferability of hydrological model parameters

    Science.gov (United States)

    Patil, Sopan D.; Stieglitz, Marc

    2015-06-01

    Operational use of hydrological models requires the transfer of calibrated parameters either in time (for streamflow forecasting) or space (for prediction at ungauged catchments) or both. Although the effects of spatial and temporal parameter transfer on catchment streamflow predictions have been well studied individually, a direct comparison of these approaches is much less documented. Here, we compare three different schemes of parameter transfer, viz., temporal, spatial, and spatiotemporal, using a spatially lumped hydrological model called EXP-HYDRO at 294 catchments across the continental United States. Results show that the temporal parameter transfer scheme performs best, with lowest decline in prediction performance (median decline of 4.2%) as measured using the Kling-Gupta efficiency metric. More interestingly, negligible difference in prediction performance is observed between the spatial and spatiotemporal parameter transfer schemes (median decline of 12.4% and 13.9% respectively). We further demonstrate that the superiority of temporal parameter transfer scheme is preserved even when: (1) spatial distance between donor and receiver catchments is reduced, or (2) temporal lag between calibration and validation periods is increased. Nonetheless, increase in the temporal lag between calibration and validation periods reduces the overall performance gap between the three parameter transfer schemes. Results suggest that spatiotemporal transfer of hydrological model parameters has the potential to be a viable option for climate change related hydrological studies, as envisioned in the "trading space for time" framework. However, further research is still needed to explore the relationship between spatial and temporal aspects of catchment hydrological variability.
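
    The comparison rests on the Kling-Gupta efficiency and the percentage decline between calibrated and transferred parameter sets; a small sketch of both computations, with made-up flow series, is shown below.

```python
# Kling-Gupta efficiency (KGE) and the "decline in performance" measure used
# to compare parameter-transfer schemes.
import numpy as np

def kge(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)      # variability ratio
    beta = np.mean(sim) / np.mean(obs)     # bias ratio
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0, 2.2, 1.8, 3.0])
sim_calibrated = obs + np.array([0.1, -0.2, 0.0, 0.3, -0.1, 0.05, 0.0, -0.15])
sim_transferred = obs * 0.9 + 0.3          # e.g. parameters borrowed from another period/catchment

kge_cal = kge(sim_calibrated, obs)
kge_tra = kge(sim_transferred, obs)
decline = 100 * (kge_cal - kge_tra) / kge_cal
print(f"KGE calibrated {kge_cal:.3f}, transferred {kge_tra:.3f}, decline {decline:.1f}%")
```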

  11. Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.

    Science.gov (United States)

    Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
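
    A conceptual sketch of the IUWLS idea, in which the observation weights depend on the sensitivity of the prediction to the uncertain pumping input and are re-evaluated during estimation, is given below; the linear drawdown model and uncertainty levels are toy assumptions, not the Republican River Basin model.

```python
# Input-uncertainty-weighted least squares (conceptual sketch): observation
# weights are reduced where the uncertain pumping signal has the strongest
# influence, and are updated as the parameter estimate changes.
import numpy as np

def drawdown(theta, pumping):
    # Toy linear "groundwater model": head response to pumping, two parameters.
    a, b = theta
    return a * pumping + b

pumping_reported = np.array([10.0, 12.0, 15.0, 9.0, 14.0, 11.0])   # uncertain input
sigma_pumping = 0.15 * pumping_reported                             # assumed input uncertainty
obs = drawdown((0.8, 2.0), pumping_reported * 1.05) \
      + 0.05 * np.random.default_rng(1).standard_normal(6)
sigma_obs = 0.05

theta = np.array([1.0, 0.0])
for _ in range(10):
    # Sensitivity of the prediction to the uncertain input at the current estimate.
    dfdq = theta[0] * np.ones_like(pumping_reported)
    var_total = sigma_obs ** 2 + (dfdq * sigma_pumping) ** 2
    W = np.diag(1.0 / var_total)
    X = np.column_stack([pumping_reported, np.ones_like(pumping_reported)])
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ obs)

print("IUWLS estimate:", theta)
```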

  12. Parameter estimation for groundwater models under uncertain irrigation data

    Science.gov (United States)

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.

  13. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    the parameters, including the noise terms. The parameter estimation method is a maximum likelihood (ML) method where the likelihood function is evaluated using a Kalman filter technique. The ML method estimates the parameters in a prediction error setting, i.e. the sum of squared prediction errors is minimized. For comparison, the parameters are also estimated by an output error method, where the sum of squared simulation errors is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulation. Hence, depending on the purpose, it is possible to select whether the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series...

  14. Transformations among CE–CVM model parameters for multicomponent systems

    Indian Academy of Sciences (India)

    B Nageswara Sarma; Shrikant Lele

    2005-06-01

    In the development of thermodynamic databases for multicomponent systems using the cluster expansion–cluster variation methods, we need to have a consistent procedure for expressing the model parameters (CECs) of a higher order system in terms of those of the lower order subsystems and to an independent set of parameters which exclusively represent interactions of the higher order systems. Such a procedure is presented in detail in this communication. Furthermore, the details of transformations required to express the model parameters in one basis from those defined in another basis for the same system are also presented.

  15. SPOTting Model Parameters Using a Ready-Made Python Package.

    Science.gov (United States)

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: to parameterize the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function, and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
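
    A hedged usage outline of the SPOTPY setup-class interface described above is shown below for the Rosenbrock case study; method and argument names follow the package's documented pattern but may differ between versions, so treat them as assumptions rather than verified code.

```python
# Sketch of a SPOTPY setup class (parameters / simulation / evaluation /
# objectivefunction) calibrated with the SCE-UA sampler.
import spotpy

class RosenbrockSetup(object):
    def __init__(self):
        self.params = [spotpy.parameter.Uniform('x', -5, 5),
                       spotpy.parameter.Uniform('y', -5, 5)]

    def parameters(self):
        return spotpy.parameter.generate(self.params)

    def simulation(self, vector):
        x, y = vector
        return [(1 - x) ** 2 + 100 * (y - x ** 2) ** 2]

    def evaluation(self):
        return [0.0]      # known optimum of the Rosenbrock function

    def objectivefunction(self, simulation, evaluation):
        # SCE-UA minimizes the returned value (assumption for this sketch).
        return spotpy.objectivefunctions.rmse(evaluation, simulation)

sampler = spotpy.algorithms.sceua(RosenbrockSetup(), dbname='rosen', dbformat='csv')
sampler.sample(2000)
```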

  16. SPOTting Model Parameters Using a Ready-Made Python Package.

    Directory of Open Access Journals (Sweden)

    Tobias Houska

    Full Text Available The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: to parameterize the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function, and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.

  17. Numerical modeling of piezoelectric transducers using physical parameters.

    Science.gov (United States)

    Cappon, Hans; Keesman, Karel J

    2012-05-01

    Design of ultrasonic equipment is frequently facilitated with numerical models. These numerical models, however, need a calibration step, because usually not all characteristics of the materials used are known. Characterization of material properties combined with numerical simulations and experimental data can be used to acquire valid estimates of the material parameters. In our design application, a finite element (FE) model of an ultrasonic particle separator, driven by an ultrasonic transducer in thickness mode, is required. A limited set of material parameters for the piezoelectric transducer were obtained from the manufacturer, thus preserving prior physical knowledge to a large extent. The remaining unknown parameters were estimated from impedance analysis with a simple experimental setup combined with a numerical optimization routine using 2-D and 3-D FE models. Thus, a full set of physically interpretable material parameters was obtained for our specific purpose. The approach provides adequate accuracy of the estimates of the material parameters, near 1%. These parameter estimates will subsequently be applied in future design simulations, without the need to go through an entire series of characterization experiments. Finally, a sensitivity study showed that small variations of 1% in the main parameters caused changes near 1% in the eigenfrequency, but changes up to 7% in the admittance peak, thus influencing the efficiency of the system. Temperature will already cause these small variations in response; thus, a frequency control unit is required when actually manufacturing an efficient ultrasonic separation system.

  18. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.

  19. An Effective Parameter Screening Strategy for High Dimensional Watershed Models

    Science.gov (United States)

    Khare, Y. P.; Martinez, C. J.; Munoz-Carpena, R.

    2014-12-01

    Watershed simulation models can assess the impacts of natural and anthropogenic disturbances on natural systems. These models have become important tools for tackling a range of water resources problems through their implementation in the formulation and evaluation of Best Management Practices, Total Maximum Daily Loads, and Basin Management Action Plans. For accurate applications of watershed models they need to be thoroughly evaluated through global uncertainty and sensitivity analyses (UA/SA). However, due to the high dimensionality of these models such evaluation becomes extremely time- and resource-consuming. Parameter screening, the qualitative separation of important parameters, has been suggested as an essential step before applying rigorous evaluation techniques such as the Sobol' and Fourier Amplitude Sensitivity Test (FAST) methods in the UA/SA framework. The method of elementary effects (EE) (Morris, 1991) is one of the most widely used screening methodologies. Some of the common parameter sampling strategies for EE, e.g. Optimized Trajectories [OT] (Campolongo et al., 2007) and Modified Optimized Trajectories [MOT] (Ruano et al., 2012), suffer from inconsistencies in the generated parameter distributions, infeasible sample generation time, etc. In this work, we have formulated a new parameter sampling strategy - Sampling for Uniformity (SU) - for parameter screening which is based on the principles of the uniformity of the generated parameter distributions and the spread of the parameter sample. A rigorous multi-criteria evaluation (time, distribution, spread and screening efficiency) of OT, MOT, and SU indicated that SU is superior to other sampling strategies. Comparison of the EE-based parameter importance rankings with those of Sobol' helped to quantify the qualitativeness of the EE parameter screening approach, reinforcing the fact that one should use EE only to reduce the resource burden required by FAST/Sobol' analyses but not to replace it.
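
    For context, a compact sketch of the elementary effects (Morris) screening measure that the sampling strategies above feed into is shown below; the model function and settings are illustrative.

```python
# Method of elementary effects (Morris screening): each trajectory perturbs one
# parameter at a time; the mean absolute effect (mu*) ranks importance.
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    # Hypothetical response: parameters 0 and 2 matter, parameter 1 barely does.
    return 5 * x[0] + 0.1 * np.sin(x[1]) + 3 * x[2] ** 2

k, r, delta = 3, 20, 0.25          # parameters, trajectories, step size
effects = np.zeros((r, k))

for t in range(r):
    x = rng.uniform(0, 1 - delta, size=k)
    y0 = model(x)
    for j in rng.permutation(k):
        x_new = x.copy()
        x_new[j] += delta
        y1 = model(x_new)
        effects[t, j] = (y1 - y0) / delta
        x, y0 = x_new, y1

mu_star = np.abs(effects).mean(axis=0)     # screening measure
sigma = effects.std(axis=0)                # interaction / nonlinearity indicator
print("mu* per parameter:", np.round(mu_star, 3))
print("sigma per parameter:", np.round(sigma, 3))
```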

  20. Minor hysteresis loops model based on exponential parameters scaling of the modified Jiles-Atherton model

    Energy Technology Data Exchange (ETDEWEB)

    Hamimid, M., E-mail: Hamimid_mourad@hotmail.com [Laboratoire de modelisation des systemes energetiques LMSE, Universite de Biskra, BP 145, 07000 Biskra (Algeria); Mimoune, S.M., E-mail: s.m.mimoune@mselab.org [Laboratoire de modelisation des systemes energetiques LMSE, Universite de Biskra, BP 145, 07000 Biskra (Algeria); Feliachi, M., E-mail: mouloud.feliachi@univ-nantes.fr [IREENA-IUT, CRTT, 37 Boulevard de l' Universite, BP 406, 44602 Saint Nazaire Cedex (France)

    2012-07-01

    In the present work, the minor hysteresis loop model based on parameter scaling of the modified Jiles-Atherton model is evaluated using judicious expressions. These expressions give the minor hysteresis loop parameters as functions of the major hysteresis loop ones. They have an exponential form and are obtained by parameter identification using the stochastic optimization method 'simulated annealing'. The main parameters influencing the data fitting are three: the pinning parameter k, the mean field parameter α, and the parameter a which characterizes the shape of the anhysteretic magnetization curve. To validate this model, calculated minor hysteresis loops are compared with measured ones and good agreement is obtained.
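
    The scaling idea can be sketched as follows; the functional form mirrors the exponential dependence described above, but the coefficients and parameter values are invented for illustration, not the identified ones.

```python
# Exponential scaling of Jiles-Atherton parameters from the major loop to a
# minor loop of smaller amplitude (all numbers are illustrative assumptions).
import numpy as np

def scale_parameter(p_major, b_minor, b_major, lam):
    """Minor-loop value of a JA parameter via exponential scaling."""
    return p_major * np.exp(-lam * (1.0 - b_minor / b_major))

B_major = 1.5                                       # peak induction of the major loop (T)
major = {"k": 60.0, "alpha": 1.5e-3, "a": 70.0}     # major-loop JA parameters (assumed)
lam = {"k": 2.0, "alpha": 1.2, "a": 1.6}            # assumed scaling exponents

for B_minor in (0.5, 1.0, 1.3):
    minor = {name: scale_parameter(val, B_minor, B_major, lam[name])
             for name, val in major.items()}
    print(B_minor, {n: round(v, 4) for n, v in minor.items()})
```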

  1. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    Science.gov (United States)

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  2. MODELING OF FUEL SPRAY CHARACTERISTICS AND DIESEL COMBUSTION CHAMBER PARAMETERS

    Directory of Open Access Journals (Sweden)

    G. M. Kukharonak

    2011-01-01

    Full Text Available A computer model for coordinating fuel spray characteristics with diesel combustion chamber parameters has been created in the paper. The model allows observing fuel spray development in the diesel cylinder at any moment of injection, calculating the characteristics of the fuel sprays with due account of the shape and dimensions of the combustion chamber, and changing the fuel injection characteristics and supercharging parameters, as well as the shape and dimensions of the combustion chamber, in a timely manner. Moreover, the computer model makes it possible to determine the parameters of the holes in an injector nozzle that provide the required fuel spray characteristics at the stage of designing a diesel engine. Combustion chamber parameters for the 4ЧН11/12.5 diesel engine have been determined in the paper.

  3. Mathematically Modeling Parameters Influencing Surface Roughness in CNC Milling

    Directory of Open Access Journals (Sweden)

    Engin Nas

    2012-01-01

    Full Text Available In this study, AISI 1050 steel is subjected to a face-milling process on a CNC milling machine and the parameters influencing surface roughness, such as cutting speed, feed rate, cutting tip and depth of cut, are investigated experimentally. Four different experiments are conducted by creating different combinations of the parameters. In the experiments, PVD-coated cutting tools of the type used for forging steel and spheroidal graphite cast iron are employed. The surface roughness values obtained with the specified parameters and cutting tools are measured, and the correlation between the measured surface roughness values and the parameters is modeled mathematically using a curve-fitting algorithm. The mathematical models are evaluated according to their coefficients of determination (R2), and the best one is suggested for theoretical work. The mathematical models proposed for each experiment are presented.

  4. Regionalization parameters of conceptual rainfall-runoff model

    Science.gov (United States)

    Osuch, M.

    2003-04-01

    The main goal of this study was to develop techniques for a priori estimation of the parameters of a hydrological model. The conceptual hydrological model CLIRUN was applied to around 50 catchments in Poland, with sizes ranging from 1 000 to 100 000 km2. The model was calibrated for a number of gauged catchments with different catchment characteristics. The parameters of the model were related to different climatic and physical catchment characteristics (topography, land use, vegetation and soil type). The relationships were tested by comparing observed and simulated runoff series from gauged catchments that were not used in the calibration. The model performance using regional parameters was promising for most of the calibration and validation catchments.

  5. Posterior Probability Modeling and Image Classification for Archaeological Site Prospection: Building a Survey Efficacy Model for Identifying Neolithic Felsite Workshops in the Shetland Islands

    Directory of Open Access Journals (Sweden)

    William P. Megarry

    2016-06-01

    Full Text Available The application of custom classification techniques and posterior probability modeling (PPM) using Worldview-2 multispectral imagery to archaeological field survey is presented in this paper. Research is focused on the identification of Neolithic felsite stone tool workshops in the North Mavine region of the Shetland Islands in Northern Scotland. Sample data from known workshops surveyed using differential GPS are used alongside known non-sites to train a linear discriminant analysis (LDA) classifier based on a combination of datasets including Worldview-2 bands, band difference ratios (BDR) and topographical derivatives. Principal components analysis is further used to test and reduce dimensionality caused by redundant datasets. Probability models were generated by LDA using principal components and tested with sites identified through geological field survey. Testing shows the prospective ability of this technique, with significance between 0.05 and 0.01 and gain statistics between 0.90 and 0.94, higher than those obtained using maximum likelihood and random forest classifiers. Results suggest that this approach is best suited to relatively homogenous site types, and performs better with correlated data sources. Finally, by combining posterior probability models and least-cost analysis, a survey least-cost efficacy model is generated showing the utility of such approaches to archaeological field survey.
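
    A small sketch of the classification-to-posterior-probability step using scikit-learn is given below; the feature values are synthetic and the PCA/LDA pipeline is a generic stand-in for the study's trained classifier.

```python
# Train a linear discriminant classifier on band values from known workshop and
# non-site locations, then return P(workshop | pixel) for new pixels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)

# Rows: sampled pixels; columns: bands, band ratios, terrain derivatives (synthetic).
X_sites = rng.normal(loc=[0.35, 0.20, 0.55, 120.0],
                     scale=[0.05, 0.04, 0.06, 15.0], size=(60, 4))
X_nonsites = rng.normal(loc=[0.25, 0.28, 0.40, 90.0],
                        scale=[0.05, 0.04, 0.06, 15.0], size=(120, 4))
X = np.vstack([X_sites, X_nonsites])
y = np.array([1] * 60 + [0] * 120)

model = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
model.fit(X, y)

# Posterior probability for a few new pixels.
X_new = rng.normal(loc=[0.33, 0.22, 0.52, 115.0],
                   scale=[0.05, 0.04, 0.06, 15.0], size=(5, 4))
posterior = model.predict_proba(X_new)[:, 1]
print("P(workshop):", np.round(posterior, 3))
```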

  6. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum Likelihood and Least Square estimation techniques are used to estimate fatigue life distribution parameters....
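
    The final estimation step can be sketched as follows, assuming the physics-of-failure simulation has already produced a sample of fatigue lives (here replaced by random draws); both maximum likelihood and a least-squares probability-plot fit are shown.

```python
# Fit a two-parameter Weibull distribution to simulated fatigue lives.
import numpy as np
from scipy import stats

lives = stats.weibull_min.rvs(2.2, loc=0, scale=8.0e4, size=200, random_state=3)  # surrogate PoF output

# Maximum likelihood estimate (location fixed at zero).
shape_ml, _, scale_ml = stats.weibull_min.fit(lives, floc=0)

# Least-squares (median-rank regression) estimate for comparison.
sorted_lives = np.sort(lives)
n = len(sorted_lives)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)          # median ranks
x = np.log(sorted_lives)
y = np.log(-np.log(1 - F))
slope, intercept = np.polyfit(x, y, 1)
shape_ls, scale_ls = slope, np.exp(-intercept / slope)

print(f"ML:  shape={shape_ml:.2f}, scale={scale_ml:.3g}")
print(f"LS:  shape={shape_ls:.2f}, scale={scale_ls:.3g}")
```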

  7. MODELING PARAMETERS OF ARC OF ELECTRIC ARC FURNACE

    Directory of Open Access Journals (Sweden)

    R.N. Khrestin

    2015-08-01

    Full Text Available Purpose. The aim is to build a mathematical model of the electric arc of an electric arc furnace (EAF). The model should clearly show the relationship between the main parameters of the arc, which determine its properties and the possibility of optimizing the melting mode. Methodology. We have built a fairly simple model of the arc that satisfies the above requirements. The model is designed for the analysis of electromagnetic processes in an arc of varying length. We have compared the results obtained when testing the model with results obtained on actual furnaces. Results. During melting in a real EAF, the properties of the arc plasma change under the influence of temperature changes, and the proposed model takes these changes into account. Adjusting the arc length is the main way to regulate the melting mode of the EAF; the arc length is controlled by the movement of the electrode drive. The model reflects the dynamic changes in the arc parameters when its length changes. We obtained the dynamic current-voltage characteristics (CVC) of the arc for the different stages of melting, as well as the arc voltage waveform, and identified criteria by which the stage of melting can be recognized. Originality. In contrast to previously known models, this model clearly shows the relationship between the main parameters of the EAF arc: the arc voltage Ud, the arc current id and the arc length d. Comparison of the simulation results with experimental data obtained from a real EAF showed the adequacy of the constructed model. It was found that the character of the change of the quantity Md helps determine the stage of melting. Practical value. The model can be used to simulate melting in an EAF of any capacity. Thus, when designing the control system for the electrode-moving mechanism, the model accounts for changes in the arc parameters, which can significantly reduce electrode material consumption and energy consumption.

  8. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  9. Posterior ankle impingement.

    Science.gov (United States)

    Giannini, Sandro; Buda, Roberto; Mosca, Massimiliano; Parma, Alessandro; Di Caprio, Francesco

    2013-03-01

    Posterior ankle impingement is a common cause of chronic ankle pain and results from compression of bony or soft tissue structures during ankle plantar flexion. Bony impingement is most commonly related to an os trigonum or prominent trigonal process. Posteromedial soft tissue impingement generally arises from an inversion injury, with compression of the posterior tibiotalar ligament between the medial malleolus and talus. Posterolateral soft tissue impingement is caused by an accessory ligament, the posterior intermalleolar ligament, which spans the posterior ankle between the posterior tibiofibular and posterior talofibular ligaments. Finally, anomalous muscles have also been described as a cause of posterior impingement.

  10. A mouse model of ocular blast injury that induces closed globe anterior and posterior pole damage

    Science.gov (United States)

    Hines-Beard, Jessica; Marchetta, Jeffrey; Gordon, Sarah; Chaum, Edward; Geisert, Eldon E.; Rex, Tonia S.

    2012-01-01

    We developed and characterized a mouse model of primary ocular blast injury. The device consists of: a pressurized air tank attached to a regulated paintball gun with a machined barrel; a chamber that protects the mouse from direct injury and recoil, while exposing the eye; and a secure platform that enables fine, controlled movement of the chamber in relation to the barrel. Expected pressures were calculated and the optimal pressure transducer, based on the predicted pressures, was positioned to measure output pressures at the location where the mouse eye would be placed. Mice were exposed to one of three blast pressures (23.6, 26.4, or 30.4 psi). Gross pathology, intraocular pressure, optical coherence tomography, and visual acuity were assessed 0, 3, 7, 14, and 28 days after exposure. Contralateral eyes and non-blast exposed mice were used as controls. We detected increased damage with increased pressures and a shift in the damage profile over time. Gross pathology included corneal edema, corneal abrasions, and optic nerve avulsion. Retinal damage was detected by optical coherence tomography and a deficit in visual acuity was detected by optokinetics. Our findings are comparable to those identified in Veterans of the recent wars with closed eye injuries as a result of blast exposure. In summary, this is a relatively simple system that creates injuries with features similar to those seen in patients with ocular blast trauma. This is an important new model for testing the short-term and long-term spectrum of closed globe blast injuries and potential therapeutic interventions. PMID:22504073

  11. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  12. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699

  13. Construction of constant-Q viscoelastic model with three parameters

    Institute of Scientific and Technical Information of China (English)

    SUN Cheng-yu; YIN Xing-yao

    2007-01-01

    The popularly used viscoelastic models have some shortcomings in describing the relationship between the quality factor (Q) and frequency, which is not consistent with observational data. Based on the theory of viscoelasticity, a new approach to constructing a constant-Q viscoelastic model in a given frequency band with three parameters is developed. The designed model describes the frequency independence of the quality factor very well, and the effect of viscoelasticity on the seismic wave field can be studied relatively accurately in theory with this model. Furthermore, the number of required parameters in this model is smaller than that of other constant-Q models, which can simplify the solution of viscoelastic problems to some extent. Finally, the accuracy and range of application have been analyzed through numerical tests. The effect of viscoelasticity on wave propagation is briefly illustrated through the change of frequency spectra and waveforms in several different viscoelastic models.

  14. Global-scale regionalization of hydrologic model parameters

    Science.gov (United States)

    Beck, Hylke E.; van Dijk, Albert I. J. M.; de Roo, Ad; Miralles, Diego G.; McVicar, Tim R.; Schellekens, Jaap; Bruijnzeel, L. Adrian

    2016-05-01

    Current state-of-the-art models typically applied at continental to global scales (hereafter called macroscale) tend to use a priori parameters, resulting in suboptimal streamflow (Q) simulation. For the first time, a scheme for regionalization of model parameters at the global scale was developed. We used data from a diverse set of 1787 small-to-medium sized catchments (10-10,000 km2) and the simple conceptual HBV model to set up and test the scheme. Each catchment was calibrated against observed daily Q, after which 674 catchments with high calibration and validation scores, and thus presumably good-quality observed Q and forcing data, were selected to serve as donor catchments. The calibrated parameter sets for the donors were subsequently transferred to 0.5° grid cells with similar climatic and physiographic characteristics, resulting in parameter maps for HBV with global coverage. For each grid cell, we used the 10 most similar donor catchments, rather than the single most similar donor, and averaged the resulting simulated Q, which enhanced model performance. The 1113 catchments not used as donors were used to independently evaluate the scheme. The regionalized parameters outperformed spatially uniform (i.e., averaged calibrated) parameters for 79% of the evaluation catchments. Substantial improvements were evident for all major Köppen-Geiger climate types and even for evaluation catchments > 5000 km distant from the donors. The median improvement was about half of the performance increase achieved through calibration. HBV with regionalized parameters outperformed nine state-of-the-art macroscale models, suggesting these might also benefit from the new regionalization scheme. The produced HBV parameter maps including ancillary data are available via www.gloh2o.org.
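
    The donor-averaging step of the regionalization scheme can be sketched as follows; run_hbv, the attribute set and the donor data are hypothetical placeholders for the calibrated HBV runs and catchment descriptors.

```python
# Regionalization by donor averaging: find the k most similar gauged donor
# catchments in climate/physiography space, run the model with each donor's
# calibrated parameters, and average the simulated flows.
import numpy as np

rng = np.random.default_rng(0)
n_donors, n_attributes, n_days = 674, 6, 365

donor_attributes = rng.normal(size=(n_donors, n_attributes))   # standardised descriptors (synthetic)
donor_parameters = rng.uniform(size=(n_donors, 12))            # calibrated parameter sets (synthetic)

def run_hbv(params, forcing):
    # Placeholder: any parameter-dependent streamflow simulation would do here.
    return forcing * params[:3].sum()

def regionalised_flow(cell_attributes, forcing, k=10):
    distance = np.linalg.norm(donor_attributes - cell_attributes, axis=1)
    nearest = np.argsort(distance)[:k]
    sims = np.array([run_hbv(donor_parameters[i], forcing) for i in nearest])
    return sims.mean(axis=0)            # ensemble mean over the k donors

cell = rng.normal(size=n_attributes)
forcing = rng.gamma(2.0, 1.5, size=n_days)
q_sim = regionalised_flow(cell, forcing)
print(q_sim[:5])
```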

  15. Bayesian parameter estimation for nonlinear modelling of biological pathways

    Directory of Open Access Journals (Sweden)

    Ghasemi Omid

    2011-12-01

    Full Text Available Abstract Background The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of its high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. Results We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly
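
    A minimal Metropolis sampler for two Hill-equation parameters, with the ODE discretised by a fixed-step Runge-Kutta scheme as described above, is sketched below; the model, priors and noise level are illustrative rather than those of the LV/MI pathway study.

```python
# Metropolis MCMC for the Hill parameters (Km, n) of a single ODE,
# dy/dt = Vmax * u^n / (Km^n + u^n) - deg * y, discretised with RK4.
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, t, u):
    km, n = theta
    vmax, deg, y = 1.0, 0.3, 0.0
    dt = t[1] - t[0]
    out = []
    for k in range(len(t)):
        f = lambda yy: vmax * u[k] ** n / (km ** n + u[k] ** n) - deg * yy
        k1 = f(y); k2 = f(y + 0.5 * dt * k1); k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y)
    return np.array(out)

t = np.linspace(0, 10, 50)
u = 1.0 + 0.5 * np.sin(t)                        # measured upstream signal (synthetic)
data = simulate((0.8, 2.0), t, u) + 0.02 * rng.standard_normal(len(t))
sigma = 0.02

def log_post(theta):
    if np.any(np.asarray(theta) <= 0) or theta[1] > 6:
        return -np.inf                           # flat prior on (0, inf) x (0, 6]
    resid = data - simulate(theta, t, u)
    return -0.5 * np.sum((resid / sigma) ** 2)

theta = np.array([1.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

chain = np.array(chain[1000:])                   # discard burn-in
print("posterior means:", chain.mean(axis=0))
```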

  16. Mirror symmetry for two-parameter models, 1

    CERN Document Server

    Candelas, Philip; Font, A; Katz, S; Morrison, Douglas Robert Ogston; Candelas, Philip; Ossa, Xenia de la; Font, Anamaria; Katz, Sheldon; Morrison, David R.

    1994-01-01

    We study, by means of mirror symmetry, the quantum geometry of the Kähler-class parameters of a number of Calabi-Yau manifolds that have $b_{11}=2$. Our main interest lies in the structure of the moduli space and in the loci corresponding to singular models. This structure is considerably richer when there are two parameters than in the various one-parameter models that have been studied hitherto. We describe the intrinsic structure of the point in the (compactification of the) moduli space that corresponds to the large complex structure or classical limit. The instanton expansions are of interest owing to the fact that some of the instantons belong to families with continuous parameters. We compute the Yukawa couplings and their expansions in terms of instantons of genus zero. By making use of recent results of Bershadsky et al. we compute also the instanton numbers for instantons of genus one. For particular values of the parameters the models become birational to certain models with one parameter. The co...

  17. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    2007-01-01

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogenous or stratified soil with focus on the horizontal sliding and rocking. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines...

  18. Muscle parameters for musculoskeletal modelling of the human neck

    NARCIS (Netherlands)

    Borst, J.; Forbes, P.A.; Happee, R.; Veeger, H.E.J.

    2011-01-01

    Background: To study normal or pathological neuromuscular control, a musculoskeletal model of the neck has great potential but a complete and consistent anatomical dataset which comprises the muscle geometry parameters to construct such a model is not yet available. Methods: A dissection experiment

  20. Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies

    Science.gov (United States)

    Smith, Carrie E.; Cribbie, Robert A.

    2013-01-01

    When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…

  2. Geometry parameters for musculoskeletal modelling of the shoulder system

    NARCIS (Netherlands)

    Van der Helm, F C; Veeger, DirkJan (H. E. J.); Pronk, G M; Van der Woude, L H; Rozendal, R H

    1992-01-01

    A dynamical finite-element model of the shoulder mechanism consisting of thorax, clavicula, scapula and humerus is outlined. The parameters needed for the model are obtained in a cadaver experiment consisting of both shoulders of seven cadavers. In this paper, in particular, the derivation of geomet

  3. Precise correction to parameter ρ in the littlest Higgs model

    Institute of Scientific and Technical Information of China (English)

    Farshid Tabbak; F.Farnoudi

    2008-01-01

    In this paper the tree-level violation of the weak isospin parameter ρ in the framework of the littlest Higgs model is studied. The potentially large deviation from the standard model prediction for ρ in terms of the littlest Higgs model parameters is calculated. The maximum value of ρ for f = 1 TeV, c = 0.05, c' = 0.05 and v' = 1.5 GeV is ρ = 1.2973, which represents a large enhancement over the SM.

  4. Comparative Analysis of Visco-elastic Models with Variable Parameters

    Directory of Open Access Journals (Sweden)

    Silviu Nastac

    2010-01-01

    Full Text Available The paper presents a theoretical comparative study of the computational behaviour of vibration isolation elements based on viscous and elastic models with variable parameters. The change of the elastic and viscous parameters can be produced by natural degradation over time or by heating developed within the elements during their working cycle. Both linear and non-linear numerical viscous and elastic models, and their combinations, were considered. The results show the importance of tuning the numerical model to the real behaviour, such as the linearity of the characteristics and the essential parameters for damping and rigidity. Multiple comparisons between linear and non-linear simulation cases establish the basis for numerical model optimization with respect to mathematical complexity vs. result reliability.

  5. Improvement of Continuous Hydrologic Models and HMS SMA Parameters Reduction

    Science.gov (United States)

    Rezaeian Zadeh, Mehdi; Zia Hosseinipour, E.; Abghari, Hirad; Nikian, Ashkan; Shaeri Karimi, Sara; Moradzadeh Azar, Foad

    2010-05-01

    Hydrological models can help us to predict stream flows and associated runoff volumes of rainfall events within a watershed. There are many different reasons why we need to model the rainfall-runoff processes of a watershed. However, the main reason is the limitation of hydrological measurement techniques and the costs of data collection at a fine scale. Generally, we are not able to measure everything we would like to know about a given hydrological system. This is particularly the case for ungauged catchments. Since the ultimate aim of prediction using models is to improve decision-making about a hydrological problem, having a robust and efficient modeling tool becomes an important factor. Among several hydrologic modeling approaches, continuous simulation gives the best predictions because it can model dry and wet conditions during a long-term period. Continuous hydrologic models, unlike event-based models, account for a watershed's soil moisture balance over a long-term period and are suitable for simulating daily, monthly, and seasonal streamflows. In this paper, we describe a soil moisture accounting (SMA) algorithm added to the hydrologic modeling system (HEC-HMS) computer program. As is well known in the hydrologic modeling community, one of the ways of improving a model's utility is the reduction of input parameters. The enhanced model developed in this study is applied to the Khosrow Shirin Watershed, located in the north-west part of Fars Province in Iran, a data-limited watershed. The HMS SMA algorithm divides the potential path of rainfall onto a watershed into five zones. The results showed that the output of HMS SMA is insensitive to the variation of many parameters such as soil storage and soil percolation rate. The study's objective is to remove insensitive parameters from the model input using multi-objective sensitivity analysis. Keywords: Continuous Hydrologic Modeling, HMS SMA, Multi-objective sensitivity analysis, SMA Parameters

  6. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    Full Text Available A common problem in dynamic systems is to determine parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software lacks generality, while other packages are not easy to use. A user-interactive parameter estimation package was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software (PARES) has been developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach proved to provide good agreement between predicted and observed data with relatively little computing time and few iterations.
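
    PARES itself is a MATLAB tool; the Python sketch below only illustrates the integration-based least-squares idea it implements: the ODE model is re-integrated at each iteration and its parameters are adjusted to minimize residuals against measured data. The first-order kinetics and all numerical values are assumed for illustration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    # Assumed toy kinetics: A -> B with rate constant k; fit k and the initial amount A0.
    def simulate(params, t):
        k, a0 = params
        sol = solve_ivp(lambda t, a: -k * a, (t[0], t[-1]), [a0], t_eval=t)
        return sol.y[0]

    t_obs = np.linspace(0.0, 10.0, 25)
    true = np.array([0.35, 2.0])
    rng = np.random.default_rng(1)
    y_obs = simulate(true, t_obs) + rng.normal(0.0, 0.02, t_obs.size)   # noisy "measurements"

    def residuals(params):
        return simulate(params, t_obs) - y_obs   # the ODE is re-integrated at every iteration

    fit = least_squares(residuals, x0=[0.1, 1.0], bounds=([0.0, 0.0], [5.0, 10.0]))
    print("estimated k, A0:", fit.x)
    ```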

  7. Deep brain stimulation of the posterior hypothalamus activates the histaminergic system to exert antiepileptic effect in rat pentylenetetrazol model.

    Science.gov (United States)

    Nishida, Namiko; Huang, Zhi-Li; Mikuni, Nobuhiro; Miura, Yoshiki; Urade, Yoshihiro; Hashimoto, Nobuo

    2007-05-01

    Deep brain stimulation (DBS) is a promising therapy for intractable epilepsy, yet the optimum target and underlying mechanism remain controversial. We used the rat pentylenetetrazol (PTZ) seizure model to evaluate the effectiveness of DBS at three targets: two known to be critical for arousal, the histaminergic tuberomammillary nucleus (TMN) and the orexin/hypocretinergic perifornical area (PFN), and the anterior thalamic nuclei (ATH) now in clinical trial. TMN stimulation provided the strongest protection against seizures, and PFN stimulation elicited a moderate effect yet was accompanied by abnormal behavior in 25% of subjects, while ATH stimulation aggravated the seizures. Power density analysis showed EEG desynchronization after DBS of the TMN and PFN, while DBS of the ATH had no effect at the same stimulation intensity. EEG desynchronization after TMN stimulation was inhibited in a dose-dependent manner by pyrilamine, a histamine H(1) receptor selective antagonist, while the effect of PFN stimulation was inhibited even at a low dose. In parallel, in vivo microdialysis revealed a prominent increase of histamine release in the frontal cortex after TMN stimulation, a moderate increase with PFN stimulation and none with ATH stimulation. Furthermore, the antiepileptic effect of DBS of the TMN was also blocked by an H(1) receptor antagonist. This study clearly indicates that EEG desynchronization and activation of the histaminergic system contributed to the antiepileptic effects of DBS of the posterior hypothalamus.

  8. Condition Parameter Modeling for Anomaly Detection in Wind Turbines

    Directory of Open Access Journals (Sweden)

    Yonglong Yan

    2014-05-01

    Full Text Available Data collected from the supervisory control and data acquisition (SCADA) system, used widely in wind farms to obtain operational and condition information about wind turbines (WTs), is of great significance for anomaly detection in wind turbines. The paper presents a novel model for wind turbine anomaly detection based mainly on SCADA data and a back-propagation neural network (BPNN) for automatic selection of the condition parameters. The SCADA data sets are determined through analysis of the cumulative probability distribution of wind speed and the relationship between output power and wind speed. The automatic BPNN-based parameter selection reduces redundant parameters for anomaly detection in wind turbines. Through investigation of cases of WT faults, the validity of the automatic parameter-selection-based model for WT anomaly detection is verified.

  9. Parameter Estimation of Photovoltaic Models via Cuckoo Search

    Directory of Open Access Journals (Sweden)

    Jieming Ma

    2013-01-01

    Full Text Available Since conventional methods are incapable of estimating the parameters of Photovoltaic (PV) models with high accuracy, bioinspired algorithms have attracted significant attention in the last decade. Cuckoo Search (CS) was developed, inspired by the brood parasitism of some cuckoo species in combination with Lévy flight behavior. In this paper, a CS-based parameter estimation method is proposed to extract the parameters of single-diode models for commercial PV generators. Simulation results and experimental data show that the CS algorithm is capable of obtaining all the parameters with extremely high accuracy, reflected in a low Root-Mean-Squared-Error (RMSE) value. The proposed method outperforms the other algorithms applied in this study.
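
    A sketch of the RMSE objective that CS (or any other optimizer) minimizes during parameter extraction, assuming the standard five-parameter single-diode equation I = Iph − I0·(exp((V + I·Rs)/(n·Vt)) − 1) − (V + I·Rs)/Rsh. The implicit equation is solved at each voltage with a bracketed root finder; the "measured" points and parameter values are synthetic assumptions.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    VT = 0.0259  # thermal voltage at ~300 K [V]

    def diode_current(V, Iph, I0, Rs, Rsh, n):
        """Solve the implicit single-diode equation for the current I at a given voltage V."""
        def f(I):
            return (Iph - I0 * (np.exp((V + I * Rs) / (n * VT)) - 1.0)
                    - (V + I * Rs) / Rsh - I)
        return brentq(f, -1.0, Iph + 1.0)

    def rmse(params, V_meas, I_meas):
        """RMSE objective minimized during parameter extraction."""
        Iph, I0, Rs, Rsh, n = params
        I_model = np.array([diode_current(v, Iph, I0, Rs, Rsh, n) for v in V_meas])
        return np.sqrt(np.mean((I_model - I_meas) ** 2))

    # Illustrative "measured" I-V points generated from assumed parameter values
    true = (3.0, 1e-7, 0.02, 50.0, 1.3)
    V_meas = np.linspace(0.0, 0.6, 20)
    I_meas = np.array([diode_current(v, *true) for v in V_meas])

    print("RMSE at true parameters :", rmse(true, V_meas, I_meas))
    print("RMSE at perturbed guess :", rmse((2.8, 2e-7, 0.03, 40.0, 1.4), V_meas, I_meas))
    ```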

  10. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only the short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions, together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.

  11. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    Science.gov (United States)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane-of-sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of the conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.

  12. Estimation of the parameters of ETAS models by Simulated Annealing

    OpenAIRE

    Lombardi, Anna Maria

    2015-01-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.

  13. CADLIVE optimizer: web-based parameter estimation for dynamic models

    Directory of Open Access Journals (Sweden)

    Inoue Kentaro

    2012-08-01

    Full Text Available Abstract Computer simulation has been an important technique to capture the dynamics of biochemical networks. In most networks, however, few kinetic parameters have been measured in vivo because of experimental complexity. We develop a kinetic parameter estimation system, named the CADLIVE Optimizer, which comprises genetic algorithms-based solvers with a graphical user interface. This optimizer is integrated into the CADLIVE Dynamic Simulator to attain efficient simulation for dynamic models.

  14. Reference physiological parameters for pharmacodynamic modeling of liver cancer

    Energy Technology Data Exchange (ETDEWEB)

    Travis, C.C.; Arms, A.D.

    1988-01-01

    This document presents a compilation of measured values for physiological parameters used in pharmacodynamic modeling of liver cancer. The physiological parameters include body weight, liver weight, the liver weight/body weight ratio, and the number of hepatocytes. Reference values for use in risk assessment are given for each of the physiological parameters based on analyses of valid measurements taken from the literature and other reliable sources. The proposed reference values for rodents include sex-specific measurements for B6C3F1 mice and Fischer 344/N, Sprague-Dawley, and Wistar rats. Reference values are also provided for humans. 102 refs., 65 tabs.

  15. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders

    1990-01-01

    In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However, the uncertainty of the parameters is assessed here by a simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore...

  16. X-Parameter Based Modelling of Polar Modulated Power Amplifiers

    DEFF Research Database (Denmark)

    Wang, Yelin; Nielsen, Troels Studsgaard; Sira, Daniel

    2013-01-01

    X-parameters are developed as an extension of S-parameters capable of modelling non-linear devices driven by large signals. They are suitable for devices having only radio frequency (RF) and DC ports. In a polar power amplifier (PA), the phase and envelope of the input modulated signal are applied at separate ports, and the envelope port is neither an RF nor a DC port. As a result, X-parameters may fail to characterise the effect of the envelope port excitation and consequently the polar PA. This study introduces a solution to the problem for a commercial polar PA. In this solution, the RF-phase path...

  17. A Bayesian framework for parameter estimation in dynamical models.

    Directory of Open Access Journals (Sweden)

    Flávio Codeço Coelho

    Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
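
    A minimal sketch in the spirit of the framework described above (not the authors' implementation): a random-walk Metropolis sampler for the transmission and recovery rates of a discrete-time SIR-like model, with uniform priors and a Poisson likelihood on synthetic weekly incidence data. All values are assumed for illustration.

    ```python
    import numpy as np

    def sir_incidence(beta, gamma, n_weeks=30, N=1e6, i0=10.0):
        """Discrete-time SIR-like model; returns expected weekly new infections."""
        S, I = N - i0, i0
        inc = []
        for _ in range(n_weeks):
            new_inf = min(beta * S * I / N, S)     # new infections cannot exceed susceptibles
            new_rec = gamma * I
            S = S - new_inf
            I = max(I + new_inf - new_rec, 0.0)
            inc.append(max(new_inf, 1e-9))         # keep the Poisson mean strictly positive
        return np.array(inc)

    rng = np.random.default_rng(2)
    obs = rng.poisson(sir_incidence(1.6, 0.9))     # synthetic "observed" weekly incidence

    def log_post(theta):
        beta, gamma = theta
        if not (0.0 < beta < 3.0 and 0.0 < gamma < 1.0):   # uniform priors as hard bounds
            return -np.inf
        lam = sir_incidence(beta, gamma)
        return float(np.sum(obs * np.log(lam) - lam))      # Poisson log-likelihood (up to a constant)

    # Random-walk Metropolis sampler
    theta, lp = np.array([1.0, 0.5]), log_post([1.0, 0.5])
    samples = []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, 0.05, size=2)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())

    samples = np.array(samples[5000:])             # discard burn-in
    print("posterior mean (beta, gamma):", samples.mean(axis=0))
    ```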

  18. Modelling of Water Turbidity Parameters in a Water Treatment Plant

    Directory of Open Access Journals (Sweden)

    A. S. KOVO

    2005-01-01

    Full Text Available The high cost of chemical analysis of water has necessitated various investigations into finding alternative methods of determining potable water quality. This paper is aimed at modelling the turbidity value as a water quality parameter. Mathematical models for turbidity removal were developed based on the relationships between water turbidity and other water criteria. Results showed that the turbidity of water is the cumulative effect of the individual parameters/factors affecting the system. A model equation for the evaluation and prediction of a clarifier's performance was developed: T = T0·(−1.36729 + 0.037101·10^pH + 0.048928·t + 0.00741387·alk). The developed model will aid the predictive assessment of water treatment plant performance. The limitations of the model result from the insufficient number of variables considered during its conceptualization.

  19. Simultaneous estimation of parameters in the bivariate Emax model.

    Science.gov (United States)

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.

  20. Shape parameter estimate for a glottal model without time position

    OpenAIRE

    Degottex, Gilles; Roebel, Axel; Rodet, Xavier

    2009-01-01

    From a recorded speech signal, we propose to estimate a shape parameter of a glottal model without estimating its time position. Indeed, the literature usually proposes to estimate the time position first (e.g. by detecting Glottal Closure Instants). The vocal-tract filter estimate is expressed as a minimum-phase envelope estimation after removing the glottal model and a standard lip radiation model. Since this filter is mainly b...

  1. Light-Front Spin-1 Model: Parameters Dependence

    CERN Document Server

    Mello, Clayton S; de Melo, J P B C; Frederico, T

    2015-01-01

    We study the structure of the $\\rho$-meson within a light-front model with constituent quark degrees of freedom. We calculate electroweak static observables: magnetic and quadrupole moments, decay constant and charge radius. The prescription used to compute the electroweak quantities is free of zero modes, which makes the calculation implicitly covariant. We compare the results of our model with other ones found in the literature. Our model parameters give a decay constant close to the experimental one.

  2. Cosmological Models with Variable Deceleration Parameter in Lyra's Manifold

    CERN Document Server

    Pradhan, A; Singh, C B

    2006-01-01

    FRW models of the universe have been studied in the cosmological theory based on Lyra's manifold. A new class of exact solutions has been obtained by considering a time-dependent displacement field for a variable deceleration parameter, from which three models of the universe are derived: (i) exponential, (ii) polynomial and (iii) sinusoidal form, respectively. The behaviour of these models of the universe is also discussed. Finally, some possibilities for further problems and their investigation are pointed out.

  3. Identification of slow molecular order parameters for Markov model construction

    CERN Document Server

    Perez-Hernandez, Guillermo; Giorgino, Toni; de Fabritiis, Gianni; Noé, Frank

    2013-01-01

    A goal in the kinetic characterization of a macromolecular system is the description of its slow relaxation processes, involving (i) identification of the structural changes involved in these processes, and (ii) estimation of the rates or timescales at which these slow processes occur. Most of the approaches to this task, including Markov models, Master-equation models, and kinetic network models, start by discretizing the high-dimensional state space and then characterize relaxation processes in terms of the eigenvectors and eigenvalues of a discrete transition matrix. The practical success of such an approach depends very much on the ability to finely discretize the slow order parameters. How can this task be achieved in a high-dimensional configuration space without relying on subjective guesses of the slow order parameters? In this paper, we use the variational principle of conformation dynamics to derive an optimal way of identifying the "slow subspace" of a large set of prior order parameters - either g...

  4. Solar Model Parameters and Direct Measurements of Solar Neutrino Fluxes

    CERN Document Server

    Bandyopadhyay, A; Goswami, S; Petcov, S T; Bandyopadhyay, Abhijit; Choubey, Sandhya; Goswami, Srubabati

    2006-01-01

    We explore a novel possibility of determining the solar model parameters, which serve as input in the calculations of the solar neutrino fluxes, by exploiting the data from direct measurements of the fluxes. More specifically, we use the rather precise value of the $^8B$ neutrino flux, $\\phi_B$ obtained from the global analysis of the solar neutrino and KamLAND data, to derive constraints on each of the solar model parameters on which $\\phi_B$ depends. We also use more precise values of $^7Be$ and $pp$ fluxes as can be obtained from future prospective data and discuss whether such measurements can help in reducing the uncertainties of one or more input parameters of the Standard Solar Model.

  5. IP-Sat: Impact-Parameter dependent Saturation model; revised

    CERN Document Server

    Rezaeian, Amir H; Van de Klundert, Merijn; Venugopalan, Raju

    2013-01-01

    In this talk, we present a global analysis of available small-x data on inclusive DIS and exclusive diffractive processes, including the latest data from the combined HERA analysis on reduced cross sections within the Impact-Parameter dependent Saturation (IP-Sat) Model. The impact-parameter dependence of dipole amplitude is crucial in order to have a unified description of both inclusive and exclusive diffractive processes. With the parameters of model fixed via a fit to the high-precision reduced cross-section, we compare model predictions to data for the structure functions, the longitudinal structure function, the charm structure function, exclusive vector mesons production and Deeply Virtual Compton Scattering (DVCS). Excellent agreement is obtained for the processes considered at small x in a wide range of Q^2.

  6. QCD-inspired determination of NJL model parameters

    CERN Document Server

    Springer, Paul; Rechenberger, Stefan; Rennecke, Fabian

    2016-01-01

    The QCD phase diagram at finite temperature and density has attracted considerable interest over many decades now, not least because of its relevance for a better understanding of heavy-ion collision experiments. Models provide some insight into the QCD phase structure but usually rely on various parameters. Based on renormalization group arguments, we discuss how the parameters of QCD low-energy models can be determined from the fundamental theory of the strong interaction. We particularly focus on a determination of the temperature dependence of these parameters in this work and comment on the effect of a finite quark chemical potential. We present first results and argue that our findings can be used to improve the predictive power of future model calculations.

  7. SPOTting model parameters using a ready-made Python package

    Science.gov (United States)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, like the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables testing and comparing different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for
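
    The sketch below does not use the SPOT package API; it only illustrates the kind of workflow such a toolbox automates: drawing parameter sets from prior distributions (here a simple Latin-hypercube-like stratified uniform sample), evaluating a model response (the Rosenbrock benchmark as a stand-in), and ranking parameter sets by an objective value.

    ```python
    import numpy as np

    def rosenbrock(x, y):
        """Stand-in 'model': the Rosenbrock function used as a benchmark response surface."""
        return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2

    rng = np.random.default_rng(3)
    n = 5000

    # Stratified (Latin-hypercube-like) uniform sampling of the two parameters
    u = (np.arange(n) + rng.random(n)) / n
    x = -2.0 + 4.0 * rng.permutation(u)     # prior: Uniform(-2, 2)
    y = -1.0 + 4.0 * rng.permutation(u)     # prior: Uniform(-1, 3)

    # Rank parameter sets by the objective value (lower Rosenbrock value = better fit)
    obj = rosenbrock(x, y)
    best = np.argsort(obj)[:5]
    for i in best:
        print(f"x = {x[i]:+.3f}, y = {y[i]:+.3f}, objective = {obj[i]:.4f}")
    ```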

  8. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world. They have resulted in a lack of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current base of parameters of the models of generating units, containing the models of synchronous generators, is necessary. The paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes the filter system used for filtering the noisy measurement waveforms. Calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  9. Modelling of intermittent microwave convective drying: parameter sensitivity

    Directory of Open Access Journals (Sweden)

    Zhang Zhijun

    2017-06-01

    Full Text Available The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Its parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis for the microwave power level shows that the ambient temperature, the effective gas diffusivity, and the evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity within a ±20% value change until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.

  10. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    Science.gov (United States)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.

  11. Modelling of intermittent microwave convective drying: parameter sensitivity

    Science.gov (United States)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Its parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis for the microwave power level shows that the ambient temperature, the effective gas diffusivity, and the evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal parameter sensitivity within a ±20% value change until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.

  12. Comparing spatial and temporal transferability of hydrological model parameters

    Science.gov (United States)

    Patil, Sopan; Stieglitz, Marc

    2015-04-01

    Operational use of hydrological models requires the transfer of calibrated parameters either in time (for streamflow forecasting) or space (for prediction at ungauged catchments) or both. Although the effects of spatial and temporal parameter transfer on catchment streamflow predictions have been well studied individually, a direct comparison of these approaches is much less documented. In our view, such comparison is especially pertinent in the context of increasing appeal and popularity of the "trading space for time" approaches that are proposed for assessing the hydrological implications of anthropogenic climate change. Here, we compare three different schemes of parameter transfer, viz., temporal, spatial, and spatiotemporal, using a spatially lumped hydrological model called EXP-HYDRO at 294 catchments across the continental United States. Results show that the temporal parameter transfer scheme performs best, with lowest decline in prediction performance (median decline of 4.2%) as measured using the Kling-Gupta efficiency metric. More interestingly, negligible difference in prediction performance is observed between the spatial and spatiotemporal parameter transfer schemes (median decline of 12.4% and 13.9% respectively). We further demonstrate that the superiority of temporal parameter transfer scheme is preserved even when: (1) spatial distance between donor and receiver catchments is reduced, or (2) temporal lag between calibration and validation periods is increased. Nonetheless, increase in the temporal lag between calibration and validation periods reduces the overall performance gap between the three parameter transfer schemes. Results suggest that spatiotemporal transfer of hydrological model parameters has the potential to be a viable option for climate change related hydrological studies, as envisioned in the "trading space for time" framework. However, further research is still needed to explore the relationship between spatial and temporal
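
    The Kling-Gupta efficiency used above to quantify the decline in prediction performance can be computed as sketched below; the observed and simulated series here are synthetic stand-ins for calibration- and transfer-period simulations.

    ```python
    import numpy as np

    def kge(sim, obs):
        """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
        r = np.corrcoef(sim, obs)[0, 1]
        alpha = np.std(sim) / np.std(obs)       # variability ratio
        beta = np.mean(sim) / np.mean(obs)      # bias ratio
        return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

    rng = np.random.default_rng(4)
    obs = rng.gamma(2.0, 3.0, size=365)                    # synthetic observed daily flow
    sim_cal = obs + rng.normal(0.0, 1.0, obs.size)         # "calibrated-parameter" simulation
    sim_tra = 0.9 * obs + rng.normal(0.0, 2.0, obs.size)   # "transferred-parameter" simulation

    kge_cal, kge_tra = kge(sim_cal, obs), kge(sim_tra, obs)
    decline = 100.0 * (kge_cal - kge_tra) / kge_cal        # percentage decline in performance
    print(f"KGE calibration: {kge_cal:.3f}, KGE transfer: {kge_tra:.3f}, decline: {decline:.1f}%")
    ```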

  13. Estimation of the parameters of ETAS models by Simulated Annealing

    Science.gov (United States)

    Lombardi, Anna Maria

    2015-02-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
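
    A minimal sketch of likelihood maximization by simulated annealing, not the author's algorithm: SciPy's dual_annealing minimizes the negative log-likelihood of a point process. The full ETAS intensity is replaced here by a single modified-Omori aftershock sequence, and all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing

    T = 100.0
    true_K, true_c, true_p = 200.0, 0.1, 1.2

    def cumulative(t, K, c, p):
        """Integrated modified-Omori intensity from 0 to t (assumes p != 1)."""
        return K * ((t + c) ** (1.0 - p) - c ** (1.0 - p)) / (1.0 - p)

    # Simulate one synthetic aftershock sequence by inverse-transform sampling
    rng = np.random.default_rng(5)
    n_events = rng.poisson(cumulative(T, true_K, true_c, true_p))
    u = np.sort(rng.random(n_events))
    times = (true_c ** (1.0 - true_p)
             + u * cumulative(T, true_K, true_c, true_p) * (1.0 - true_p) / true_K
             ) ** (1.0 / (1.0 - true_p)) - true_c

    def neg_loglik(theta):
        K, c, p = theta
        lam = K / (times + c) ** p
        return -(np.sum(np.log(lam)) - cumulative(T, K, c, p))

    bounds = [(1.0, 1000.0), (1e-3, 2.0), (0.5, 2.5)]
    result = dual_annealing(neg_loglik, bounds, seed=6)
    print("estimated (K, c, p):", result.x)
    ```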

  14. J-A Hysteresis Model Parameters Estimation using GA

    Directory of Open Access Journals (Sweden)

    Bogomir Zidaric

    2005-01-01

    Full Text Available This paper presents Jiles and Atherton (J-A) hysteresis model parameter estimation for a soft magnetic composite (SMC) material. The calculation of the Jiles and Atherton hysteresis model parameters is based on experimental data and genetic algorithms (GA). Genetic algorithms operate in a given area of possible solutions. Finding the best solution of a problem in a wide area of possible solutions is uncertain. A new approach to the use of genetic algorithms is proposed to overcome this uncertainty. The basis of this approach is a genetic algorithm built within another genetic algorithm.

  15. A new estimate of the parameters in linear mixed models

    Institute of Scientific and Technical Information of China (English)

    王松桂; 尹素菊

    2002-01-01

    In linear mixed models, there are two kinds of unknown parameters: one is the fixed effect, the other is the variance component. In this paper, new estimates of these parameters, called the spectral decomposition estimates, are proposed. Some important statistical properties of the new estimates are established, in particular the linearity of the estimates of the fixed effects together with many statistical optimality properties. The new method is applied to two important models used in economics, finance, and mechanical fields. All estimates obtained have good statistical and practical meaning.

  16. Models wagging the dog: are circuits constructed with disparate parameters?

    Science.gov (United States)

    Nowotny, Thomas; Szücs, Attila; Levi, Rafael; Selverston, Allen I

    2007-08-01

    In a recent article, Prinz, Bucher, and Marder (2004) addressed the fundamental question of whether neural systems are built with a fixed blueprint of tightly controlled parameters or in a way in which properties can vary largely from one individual to another, using a database modeling approach. Here, we examine the main conclusion that neural circuits indeed are built with largely varying parameters in the light of our own experimental and modeling observations. We critically discuss the experimental and theoretical evidence, including the general adequacy of database approaches for questions of this kind, and come to the conclusion that the last word for this fundamental question has not yet been spoken.

  17. Do land parameters matter in large-scale hydrological modelling?

    Science.gov (United States)

    Gudmundsson, Lukas; Seneviratne, Sonia I.

    2013-04-01

    Many of the most pending issues in large-scale hydrology are concerned with predicting hydrological variability at ungauged locations. However, current-generation hydrological and land surface models that are used for their estimation suffer from large uncertainties. These models rely on mathematical approximations of the physical system as well as on mapped values of land parameters (e.g. topography, soil types, land cover) to predict hydrological variables (e.g. evapotranspiration, soil moisture, stream flow) as a function of atmospheric forcing (e.g. precipitation, temperature, humidity). Despite considerable progress in recent years, it remains unclear whether better estimates of land parameters can improve predictions - or - if a refinement of model physics is necessary. To approach this question we suggest scrutinizing our perception of hydrological systems by confronting it with the radical assumption that hydrological variability at any location in space depends on past and present atmospheric forcing only, and not on location-specific land parameters. This so called "Constant Land Parameter Hypothesis (CLPH)" assumes that variables like runoff can be predicted without taking location specific factors such as topography or soil types into account. We demonstrate, using a modern statistical tool, that monthly runoff in Europe can be skilfully estimated using atmospheric forcing alone, without accounting for locally varying land parameters. The resulting runoff estimates are used to benchmark state-of-the-art process models. These are found to have inferior performance, despite their explicit process representation, which accounts for locally varying land parameters. This suggests that progress in the theory of hydrological systems is likely to yield larger improvements in model performance than more precise land parameter estimates. The results also question the current modelling paradigm that is dominated by the attempt to account for locally varying land

  18. Model Validation for Shipboard Power Cables Using Scattering Parameters

    Institute of Scientific and Technical Information of China (English)

    Lukas Graber; Diomar Infante; Michael Steurer; William W. Brey

    2011-01-01

    Careful analysis of transients in shipboard power systems is important to achieve long lifetimes of the components in future all-electric ships. In order to accomplish results with high accuracy, it is recommended to validate cable models, as they have significant influence on the amplitude and frequency spectrum of voltage transients. The authors propose comparison of model and measurement using scattering parameters. They can be easily obtained from measurement and simulation and deliver broadband information about the accuracy of the model. The measurement can be performed using a vector network analyzer. The process to extract scattering parameters from simulation models is explained in detail. Three different simulation models of a 5 kV XLPE power cable have been validated. The chosen approach delivers an efficient tool to quickly estimate the quality of a model.

  19. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  20. Considerations for parameter optimization and sensitivity in climate models.

    Science.gov (United States)

    Neelin, J David; Bracco, Annalisa; Luo, Hao; McWilliams, James C; Meyerson, Joyce E

    2010-12-14

    Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention--here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models.

  1. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders

    In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However, the uncertainty of the parameters is assessed here by a simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore...

  2. Iterative integral parameter identification of a respiratory mechanics model

    Directory of Open Access Journals (Sweden)

    Schranz Christoph

    2012-07-01

    Full Text Available Abstract Background: Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. Methods: An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. Results: The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimates, and converged successfully in each case tested. Conclusion: These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
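
    A sketch of the integral idea on a simpler first-order model (Paw = E·V + R·Q + P0) with synthetic waveforms: integrating the model equation before regression reduces sensitivity to measurement noise, and the parameters then follow from ordinary least squares. The second-order model and the iteration scheme of the paper are not reproduced, and all numerical values are assumed.

    ```python
    import numpy as np

    # Synthetic ventilation waveforms for one breath (assumed values, not patient data)
    fs = 100.0                              # sampling rate [Hz]
    t = np.arange(0, 2.0, 1.0 / fs)         # 2 s breath
    Q = np.where(t < 1.0, 0.5, -0.5)        # square-wave flow [L/s]: inspiration then expiration
    V = np.cumsum(Q) / fs                   # volume [L] by integrating the flow
    E_true, R_true, P0_true = 25.0, 8.0, 5.0
    Paw = E_true * V + R_true * Q + P0_true                        # airway pressure [cmH2O]
    Paw = Paw + np.random.default_rng(7).normal(0.0, 0.5, Paw.size)  # measurement noise

    # Integral formulation: integrate the model equation to suppress noise, then solve
    # cumint(Paw) = E*cumint(V) + R*cumint(Q) + P0*t by ordinary least squares.
    def cumint(x):
        return np.cumsum(x) / fs

    A = np.column_stack([cumint(V), cumint(Q), t])
    b = cumint(Paw)
    (E_hat, R_hat, P0_hat), *_ = np.linalg.lstsq(A, b, rcond=None)
    print(f"E = {E_hat:.1f} cmH2O/L, R = {R_hat:.1f} cmH2O.s/L, P0 = {P0_hat:.1f} cmH2O")
    ```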

  3. Estimation of growth parameters using a nonlinear mixed Gompertz model.

    Science.gov (United States)

    Wang, Z; Zuidhof, M J

    2004-06-01

    In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
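
    A sketch of a fixed-effects Gompertz fit to synthetic body-weight data using non-linear least squares; the mixed-model (random-effect) extension discussed in the paper is not reproduced, and the parameter values and noise level are assumed.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, w_max, k, t_infl):
        """Gompertz growth curve: asymptotic weight, rate constant, inflection age."""
        return w_max * np.exp(-np.exp(-k * (t - t_infl)))

    # Synthetic body-weight data (assumed values; age in days on the x-axis)
    rng = np.random.default_rng(11)
    age = np.arange(0, 57, 7, dtype=float)
    true = (3200.0, 0.07, 30.0)
    bw = gompertz(age, *true) * (1.0 + rng.normal(0.0, 0.03, age.size))   # multiplicative noise

    popt, pcov = curve_fit(gompertz, age, bw, p0=[3000.0, 0.05, 25.0])
    print("estimated (Wmax, k, t_infl):", popt)
    print("standard errors:", np.sqrt(np.diag(pcov)))
    ```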

  4. Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series

    Science.gov (United States)

    Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik

    2016-06-01

    Open and free access to frequently acquired high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance a robust retrieval of biophysical parameters over the entire growing season at very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetic radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high resolution remote sensing. 30 Landsat 8 OLI scenes were available in our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the best explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their strong exploratory and predictive character. Variable importance measures allowed analysing the relation between the biophysical parameters and the spectral response, and the performance of the two approaches over the development of the plant stock. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85, RMSE = 0.11; LAI: R² = 0.64, RMSE = 0.9; and chlorophyll content (SPAD): R² = 0.80, RMSE = 4.9. Our results demonstrate the great potential of using machine-learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model
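
    A sketch of the random-forest regression workflow on synthetic band reflectances (not the study's Landsat or in-situ data): fit a RandomForestRegressor to predict LAI, then report R², RMSE and variable importances as in the abstract. The LAI-NDVI relation used to generate the data is an assumption.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    # Synthetic "spectral" predictors: red and NIR reflectance plus a derived NDVI feature
    rng = np.random.default_rng(8)
    n = 400
    red = rng.uniform(0.02, 0.15, n)
    nir = rng.uniform(0.2, 0.5, n)
    ndvi = (nir - red) / (nir + red)
    lai = 6.0 * ndvi + rng.normal(0.0, 0.4, n)          # assumed LAI-NDVI relation plus noise

    X = np.column_stack([red, nir, ndvi])
    X_tr, X_te, y_tr, y_te = train_test_split(X, lai, test_size=0.3, random_state=0)

    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(X_tr, y_tr)
    pred = rf.predict(X_te)

    print("R^2 :", r2_score(y_te, pred))
    print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
    print("variable importances (red, nir, ndvi):", rf.feature_importances_)
    ```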

  5. Joint Dynamics Modeling and Parameter Identification for Space Robot Applications

    Directory of Open Access Journals (Sweden)

    Adenilson R. da Silva

    2007-01-01

    Full Text Available Long-term mission identification and model validation for in-flight manipulator control system in almost zero gravity with hostile space environment are extremely important for robotic applications. In this paper, a robot joint mathematical model is developed where several nonlinearities have been taken into account. In order to identify all the required system parameters, an integrated identification strategy is derived. This strategy makes use of a robust version of least-squares procedure (LS for getting the initial conditions and a general nonlinear optimization method (MCS—multilevel coordinate search—algorithm to estimate the nonlinear parameters. The approach is applied to the intelligent robot joint (IRJ experiment that was developed at DLR for utilization opportunity on the International Space Station (ISS. The results using real and simulated measurements have shown that the developed algorithm and strategy have remarkable features in identifying all the parameters with good accuracy.

  6. Mathematical Modelling and Parameter Optimization of Pulsating Heat Pipes

    CERN Document Server

    Yang, Xin-She; Luan, Tao; Koziel, Slawomir

    2014-01-01

    Proper heat transfer management is important to key electronic components in microelectronic applications. Pulsating heat pipes (PHP) can be an efficient solution to such heat transfer problems. However, mathematical modelling of a PHP system is still very challenging, due to the complexity and multiphysics nature of the system. In this work, we present a simplified, two-phase heat transfer model, and our analysis shows that it can make good predictions about startup characteristics. Furthermore, by considering parameter estimation as a nonlinear constrained optimization problem, we have used the firefly algorithm to find parameter estimates efficiently. We have also demonstrated that it is possible to obtain good estimates of key parameters using very limited experimental data.

  7. The influences of model parameters on the characteristics of memristors

    Institute of Scientific and Technical Information of China (English)

    Zhou Jing; Huang Da

    2012-01-01

    As the fourth passive circuit component, a memristor is a nonlinear resistor that can "remember" the amount of charge passing through it. The characteristics of "remembering" the charge and of non-volatility make memristors great potential candidates in many fields. Nowadays, only a few groups have the ability to fabricate memristors, and most researchers study them by theoretical analysis and simulation. In this paper, we first analyse the theoretical basis and characteristics of memristors, then use a simulation program with integrated circuit emphasis as our tool to simulate the theoretical model of memristors and change the parameters in the model to see the influence of each parameter on the characteristics. Our work supplies researchers engaged in memristor-based circuits with advice on how to choose the proper parameters.

  8. Prediction of interest rate using CKLS model with stochastic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Ying, Khor Chia [Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Selangor (Malaysia); Hin, Pooi Ah [Sunway University Business School, No. 5, Jalan Universiti, Bandar Sunway, 47500 Subang Jaya, Selangor (Malaysia)

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated from the j-th window, defined as the set consisting of the observed interest rates at the j′-th time points with j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_{j+n} at the (j+n)-th time point via a four-dimensional conditional distribution derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_{j+n+1} of the interest rate at the next time point when the value r_{j+n} of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_{j+n+d} at the d-th (d ≥ 2) next time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have a better ability to cover the observed future interest rates when compared with those based on the model with fixed parameters.
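
    For reference, the CKLS short-rate dynamics dr = (α + β·r)dt + σ·r^γ dW can be simulated by an Euler-Maruyama scheme as sketched below; the parameter values are assumed, and the rolling-window re-estimation of stochastic parameters described in the paper is not reproduced.

    ```python
    import numpy as np

    # CKLS short-rate model: dr = (alpha + beta * r) dt + sigma * r**gamma dW
    alpha, beta, sigma, gamma = 0.005, -0.1, 0.06, 0.7    # assumed parameter values
    dt, n_steps, r0 = 1.0 / 252.0, 252 * 5, 0.03          # daily steps over 5 years

    rng = np.random.default_rng(9)
    r = np.empty(n_steps + 1)
    r[0] = r0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        drift = (alpha + beta * r[i]) * dt
        diffusion = sigma * max(r[i], 0.0) ** gamma * dw   # keep the diffusion term real-valued
        r[i + 1] = max(r[i] + drift + diffusion, 0.0)      # reflect at zero for numerical safety

    print("final rate:", r[-1])
    print("mean rate over path:", r.mean())
    ```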

  9. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.

    Science.gov (United States)

    Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W

    2015-07-23

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.

  10. Comparison of Parameter Estimation Methods for Transformer Weibull Lifetime Modelling

    Institute of Scientific and Technical Information of China (English)

    ZHOU Dan; LI Chengrong; WANG Zhongdong

    2013-01-01

    Two-parameter Weibull distribution is the most widely adopted lifetime model for power transformers. An appropriate parameter estimation method is essential to guarantee the accuracy of a derived Weibull lifetime model. Six popular parameter estimation methods (i.e. the maximum likelihood estimation method, two median rank regression methods, one regressing X on Y and the other regressing Y on X, the Kaplan-Meier method, the method based on the cumulative hazard plot, and Li's method) are reviewed and compared in order to find the optimal one for transformer Weibull lifetime modelling. The comparison took several different scenarios into consideration: 10 000 sets of lifetime data, each with a sampling size of 40 to 1 000 and a censoring rate of 90%, were obtained by Monte-Carlo simulation for each scenario. The scale and shape parameters of the Weibull distribution estimated by the six methods, as well as their mean value, median value and 90% confidence band, are obtained. Cross comparison of these results reveals that, among the six methods, the maximum likelihood method is the best one, since it provides the most accurate Weibull parameters, i.e. parameters having the smallest bias in both mean and median values, as well as the shortest length of the 90% confidence band. The maximum likelihood method is therefore recommended over the other methods for transformer Weibull lifetime modelling.
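
    A sketch contrasting maximum likelihood estimation with a median-rank-regression estimate on simulated, uncensored lifetimes; the censoring treatment and the Monte-Carlo scale of the study are omitted, and the true parameter values are assumed.

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    # Simulated transformer lifetimes from an assumed Weibull(shape=2.5, scale=40 years)
    true_shape, true_scale = 2.5, 40.0
    life = weibull_min.rvs(true_shape, scale=true_scale, size=200, random_state=10)

    # Maximum likelihood estimation (location fixed at zero for a two-parameter Weibull)
    shape_mle, _, scale_mle = weibull_min.fit(life, floc=0)

    # Median rank regression (regressing Y on X), for comparison
    t_sorted = np.sort(life)
    n = t_sorted.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)          # Bernard's median-rank approximation
    x, y = np.log(t_sorted), np.log(-np.log(1.0 - F))
    k_mrr, intercept = np.polyfit(x, y, 1)               # slope = shape parameter
    scale_mrr = np.exp(-intercept / k_mrr)

    print(f"MLE : shape = {shape_mle:.2f}, scale = {scale_mle:.1f}")
    print(f"MRR : shape = {k_mrr:.2f}, scale = {scale_mrr:.1f}")
    ```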

  11. Calculation of Thermodynamic Parameters for Freundlich and Temkin Isotherm Models

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zengqiang; ZHANG Yiping; et al.

    1999-01-01

    Derivation of the Freundlich and Temkin isotherm models from the kinetic adsorption/desorption equations was carried out to calculate their thermodynamic equilibrium constants. The calculation formulae of three thermodynamic parameters, the standard molar Gibbs free energy change, the standard molar enthalpy change and the standard molar entropy change, of isothermal adsorption processes for the Freundlich and Temkin isotherm models were deduced according to the relationship between the thermodynamic equilibrium constants and the temperature.
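
    A minimal sketch of the standard relations behind such calculations, assuming hypothetical equilibrium constants at a few temperatures: ΔG° = −RT ln K, with ΔH° and ΔS° obtained from a van't Hoff fit of ln K against 1/T.

    ```python
    import numpy as np

    R = 8.314  # J mol^-1 K^-1

    # Hypothetical equilibrium constants at several temperatures
    T = np.array([288.15, 298.15, 308.15, 318.15])   # K
    K = np.array([4.2, 3.1, 2.4, 1.9])               # dimensionless

    dG = -R * T * np.log(K)                          # standard molar Gibbs free energy change, J/mol

    # van't Hoff: ln K = -dH/R * (1/T) + dS/R
    slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
    dH = -slope * R                                  # standard molar enthalpy change, J/mol
    dS = intercept * R                               # standard molar entropy change, J/(mol K)

    print(dG.round(1), round(dH, 1), round(dS, 2))
    ```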

  12. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  13. Parabolic problems with parameters arising in evolution model for phytoremediation

    Science.gov (United States)

    Sahmurova, Aida; Shakhmurov, Veli

    2012-12-01

    Over the past few decades, efforts have been made to clean sites polluted by heavy metals such as chromium. One of the newer methods of removing metals from soil is phytoremediation, which uses plants to draw metals from the soil through their roots. This work develops a system of differential equations with parameters to model the plant-metal interaction of phytoremediation (see [1]).

  14. Lumped-parameter Model of a Bucket Foundation

    DEFF Research Database (Denmark)

    Andersen, Lars; Ibsen, Lars Bo; Liingaard, Morten

    2009-01-01

    As an alternative to gravity footings or pile foundations, offshore wind turbines at shallow water can be placed on a bucket foundation. The present analysis concerns the development of consistent lumped-parameter models for this type of foundation. The aim is to formulate a computationally effic...

  15. Improved parameter estimation for hydrological models using weighted object functions

    NARCIS (Netherlands)

    Stein, A.; Zaadnoordijk, W.J.

    1999-01-01

    This paper discusses the sensitivity of calibration of hydrological model parameters to different objective functions. Several functions are defined with weights depending upon the hydrological background. These are compared with an objective function based upon kriging. Calibration is applied to pi

  16. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  17. PARAMETER ESTIMATION IN LINEAR REGRESSION MODELS FOR LONGITUDINAL CONTAMINATED DATA

    Institute of Scientific and Technical Information of China (English)

    Qian Weimin; Li Yumei

    2005-01-01

    The parameter estimation and the coefficient of contamination for regression models with repeated measures are studied when their response variables are contaminated by another random variable sequence. Under suitable conditions it is proved that the estimators established in the paper are strongly consistent.

  18. Modeling and simulation of HTS cables for scattering parameter analysis

    Science.gov (United States)

    Bang, Su Sik; Lee, Geon Seok; Kwon, Gu-Young; Lee, Yeong Ho; Chang, Seung Jin; Lee, Chun-Kwon; Sohn, Songho; Park, Kijun; Shin, Yong-June

    2016-11-01

    Most modeling and simulation of high-temperature superconducting (HTS) cables is inadequate for high-frequency analysis, since the simulations focus on the fundamental frequency of the power grid and therefore do not capture transient characteristics. High-frequency analysis, however, is an essential step in studying HTS cable transients for protection and diagnosis of the cables. This paper therefore proposes a new approach to modeling and simulating HTS cables in order to derive the scattering parameters (S-parameters), an effective high-frequency representation of transient wave propagation characteristics. A parameter-sweeping method is used to validate the simulation results against measured data obtained with a network analyzer (NA). The paper also examines the effects of the cable-to-NA connector in order to minimize the error between simulated and measured data under ambient and superconductive conditions. Based on the proposed modeling and simulation technique, S-parameters of long-distance HTS cables can be derived accurately over a wide frequency range. The results characterize the HTS cables and will contribute to their analysis.

  19. Evaluation of some infiltration models and hydraulic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Haghighi, F.; Gorji, M.; Shorafa, M.; Sarmadian, F.; Mohammadi, M. H.

    2010-07-01

    The evaluation of infiltration characteristics and of some parameters of infiltration models, such as sorptivity and final steady infiltration rate, is important in agriculture. The aim of this study was to evaluate some of the most common models used to estimate the final soil infiltration rate. The equality of the final infiltration rate with the saturated hydraulic conductivity (Ks) was also tested. Moreover, values of sorptivity estimated from the Philip model were compared with estimates from selected pedotransfer functions (PTFs). The infiltration experiments used the double-ring method on soils with two different land uses in the Taleghan watershed of Tehran province, Iran, from September to October 2007. The infiltration models of Kostiakov-Lewis, Philip (two-term) and Horton were fitted to the observed infiltration data. Model parameters and the coefficient of determination as a goodness-of-fit measure were estimated using MATLAB software. Based on a comparison of measured and model-estimated infiltration rates using the root mean squared error (RMSE), Horton's model gave the best prediction of final infiltration rate in the experimental area. Laboratory-measured Ks values were significantly different from, and higher than, the final infiltration rates estimated from the selected models; the estimated final infiltration rate was therefore not equal to the laboratory-measured Ks in the study area. Moreover, the sorptivity factor estimated by the Philip model differed significantly from those estimated by the selected PTFs. This suggests that the applicability of PTFs is limited to specific, similar conditions. (Author) 37 refs.
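
    For illustration, the sketch below fits Horton's equation f(t) = fc + (f0 − fc)e^(−kt) to hypothetical double-ring infiltration data with scipy's curve_fit; the data values and starting guesses are made up, not those of the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def horton(t, f0, fc, k):
        """Horton infiltration rate: f(t) = fc + (f0 - fc) * exp(-k t)."""
        return fc + (f0 - fc) * np.exp(-k * t)

    # Hypothetical infiltration-rate measurements (t in hours, f in mm/h)
    t_obs = np.array([0.1, 0.25, 0.5, 1.0, 1.5, 2.0, 3.0])
    f_obs = np.array([58.0, 45.0, 33.0, 21.0, 16.0, 13.5, 12.0])

    popt, _ = curve_fit(horton, t_obs, f_obs, p0=[60.0, 10.0, 1.0])
    f0, fc, k = popt
    rmse = np.sqrt(np.mean((horton(t_obs, *popt) - f_obs) ** 2))
    print(f"f0={f0:.1f} mm/h, fc={fc:.1f} mm/h (final rate), k={k:.2f} 1/h, RMSE={rmse:.2f}")
    ```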

  20. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  1. Estimating model parameters in nonautonomous chaotic systems using synchronization

    Science.gov (United States)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-05-01

    In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular, nonautonomous chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of unknown parameters and identical synchronization between the concerned experimental system and its corresponding receiver one. Examples are presented employing a parametrically excited 4D new oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of noise strength in simulation.

  2. Estimating model parameters in nonautonomous chaotic systems using synchronization

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiaoli [Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an 710072 (China)]. E-mail: yangxl205@mail.nwpu.edu.cn; Xu, Wei [Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an 710072 (China); Sun, Zhongkui [Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an 710072 (China)

    2007-05-07

    In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular, nonautonomous chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of unknown parameters and identical synchronization between the concerned experimental system and its corresponding receiver one. Examples are presented employing a parametrically excited 4D new oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of noise strength in simulation.
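
    The sketch below illustrates the general idea, not the Letter's exact scheme: a response copy of a periodically forced oscillator is coupled to the drive through linear feedback, and an adaptive law updates the estimate of the unknown forcing amplitude from the synchronization error. The gains are ad hoc tuning choices and convergence here is checked only empirically, not through the analytical conditions of the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    B_TRUE = 7.5            # "unknown" forcing amplitude of the drive system
    K_DAMP = 0.05           # damping coefficient, assumed known
    C1, C2, GAMMA = 20.0, 20.0, 5.0   # coupling gains and adaptation rate (tuning choices)

    def coupled(t, s):
        x1, x2, y1, y2, b_hat = s
        # drive: Ueda-type oscillator  x'' + k x' + x^3 = B cos(t)
        dx1 = x2
        dx2 = -K_DAMP * x2 - x1**3 + B_TRUE * np.cos(t)
        # response: same structure, estimated amplitude b_hat, linear feedback coupling
        e1, e2 = x1 - y1, x2 - y2
        dy1 = y2 + C1 * e1
        dy2 = -K_DAMP * y2 - y1**3 + b_hat * np.cos(t) + C2 * e2
        # adaptive law driven by the synchronization error
        db_hat = GAMMA * e2 * np.cos(t)
        return [dx1, dx2, dy1, dy2, db_hat]

    sol = solve_ivp(coupled, (0, 200), [1.0, 0.0, 0.0, 0.0, 0.0], max_step=0.01)
    print("final estimate of B:", sol.y[4, -1])   # should approach B_TRUE
    ```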

  3. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure

  4. Multiscale Parameter Regionalization for consistent global water resources modelling

    Science.gov (United States)

    Wanders, Niko; Wood, Eric; Pan, Ming; Samaniego, Luis; Thober, Stephan; Kumar, Rohini; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc F. P.

    2017-04-01

    Due to an increasing demand for high- and hyper-resolution water resources information, it has become increasingly important to ensure consistency in model simulations across scales. This consistency can be ensured by scale independent parameterization of the land surface processes, even after calibration of the water resource model. Here, we use the Multiscale Parameter Regionalization technique (MPR, Samaniego et al. 2010, WRR) to allow for a novel, spatially consistent, scale independent parameterization of the global water resource model PCR-GLOBWB. The implementation of MPR in PCR-GLOBWB allows for calibration at coarse resolutions and subsequent parameter transfer to the hyper-resolution. In this study, the model was calibrated at 50 km resolution over Europe and validation carried out at resolutions of 50 km, 10 km and 1 km. MPR allows for a direct transfer of the calibrated transfer function parameters across scales and we find that we can maintain consistent land-atmosphere fluxes across scales. Here we focus on the 2003 European drought and show that the new parameterization allows for high-resolution calibrated simulations of water resources during the drought. For example, we find a reduction from 29% to 9.4% in the percentile difference in the annual evaporative flux across scales when compared against default simulations. Soil moisture errors are reduced from 25% to 6.9%, clearly indicating the benefits of the MPR implementation. This new parameterization allows us to show more spatial detail in water resources simulations that are consistent across scales and also allow validation of discharge for smaller catchments, even with calibrations at a coarse 50 km resolution. The implementation of MPR allows for novel high-resolution calibrated simulations of a global water resources model, providing calibrated high-resolution model simulations with transferred parameter sets from coarse resolutions. The applied methodology can be transferred to other

  5. Posteriorly Directed Shear Loads and Disc Degeneration Affect the Torsional Stiffness of Spinal Motion Segments A Biomechanical Modeling Study

    NARCIS (Netherlands)

    Homminga, Jasper; Lehr, Anne M.; Meijer, Gerdine J. M.; Janssen, Michiel M. A.; Schlosser, Tom P. C.; Verkerke, Gijsbertus J.; Castelein, Rene M.

    2013-01-01

    Study Design. Finite element study. Objective. To analyze the effects of posterior shear loads, disc degeneration, and the combination of both on spinal torsion stiffness. Summary of Background Data. Scoliosis is a 3-dimensional deformity of the spine that presents itself mainly in adolescent girls

  6. Posteriorly directed shear loads and disc degeneration affect the torsional stiffness of spinal motion segments; a biomechanical modeling study

    NARCIS (Netherlands)

    Homminga, J.J.; Lehr, A.M.; Meijer, G.J.M.; Janssen, M.M.A.; Schlösser, T.P.C.; Verkerke, G.J.; Castelein, R.M.

    2013-01-01

    Objective. To analyze the effects of posterior shear loads, disc degeneration, and the combination of both on spinal torsion stiffness. Summary of Background Data. Scoliosis is a 3-dimensional deformity of the spine that presents itself mainly in adolescent girls and elderly patients. Our concept o

  7. Reduced parameter model on trajectory tracking data with applications

    Institute of Scientific and Technical Information of China (English)

    王正明; 朱炬波

    1999-01-01

    The data fusion in tracking the same trajectory by multi-measurement units (MMU) is considered. Firstly, the reduced parameter models (RPM) of the trajectory parameters (TP), the system error and the random error are presented, and the RPM of the trajectory tracking data (TTD) is then obtained; a weighted method for the measuring elements (ME) is studied, and criteria for the selection of ME based on residual and accuracy estimation are put forward. Based on the RPM, the problem of ME selection and self-calibration of TTD is thoroughly investigated. The method clearly improves data accuracy in trajectory tracking and simultaneously provides an accuracy evaluation of the trajectory tracking system.

  8. Parameter Estimation of the Extended Vasiček Model

    OpenAIRE

    Rujivan, Sanae

    2010-01-01

    In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function of a discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the appr...
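
    As a simplified stand-in for the paper's expansion-based likelihood, the sketch below performs maximum likelihood estimation for the basic constant-parameter Vasiček model, whose Gaussian transition density is known in closed form; the extended (time-dependent) model is not treated, and the data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(params, r, dt):
        """Exact negative log-likelihood of the Vasicek model dr = kappa(theta - r)dt + sigma dW."""
        kappa, theta, sigma = params
        if kappa <= 0 or sigma <= 0:
            return np.inf
        mean = theta + (r[:-1] - theta) * np.exp(-kappa * dt)
        var = sigma**2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)
        return 0.5 * np.sum(np.log(2 * np.pi * var) + (r[1:] - mean) ** 2 / var)

    # Synthetic data from known parameters, then recover them by MLE
    rng = np.random.default_rng(0)
    kappa0, theta0, sigma0, dt = 1.5, 0.04, 0.02, 1 / 252
    r = np.empty(2000); r[0] = 0.03
    for t in range(1999):
        m = theta0 + (r[t] - theta0) * np.exp(-kappa0 * dt)
        v = sigma0**2 * (1 - np.exp(-2 * kappa0 * dt)) / (2 * kappa0)
        r[t + 1] = m + np.sqrt(v) * rng.standard_normal()

    res = minimize(neg_loglik, x0=[1.0, 0.05, 0.01], args=(r, dt), method="Nelder-Mead")
    print("kappa, theta, sigma:", res.x)
    ```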

  9. Prediction of mortality rates using a model with stochastic parameters

    Science.gov (United States)

    Tan, Chon Sern; Pooi, Ah Hin

    2016-10-01

    Prediction of future mortality rates is crucial to insurance companies because they face longevity risk while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution has been applied to mortality data from the United States for the years 1933 to 2000 in order to forecast mortality rates for the years 2001 to 2010. In this paper, a more dynamic approach based on the multivariate time series is proposed, in which the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained from the model with stochastic parameters perform better: apart from covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.

  10. Probabilistic Constraint Programming for Parameters Optimisation of Generative Models

    CERN Document Server

    Zanin, Massimiliano; Sousa, Pedro A C; Cruz, Jorge

    2015-01-01

    Complex networks theory has commonly been used for modelling and understanding the interactions taking place between the elements composing complex systems. More recently, the use of generative models has gained momentum, as they allow identifying which forces and mechanisms are responsible for the appearance of given structural properties. In spite of this interest, several problems remain open, one of the most important being the design of robust mechanisms for finding the optimal parameters of a generative model, given a set of real networks. In this contribution, we address this problem by means of Probabilistic Constraint Programming. By using as an example the reconstruction of networks representing brain dynamics, we show how this approach is superior to other solutions, in that it allows a better characterisation of the parameters space, while requiring a significantly lower computational cost.

  11. Mark-recapture models with parameters constant in time.

    Science.gov (United States)

    Jolly, G M

    1982-06-01

    The Jolly-Seber method, which allows for both death and immigration, is easy to apply but often requires a larger number of parameters to be estimated than would otherwise be necessary. If (i) survival rate, phi, or (ii) probability of capture, p, or (iii) both phi and p can be assumed constant over the experimental period, models with a reduced number of parameters are desirable. In the present paper, maximum likelihood (ML) solutions for these three situations are derived from the general ML equations of Jolly [1979, in Sampling Biological Populations, R. M. Cormack, G. P. Patil and D. S. Robson (eds), 277-282]. A test is proposed for heterogeneity arising from a breakdown of assumptions in the general Jolly-Seber model. Tests for constancy of phi and p are provided. An example is given in which these models are fitted to data from a local butterfly population.

  12. Enhancing debris flow modeling parameters integrating Bayesian networks

    Science.gov (United States)

    Graf, C.; Stoffel, M.; Grêt-Regamey, A.

    2009-04-01

    Applied debris-flow modeling requires suitably constrained input parameter sets. Depending on the model used, a series of parameters must be defined before running it. Normally, the data base describing the event, the initiation conditions, the flow behavior, the deposition process and, above all, the potential range of possible debris-flow events in a certain torrent is limited. There are only a few places in the world where valuable data sets describing the event history of debris-flow channels can be found, delivering information on the spatial and temporal distribution of former flow paths and deposition zones. Tree-ring records in combination with detailed geomorphic mapping, for instance, provide such data sets over a long time span. Considering the significant loss potential associated with debris-flow disasters, it is crucial that decisions made in regard to hazard mitigation are based on a consistent assessment of the risks. This in turn necessitates a proper assessment of the uncertainties involved in the modeling of debris-flow frequencies and intensities, the possible run-out extent, as well as the estimation of the damage potential. In this study, we link a Bayesian network to a Geographic Information System in order to assess debris-flow risk. We identify the major sources of uncertainty and show the potential of Bayesian inference techniques to improve the debris-flow model. We model the flow paths and deposition zones of a highly active debris-flow channel in the Swiss Alps using the numerical 2-D model RAMMS. Because uncertainties in run-out areas cause large changes in risk estimations, we use flow path and deposition zone information of reconstructed debris-flow events derived from dendrogeomorphological analysis covering more than 400 years to update the input parameters of the RAMMS model. The probabilistic model, which consistently incorporates this available information, can serve as a basis for spatial risk

  13. Singularity of Some Software Reliability Models and Parameter Estimation Method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    According to the principle that "the failure data is the basis of software reliability analysis", we built a software reliability expert system (SRES) using artificial intelligence technology. By reasoning from the fitting results on the failure data of a software project, the SRES can recommend to users "the most suitable model" as a software reliability measurement model. We believe that the SRES can overcome the inconsistency in applications of software reliability models. We report the results of an investigation of the singularity and parameter estimation methods of the experimental models in the SRES.

  14. Parameter Identifiability of Ship Manoeuvring Modeling Using System Identification

    Directory of Open Access Journals (Sweden)

    Weilin Luo

    2016-01-01

    Full Text Available To improve the feasibility of system identification in the prediction of ship manoeuvrability, several measures are presented to deal with the parameter identifiability in the parametric modeling of ship manoeuvring motion based on system identification. Drift of nonlinear hydrodynamic coefficients is explained from the point of view of regression analysis. To diminish the multicollinearity in a complicated manoeuvring model, difference method and additional signal method are employed to reconstruct the samples. Moreover, the structure of manoeuvring model is simplified based on correlation analysis. Manoeuvring simulation is performed to demonstrate the validity of the measures proposed.

  15. Stress Distribution on Short Implants at Maxillary Posterior Alveolar Bone Model With Different Bone-to-Implant Contact Ratio: Finite Element Analysis.

    Science.gov (United States)

    Yazicioglu, Duygu; Bayram, Burak; Oguz, Yener; Cinar, Duygu; Uckan, Sina

    2016-02-01

    The aim of this study was to evaluate the stress distribution of the short dental implants and bone-to-implant contact ratios in the posterior maxilla using 3-dimensional (3D) finite element models. Two different 3D maxillary posterior bone segments were modeled. Group 1 was composed of a bone segment consisting of cortical bone and type IV cancellous bone with 100% bone-to-implant contact. Group 2 was composed of a bone segment consisting of cortical bone and type IV cancellous bone including spherical bone design and homogenous tubular hollow spaced structures with 30% spherical porosities and 70% bone-to-implant contact ratio. Four-millimeter-diameter and 5-mm-height dental implants were assumed to be osseointegrated and placed at the center of the segments. Lateral occlusal bite force (300 N) was applied at a 25° inclination to the implants long axis. The maximum von Mises stresses in cortical and cancellous bones and implant-abutment complex were calculated. The von Mises stress values on the implants and the cancellous bone around the implants of the 70% bone-to-implant contact group were almost 3 times higher compared with the values of the 100% bone-to-implant contact group. For clinical reality, use of the 70% model for finite element analysis simulation of the posterior maxilla region better represents real alveolar bone and the increased stress and strain distributions evaluated on the cortical and cancellous bone around the dental implants.

  16. Robust linear parameter varying induction motor control with polytopic models

    Directory of Open Access Journals (Sweden)

    Dalila Khamari

    2013-01-01

    Full Text Available This paper deals with a robust controller for an induction motor represented as a linear parameter varying (LPV) system. To this end, a linear matrix inequality (LMI) based approach and a robust Lyapunov feedback controller are combined. The approach rests on the synthesis of an LPV feedback controller for the inner loop that takes the rotor resistance and the mechanical speed into account as varying parameters. An LPV flux observer is also synthesized to estimate the rotor flux, providing the reference for the above regulator. The induction motor is described by a polytopic model because of the affine dependence on the speed and rotor resistance, whose values can be estimated on line during system operation. Simulation results are presented to confirm the effectiveness of the proposed approach, where robust stability and high performance are achieved over the entire operating range of the induction motor.

  17. Minimum information modelling of structural systems with uncertain parameters

    Science.gov (United States)

    Hyland, D. C.

    1983-01-01

    Work is reviewed wherein the design of active structural control is formulated as the mean-square optimal control of a linear mechanical system with stochastic parameters. In practice, a complete probabilistic description of model parameters can never be provided by empirical determinations, and a suitable design approach must accept very limited a priori data on parameter statistics. In consequence, the mean-square optimization problem is formulated using a complete probability assignment which is made to be consistent with available data but maximally unconstrained otherwise through use of a maximum entropy principle. The ramifications of this approach for both robustness and large dimensionality are illustrated by consideration of the full-state feedback regulation problem.

  18. Parameter estimation in a spatial unit root autoregressive model

    CERN Document Server

    Baran, Sándor

    2011-01-01

    Spatial autoregressive model $X_{k,\ell}=\alpha X_{k-1,\ell}+\beta X_{k,\ell-1}+\gamma X_{k-1,\ell-1}+\epsilon_{k,\ell}$ is investigated in the unit root case, that is, when the parameters are on the boundary of the domain of stability that forms a tetrahedron with vertices $(1,1,-1)$, $(1,-1,1)$, $(-1,1,1)$ and $(-1,-1,-1)$. It is shown that the limiting distribution of the least squares estimator of the parameters is normal and the rate of convergence is $n$ when the parameters are in the faces or on the edges of the tetrahedron, while on the vertices the rate is $n^{3/2}$.

  19. Bayesian Parameter Estimation and Segmentation in the Multi-Atlas Random Orbit Model.

    Directory of Open Access Journals (Sweden)

    Xiaoying Tang

    Full Text Available This paper examines the multiple atlas random diffeomorphic orbit model in Computational Anatomy (CA for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery. We assume that there exist multiple magnetic resonance image (MRI atlases, each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest. We focus on maximum a posteriori estimation of high dimensional segmentations of MR within the class of generative models representing the observed MRI as a conditionally Gaussian random field, conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it. The charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables. We demonstrate that the expectation-maximization (EM algorithm arises naturally, yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes. The likelihoods being fused are modeled as conditionally Gaussian random fields with mean fields a function of each atlas chart under its diffeomorphic change of coordinates onto the target. The conditional-mean in the EM algorithm specifies the convex weights with which the chart-specific likelihoods are fused. The multiple atlases with the associated convex weights imply that the posterior distribution is a multi-modal representation of the measured MRI. Segmentation results for subcortical and ventricular structures of subjects, within populations of demented subjects, are demonstrated, including the use of multiple atlases across multiple diseased groups.

  20. Bayesian Parameter Estimation and Segmentation in the Multi-Atlas Random Orbit Model.

    Science.gov (United States)

    Tang, Xiaoying; Oishi, Kenichi; Faria, Andreia V; Hillis, Argye E; Albert, Marilyn S; Mori, Susumu; Miller, Michael I

    2013-01-01

    This paper examines the multiple atlas random diffeomorphic orbit model in Computational Anatomy (CA) for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery. We assume that there exist multiple magnetic resonance image (MRI) atlases, each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest. We focus on maximum a posteriori estimation of high dimensional segmentations of MR within the class of generative models representing the observed MRI as a conditionally Gaussian random field, conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it. The charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables. We demonstrate that the expectation-maximization (EM) algorithm arises naturally, yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes. The likelihoods being fused are modeled as conditionally Gaussian random fields with mean fields a function of each atlas chart under its diffeomorphic change of coordinates onto the target. The conditional-mean in the EM algorithm specifies the convex weights with which the chart-specific likelihoods are fused. The multiple atlases with the associated convex weights imply that the posterior distribution is a multi-modal representation of the measured MRI. Segmentation results for subcortical and ventricular structures of subjects, within populations of demented subjects, are demonstrated, including the use of multiple atlases across multiple diseased groups.
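
    A toy, one-voxel sketch of the fusion step described above: atlas-specific Gaussian likelihoods are combined into convex weights. The atlas predictions, variances and priors are hypothetical, and the deformation and full EM machinery of the model are omitted.

    ```python
    import numpy as np

    def fusion_weights(observed, atlas_means, atlas_sigmas, priors=None):
        """Convex weights with which atlas-specific Gaussian likelihoods are fused
        (a one-voxel toy version of the conditional-mean step in the EM algorithm)."""
        atlas_means = np.asarray(atlas_means, dtype=float)
        atlas_sigmas = np.asarray(atlas_sigmas, dtype=float)
        if priors is None:
            priors = np.ones_like(atlas_means) / len(atlas_means)
        log_lik = (-0.5 * np.log(2 * np.pi * atlas_sigmas**2)
                   - 0.5 * ((observed - atlas_means) / atlas_sigmas) ** 2)
        w = np.asarray(priors) * np.exp(log_lik - log_lik.max())  # subtract max for stability
        return w / w.sum()

    # Hypothetical intensity predictions from three deformed atlas charts at one voxel
    print(fusion_weights(observed=112.0,
                         atlas_means=[105.0, 118.0, 140.0],
                         atlas_sigmas=[8.0, 6.0, 10.0]))
    ```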

  1. Escleritis posterior bilateral Bilateral posterior scleritis

    Directory of Open Access Journals (Sweden)

    A. Zurutuza

    2011-08-01

    Full Text Available Posterior scleritis is an inflammatory process of the posterior part of the sclera. Its prevalence is very low and its diagnosis can be complicated due to the absence of external ocular signs. It is more frequent in women. In young patients it does not usually have other associated pathologies, but in those over 55 years nearly one-third of the cases are related to some systemic disease, above all rheumatoid arthritis. The diagnosis of this pathology can require a multidisciplinary approach and the collaboration of ophthalmologists with neurologists, internists or rheumatologists. This article describes a case of idiopathic bilateral posterior scleritis.

  2. Recursive modular modelling methodology for lumped-parameter dynamic systems.

    Science.gov (United States)

    Orsino, Renato Maia Matarazzo

    2017-08-01

    This paper proposes a novel approach to the modelling of lumped-parameter dynamic systems, based on representing them by hierarchies of mathematical models of increasing complexity instead of a single (complex) model. Exploring the multilevel modularity that these systems typically exhibit, a general recursive modelling methodology is proposed in order to reconcile it with already existing modelling techniques. The general algorithm is based on a fundamental theorem that states the conditions for computing projection operators recursively. Three procedures for these computations are discussed: orthonormalization, use of orthogonal complements and use of generalized inverses. The novel methodology is also applied to the development of a recursive algorithm based on the Udwadia-Kalaba equation, which proves to be identical to that of a Kalman filter for estimating the state of a static process, given a sequence of noiseless measurements representing the constraints that must be satisfied by the system.

  3. [A study of a coordinate-transform iterative fitting method to extract bio-impedance model parameters].

    Science.gov (United States)

    Zhou, Liming; Yang, Yuxing; Yuan, Shiying

    2006-02-01

    A new algorithm, a coordinate-transform iterative optimization method based on the least-squares curve-fitting model, is presented. The algorithm is used for extracting bio-impedance model parameters. It is superior to other methods; for example, its convergence is quicker and its calculation precision is higher. The objective of extracting the model parameters Ri, Re, Cm and alpha is achieved rapidly and accurately. With the aim of lowering power consumption, decreasing cost and improving the price-to-performance ratio, a practical bio-impedance measurement system with double CPUs has been built. The preliminary results indicate that the intracellular resistance Ri increased markedly with an increase in working load during sitting, which reflects the ischemic change of the lower limbs.
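
    The sketch below is not the coordinate-transform iterative method of the paper; it simply fits a common Cole-type impedance parameterization in terms of Re, Ri, Cm and alpha to a synthetic spectrum by ordinary least squares, to show what extracting these parameters involves. The circuit form and all values are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def cole_impedance(freq, Re, Ri, Cm, alpha):
        """Extracellular resistance Re in parallel with (intracellular Ri + constant-phase element).
        One common Cole-type parameterization; the paper's exact circuit may differ."""
        w = 2 * np.pi * freq
        Zc = 1.0 / ((1j * w) ** alpha * Cm)
        return Re * (Ri + Zc) / (Re + Ri + Zc)

    def residuals(p, freq, z_meas):
        z = cole_impedance(freq, *p)
        return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

    # Synthetic "measured" spectrum from known parameters plus a little noise
    freq = np.logspace(3, 6, 40)                      # 1 kHz .. 1 MHz
    true = (800.0, 400.0, 2e-9, 0.85)                 # Re [ohm], Ri [ohm], Cm [F], alpha
    rng = np.random.default_rng(0)
    z_meas = cole_impedance(freq, *true) * (1 + 0.01 * rng.standard_normal(freq.size))

    fit = least_squares(residuals, x0=[500.0, 500.0, 1e-9, 0.9], args=(freq, z_meas),
                        bounds=([1.0, 1.0, 1e-12, 0.5], [1e4, 1e4, 1e-6, 1.0]))
    print("Re, Ri, Cm, alpha:", fit.x)
    ```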

  4. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing parallel CUDA-based implementation for parameter synthesis in this model.
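
    A generic sketch of the simulated-annealing part of such a search, with a toy stochastic model and property standing in for the glucose-insulin system; the sequential hypothesis testing and statistical model checking components of the paper are replaced here by a plain Monte Carlo estimate.

    ```python
    import math
    import random

    def estimate_score(theta, n_runs=200):
        """Monte Carlo estimate of how often a stochastic model satisfies a property.
        Toy stand-in: probability that a noisy response stays inside a target band."""
        hits = 0
        for _ in range(n_runs):
            response = theta * 2.0 + random.gauss(0.0, 0.5)   # hypothetical stochastic model
            hits += (4.5 <= response <= 5.5)                  # hypothetical property
        return hits / n_runs

    def simulated_annealing(theta0, t0=1.0, cooling=0.95, iters=200):
        theta, score = theta0, estimate_score(theta0)
        temp = t0
        for _ in range(iters):
            cand = theta + random.gauss(0.0, 0.1)
            cand_score = estimate_score(cand)
            # accept improvements always, worse candidates with Boltzmann probability
            if cand_score >= score or random.random() < math.exp((cand_score - score) / temp):
                theta, score = cand, cand_score
            temp *= cooling
        return theta, score

    random.seed(0)
    print(simulated_annealing(theta0=1.0))
    ```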

  5. Propagation channel characterization, parameter estimation, and modeling for wireless communications

    CERN Document Server

    Yin, Xuefeng

    2016-01-01

    Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...

  6. Auxiliary Parameter MCMC for Exponential Random Graph Models

    Science.gov (United States)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

    Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.

  7. Findings of an experimental study in a rabbit model on posterior capsule opacification after implantation of hydrophobic acrylic and hydrophilic acrylic intraocular lenses

    Directory of Open Access Journals (Sweden)

    Nikolaos Trakos

    2009-01-01

    Full Text Available Purpose: To study cell growth on the posterior capsule after implantation of hydrophobic acrylic (Acrysof SA 60 AT) and hydrophilic acrylic (Akreos Disc) intraocular lenses (IOLs) in a rabbit model, and to compare posterior capsule opacification (PCO). Methods: Phacoemulsification was performed in 22 rabbit eyes, and two different IOL types (Acrysof SA 60 AT and Akreos Disc) were implanted. These IOLs had the same optic geometry (square-edged) but different material and design. Central PCO (CPCO), peripheral PCO (PPCO), Soemmering's ring (SR) formation, type of growth, extension of PCO, cell type, inhibition, and fibrosis were evaluated three weeks after surgery. Histological sections of each globe were prepared to document the evaluation of PCO. Results: No statistically significant difference was observed between the hydrophobic acrylic IOL and the hydrophilic acrylic IOL in relation to CPCO, PPCO, type of growth, extension, cell type, inhibition, and fibrosis. A statistically significant difference was observed in relation to the formation of SR, with the Acrysof SA 60 AT group presenting more SR than the Akreos Disc group. Conclusion: PCO was not influenced by the material of the IOL or the design of the haptics of the IOLs studied. Keywords: posterior capsule opacification, intraocular lenses, rabbit model


  8. Determining avalanche modelling input parameters using terrestrial laser scanning technology

    OpenAIRE

    2013-01-01

    International audience; In dynamic avalanche modelling, data about the volumes and areas of the snow released, mobilized and deposited are key input parameters, as well as the fracture height. The fracture height can sometimes be measured in the field, but it is often difficult to access the starting zone due to difficult or dangerous terrain and avalanche hazards. More complex is determining the areas and volumes of snow involved in an avalanche. Such calculations require high-resolution spa...

  9. Numerical model for thermal parameters in optical materials

    Science.gov (United States)

    Sato, Yoichi; Taira, Takunori

    2016-04-01

    Thermal parameters of optical materials, such as the thermal conductivity, thermal expansion and temperature coefficient of refractive index, play a decisive role in the thermal design of laser cavities. Their numerical values, including temperature dependence, are therefore quite important for developing high-intensity laser oscillators in which the optical materials generate excessive heat across the mode volumes of both the lasing output and the optical pumping. We have already proposed a novel model of thermal conductivity in various optical materials. Thermal conductivity is the product of the isovolumic specific heat and the thermal diffusivity, and independent modelling of these two quantities is required to clarify their physical meaning. Our numerical model for thermal conductivity requires one material parameter for the specific heat and two parameters for the thermal diffusivity of each optical material. In this work we report thermal conductivities of various optical materials, such as Y3Al5O12 (YAG), YVO4 (YVO), GdVO4 (GVO), stoichiometric and congruent LiTaO3, synthetic quartz, YAG ceramics and Y2O3 ceramics. The dependence on Nd3+ doping in the laser gain media YAG, YVO and GVO is also studied; this dependence can be described by only three additional parameters. The temperature dependence of the thermal expansion and of the temperature coefficient of refractive index for YAG, YVO and GVO is also included in this work for convenience. We believe our numerical model is quite useful not only for thermal analysis in laser cavities and optical waveguides, but also for the evaluation of the physical properties of various transparent materials.

  10. Land Building Models: Uncertainty in and Sensitivity to Input Parameters

    Science.gov (United States)

    2013-08-01

    ERDC/CHL CHETN-VI-44, August 2013. Land Building Models: Uncertainty in and Sensitivity to Input Parameters, by Ty V. Wamsley. PURPOSE: The purpose of this Coastal and Hydraulics Engineering Technical Note (CHETN) is to document a

  11. The oblique S parameter in higgsless electroweak models

    CERN Document Server

    Rosell, Ignasi

    2012-01-01

    We present a one-loop calculation of the oblique S parameter within Higgsless models of electroweak symmetry breaking. We have used a general effective Lagrangian with at most two derivatives, implementing the chiral symmetry breaking SU(2)_L x SU(2)_R -> SU(2)_{L+R} with Goldstones, gauge bosons and one multiplet of vector and axial-vector resonances. The estimation is based on the short-distance constraints and the dispersive approach proposed by Peskin and Takeuchi.

  12. A statistical model of proton with no parameter

    CERN Document Server

    Zhang, Y; Zhang, Yongjun; Yang, Li-Ming

    2001-01-01

    In this paper, the proton is treated as an ensemble of Fock states. Using the detailed balancing principle and the equal probability principle, the unpolarized parton distributions of the proton are obtained by Monte Carlo simulation without any free parameter. A new origin of the light-flavor sea-quark asymmetry is given here, besides known models such as Pauli blocking, the meson cloud, the chiral field, the chiral soliton and instantons.

  13. Model of the Stochastic Vacuum and QCD Parameters

    CERN Document Server

    Ferreira, E; Ferreira, Erasmo; Pereira, Flávio

    1997-01-01

    Accounting for the two independent correlation functions of the QCD vacuum, we improve the simple and consistent description given by the model of the stochastic vacuum to the high-energy pp and pbar-p data, with a new determination of parameters of non-perturbative QCD. The increase of the hadronic radii with the energy accounts for the energy dependence of the observables.

  14. Bayesian estimation of regularization parameters for deformable surface models

    Energy Technology Data Exchange (ETDEWEB)

    Cunningham, G.S.; Lehovich, A.; Hanson, K.M.

    1999-02-20

    In this article the authors build on their past attempts to reconstruct a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest total artificial heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution at a given time is a closed surface parameterized by 482 vertices that are connected to make 960 triangles, with nonuniform intensity variations of radiotracer allowed inside the surface on a voxel-to-voxel basis. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework, as is the weighted norm of the gradient of the voxellated grid. MAP estimates for the vertices, interior intensity voxels and background count level are produced. The strengths of the priors, or hyperparameters, are determined by maximizing the probability of the data given the hyperparameters, called the evidence. The evidence is calculated by first assuming that the posterior is approximately normal in the values of the vertices and voxels, and then by evaluating the integral of the multi-dimensional normal distribution. This integral (which requires evaluating the determinant of a covariance matrix) is computed by applying a recent algorithm from Bai et al. that calculates the needed determinant efficiently. They demonstrate that the radiotracer is highly inhomogeneous in early time frames, as suspected in earlier reconstruction attempts that assumed a uniform intensity of radiotracer within the closed surface, and that the optimal choice of hyperparameters is substantially different for different time frames.
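
    A minimal example of the evidence-based choice of a hyperparameter, transplanted from the deformable-surface setting to a Bayesian linear model where the marginal likelihood (including the determinant term) can be written in a few lines; the data and noise level below are synthetic.

    ```python
    import numpy as np

    def log_evidence(X, y, alpha, beta):
        """Log marginal likelihood of a Bayesian linear model with prior precision alpha
        and known noise precision beta (MacKay's evidence framework)."""
        n, m = X.shape
        A = alpha * np.eye(m) + beta * X.T @ X            # posterior precision matrix
        mean = beta * np.linalg.solve(A, X.T @ y)         # posterior mean of the weights
        fit = beta * np.sum((y - X @ mean) ** 2) + alpha * mean @ mean
        _, logdet = np.linalg.slogdet(A)
        return 0.5 * (m * np.log(alpha) + n * np.log(beta) - fit - logdet - n * np.log(2 * np.pi))

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 10))
    w_true = rng.standard_normal(10)
    y = X @ w_true + 0.3 * rng.standard_normal(100)

    alphas = np.logspace(-3, 3, 25)
    best = max(alphas, key=lambda a: log_evidence(X, y, a, beta=1 / 0.3**2))
    print("evidence-optimal prior precision:", best)
    ```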

  15. Is flow velocity a significant parameter in flood damage modelling?

    Directory of Open Access Journals (Sweden)

    H. Kreibich

    2009-10-01

    Full Text Available Flow velocity is generally presumed to influence flood damage. However, this influence is hardly quantified and virtually no damage models take it into account. Therefore, the influences of flow velocity, water depth and combinations of these two impact parameters on various types of flood damage were investigated in five communities affected by the Elbe catchment flood in Germany in 2002. 2-D hydraulic models with high to medium spatial resolutions were used to calculate the impact parameters at the sites in which damage occurred. A significant influence of flow velocity on structural damage, particularly on roads, could be shown in contrast to a minor influence on monetary losses and business interruption. Forecasts of structural damage to road infrastructure should be based on flow velocity alone. The energy head is suggested as a suitable flood impact parameter for reliable forecasting of structural damage to residential buildings above a critical impact level of 2 m of energy head or water depth. However, general consideration of flow velocity in flood damage modelling, particularly for estimating monetary loss, cannot be recommended.

  16. A robust approach for the determination of Gurson model parameters

    Directory of Open Access Journals (Sweden)

    R. Sepe

    2016-07-01

    Full Text Available Among the most promising models introduced in recent years, with which very useful results can be obtained for a better understanding of the physical phenomena involved in the macroscopic mechanism of crack propagation, the one proposed by Gurson and Tvergaard links the propagation of a crack to the nucleation, growth and coalescence of micro-voids, and is thus able to connect the micromechanical characteristics of the component under examination to crack initiation and propagation up to the macroscopic scale. However, even though the statistical character of some of the many physical parameters involved in the model has been pointed out, no serious attempt has been made so far to link the corresponding statistics to the experimental and macroscopic results, such as crack initiation time, material toughness, or the residual strength of the cracked component (R-curve). In this work, such an analysis was carried out in a twofold way: first, the influence exerted by each of the physical parameters on the material toughness was studied; second, the Stochastic Design Improvement (SDI) technique was used to perform a "robust" numerical calibration of the model, evaluating the nominal values of the physical and correction parameters that fit a particular experimental result even in the presence of their "natural" variability.

  17. The Impact of Three Factors on the Recovery of Item Parameters for the Three-Parameter Logistic Model

    Science.gov (United States)

    Kim, Kyung Yong; Lee, Won-Chan

    2017-01-01

    This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…

  18. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models

    KAUST Repository

    Kalligiannaki, Evangelia

    2015-01-07

    We study the application of information-theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse-grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non-equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.

  19. Nonlocal order parameters for the 1D Hubbard model.

    Science.gov (United States)

    Montorsi, Arianna; Roncaglia, Marco

    2012-12-07

    We characterize the Mott-insulator and Luther-Emery phases of the 1D Hubbard model through correlators that measure the parity of spin and charge strings along the chain. These nonlocal quantities order in the corresponding gapped phases and vanish at the critical point U(c)=0, thus configuring as hidden order parameters. The Mott insulator consists of bound doublon-holon pairs, which in the Luther-Emery phase turn into electron pairs with opposite spins, both unbinding at U(c). The behavior of the parity correlators is captured by an effective free spinless fermion model.

  20. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
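
    A toy sketch of the surrogate idea: a cheap polynomial fit replaces the expensive forward model, and the posterior of a single parameter is then evaluated on a grid. The forward model, observation and noise level are hypothetical, and neither the spectral-projection surrogates nor the MCMC/adjoint updates of the talk are reproduced.

    ```python
    import numpy as np

    def expensive_model(theta):
        """Stand-in for a costly forward model (e.g. an ocean model run)."""
        return np.sin(theta) + 0.5 * theta

    # 1. Build a cheap polynomial surrogate from a handful of forward runs
    design = np.linspace(0.0, 3.0, 7)
    runs = np.array([expensive_model(t) for t in design])
    coeffs = np.polyfit(design, runs, deg=3)
    surrogate = lambda theta: np.polyval(coeffs, theta)

    # 2. Bayesian update of the parameter using only the surrogate
    obs, noise_sd = 1.9, 0.1                                   # hypothetical observation and noise level
    grid = np.linspace(0.0, 3.0, 500)                          # flat prior over this range
    log_post = -0.5 * ((obs - surrogate(grid)) / noise_sd) ** 2
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    print("posterior mean of theta:", float((grid * post).sum()))
    ```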

  1. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in the calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and can be trapped in local regions of attraction. The global SCE procedure is, in general, more effective...

  2. Finding the effective parameter perturbations in atmospheric models: the LORENZ63 model as case study

    NARCIS (Netherlands)

    Moolenaar, H.E.; Selten, F.M.

    2004-01-01

    Climate models contain numerous parameters for which the numeric values are uncertain. In the context of climate simulation and prediction, a relevant question is what range of climate outcomes is possible given the range of parameter uncertainties. Which parameter perturbation changes the climate i
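
    A minimal sketch of the kind of experiment referred to above: the Lorenz-63 system is integrated with a reference and two perturbed values of the parameter rho, and a simple long-term statistic is compared across runs. The perturbation sizes and the choice of statistic are illustrative, not those of the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz63(t, s, sigma, rho, beta):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    def climate_mean_z(rho, t_end=200.0):
        """Long-term mean of z as a simple 'climate' statistic for a given rho."""
        sol = solve_ivp(lorenz63, (0, t_end), [1.0, 1.0, 1.0],
                        args=(10.0, rho, 8.0 / 3.0), max_step=0.01)
        z = sol.y[2][sol.t > 50.0]          # discard spin-up
        return z.mean()

    for rho in (28.0, 28.5, 29.0):          # reference and perturbed parameter values
        print(f"rho={rho}: mean z = {climate_mean_z(rho):.2f}")
    ```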

  3. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    Science.gov (United States)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.

  4. Order-parameter model for unstable multilane traffic flow

    Science.gov (United States)

    Lubashevsky; Mahnke

    2000-11-01

    We discuss a phenomenological approach to the description of unstable vehicle motion on multilane highways that explains in a simple way the observed sequence of the "free flow → synchronized mode → jam" phase transitions as well as the hysteresis in these transitions. We introduce a variable called an order parameter that accounts for possible correlations in the vehicle motion at different lanes. It is principally due to the "many-body" effects in the car interaction, in contrast to such variables as the mean car density and velocity, which are actually the zeroth and first moments of the "one-particle" distribution function. Therefore, we regard the order parameter as an additional independent state variable of traffic flow. We assume that these correlations are due to a small group of "fast" drivers, and by taking into account the general properties of driver behavior we formulate a governing equation for the order parameter. In this context we analyze the instability of homogeneous traffic flow that manifests itself in the above-mentioned phase transitions and gives rise to the hysteresis in both of them. Besides, the jam is characterized by vehicle flows at different lanes that are independent of one another. We specify a certain simplified model in order to study the general features of car cluster self-formation under the "free flow → synchronized motion" phase transition. In particular, we show that the main local parameters of the developed cluster are determined by the state characteristics of vehicle motion only.

  5. Accelerated gravitational wave parameter estimation with reduced order modeling.

    Science.gov (United States)

    Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-20

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.

  6. Optimal vibration control of curved beams using distributed parameter models

    Science.gov (United States)

    Liu, Fushou; Jin, Dongping; Wen, Hao

    2016-12-01

    The design of a linear quadratic optimal controller using the spectral factorization method is studied for vibration suppression of curved beam structures modeled as distributed parameter models. The equations of motion for active control of the in-plane vibration of a curved beam are first developed, considering its shear deformation and rotary inertia, and the state-space model of the curved beam is then established directly from the partial differential equations of motion. The functional gains for the distributed parameter model of the curved beam are calculated by extending the spectral factorization method. Moreover, the response of the closed-loop control system is derived explicitly in the frequency domain. Finally, the suppression of vibration at the free end of a cantilevered curved beam by a point control moment is studied through numerical case studies, in which the benefit of the presented method is shown by comparison with a constant-gain velocity feedback control law, and the performance of the presented method in avoiding control spillover is demonstrated.

  7. Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis

    Science.gov (United States)

    Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.

    2005-12-01

    The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.

  8. Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Gangsheng [ORNL; Post, Wilfred M [ORNL; Mayes, Melanie [ORNL; Frerichs, Joshua T [ORNL; Jagadamma, Sindhu [ORNL

    2012-01-01

    While soil enzymes have been explicitly included in soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normally distributed, with no significant difference in Vmax exhibited between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was a significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1–1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1–2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
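
    One plausible way to assemble the tabulated parameters (Vmax, Km, Ea, pHopt, pHsen) into a single rate expression is sketched below: Michaelis-Menten kinetics scaled by an Arrhenius temperature factor referenced to 20 °C and an exponential-quadratic pH reduction factor of the kind the study fits. The exact functional forms and the example parameter values are assumptions and may differ from those used in the paper.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def enzyme_rate(S, T, pH, Vmax_ref, Km, Ea, pH_opt, pH_sen, T_ref=293.15):
    """Michaelis-Menten rate with Arrhenius temperature scaling and an
    exponential-quadratic pH reduction factor (illustrative forms only)."""
    arrhenius = np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))   # equals 1 at T_ref (20 °C)
    ph_factor = np.exp(-((pH - pH_opt) / pH_sen) ** 2)      # equals 1 at pH_opt
    return Vmax_ref * arrhenius * ph_factor * S / (Km + S)

# Example call with hypothetical parameter values (units left schematic).
v = enzyme_rate(S=2.0, T=298.15, pH=4.5, Vmax_ref=10.0, Km=1.5,
                Ea=40e3, pH_opt=5.0, pH_sen=1.5)
print("rate:", v)
```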

  9. Modeling of state parameter and hardening function for granular materials

    Institute of Scientific and Technical Information of China (English)

    彭芳乐; 李建中

    2004-01-01

    A modified plastic strain energy was proposed as a hardening state parameter for dense sand, based on the results from a series of drained plane strain tests on saturated dense Japanese Toyoura sand with precise stress and strain measurements along many stress paths. In addition, a unique hardening function between the plastic strain energy and the instantaneous stress path was also presented, which was independent of stress history. The proposed state parameter and hardening function were directly verified by a simple numerical integration method. It is shown that the proposed hardening function is independent of stress history and stress path and is appropriate for use as the hardening rule in constitutive modeling of dense sand, and that it is also capable of simulating the effects of stress history and stress path on the deformation characteristics of dense sand.

  10. Parameter Estimation of the Extended Vasiček Model

    Directory of Open Access Journals (Sweden)

    Sanae RUJIVAN

    2010-01-01

    In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function of discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the approximate log-likelihood function. The convergence of the AMLEs to the true maximum likelihood estimators is obtained by increasing the number of terms in the expansions with a small time step size.
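
    For the basic (constant-parameter) Vasiček model the transition density is exactly Gaussian, so maximum likelihood can be written down without the expansion needed for the extended model; the sketch below uses that special case to illustrate the estimation principle. Parameter values, the simulated sample, and the optimizer choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, r, dt):
    """Exact Gaussian transition log-likelihood of the basic Vasicek model
    dr = kappa*(theta - r)*dt + sigma*dW (the extended model in the paper
    requires a density expansion instead)."""
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    e = np.exp(-kappa * dt)
    mean = theta + (r[:-1] - theta) * e
    var = sigma**2 * (1.0 - e**2) / (2.0 * kappa)
    return 0.5 * np.sum(np.log(2.0 * np.pi * var) + (r[1:] - mean) ** 2 / var)

# Simulate a discretely sampled path, then recover the parameters by MLE.
rng = np.random.default_rng(2)
kappa, theta, sigma, dt, n = 0.8, 0.05, 0.02, 1.0 / 12.0, 600
r = np.empty(n); r[0] = 0.03
for i in range(n - 1):
    e = np.exp(-kappa * dt)
    sd = sigma * np.sqrt((1.0 - e**2) / (2.0 * kappa))
    r[i + 1] = theta + (r[i] - theta) * e + sd * rng.normal()

fit = minimize(neg_loglik, x0=[0.5, 0.04, 0.01], args=(r, dt), method="Nelder-Mead")
print("MLE (kappa, theta, sigma):", fit.x)
```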

  11. Computational fluid dynamics model of WTP clearwell: Evaluation of critical parameters influencing model performance

    Energy Technology Data Exchange (ETDEWEB)

    Ducoste, J.; Brauer, R.

    1999-07-01

    An analysis of a computational fluid dynamics (CFD) model for a water treatment plant clearwell was performed. Model parameters were analyzed to determine their influence on the effluent residence time distribution (RTD) function. The study revealed that several model parameters could have a significant impact on the shape of the RTD function and consequently raise the level of uncertainty in accurate predictions of clearwell hydraulics. The study also revealed that although the modeler could select a distribution of values for some of the model parameters, most of these values can be ruled out by requiring the difference between the calculated and theoretical hydraulic retention times to be within 5% of the theoretical value.

  12. Strong parameter renormalization from optimum lattice model orbitals

    Science.gov (United States)

    Brosco, Valentina; Ying, Zu-Jian; Lorenzana, José

    2017-01-01

    Which is the best single-particle basis to express a Hubbard-like lattice model? A rigorous variational answer to this question leads to equations the solution of which depends in a self-consistent manner on the lattice ground state. Contrary to naive expectations, for arbitrary small interactions, the optimized orbitals differ from the noninteracting ones, leading also to substantial changes in the model parameters as shown analytically and in an explicit numerical solution for a simple double-well one-dimensional case. At strong coupling, we obtain the direct exchange interaction with a very large renormalization with important consequences for the explanation of ferromagnetism with model Hamiltonians. Moreover, in the case of two atoms and two fermions we show that the optimization equations are closely related to reduced density-matrix functional theory, thus establishing an unsuspected correspondence between continuum and lattice approaches.

  13. Multi-parameter models of innovation diffusion on complex networks

    CERN Document Server

    McCullen, Nicholas J; Bale, Catherine S E; Foxon, Tim J; Gale, William F

    2012-01-01

    A model, applicable to a range of innovation diffusion applications with a strong peer to peer component, is developed and studied, along with methods for its investigation and analysis. A particular application is to individual households deciding whether to install an energy efficiency measure in their home. The model represents these individuals as nodes on a network, each with a variable representing their current state of adoption of the innovation. The motivation to adopt is composed of three terms, representing personal preference, an average of each individual's network neighbours' states and a system average, which is a measure of the current social trend. The adoption state of a node changes if a weighted linear combination of these factors exceeds some threshold. Numerical simulations have been carried out, computing the average uptake after a sufficient number of time-steps over many realisations at a range of model parameter values, on various network topologies, including random (Erdos-Renyi), s...
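
    A minimal simulation of the threshold rule described above is sketched below: each node's motivation is a weighted combination of personal preference, the average state of its network neighbours, and the population average, and a node adopts once the motivation exceeds a threshold. The Erdos-Renyi network, the weights, the threshold, and the assumption that adoption is irreversible are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 0.05                                       # Erdos-Renyi network, illustrative sizes
A = (rng.uniform(size=(n, n)) < p).astype(float)
A = np.triu(A, 1); A = A + A.T                         # symmetric adjacency, no self-loops
deg = np.maximum(A.sum(axis=1), 1.0)

pref = rng.uniform(size=n)                             # personal preference term
state = (rng.uniform(size=n) < 0.05).astype(float)     # a few initial adopters
w = np.array([0.4, 0.4, 0.2])                          # weights: personal, neighbours, social trend
threshold = 0.5

for step in range(50):
    neigh_avg = A @ state / deg
    social_avg = state.mean()
    motivation = w[0] * pref + w[1] * neigh_avg + w[2] * social_avg
    state = np.maximum(state, (motivation > threshold).astype(float))  # adoption kept irreversible here

print("final uptake fraction:", state.mean())
```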

  14. Reconstructing parameters of spreading models from partial observations

    CERN Document Server

    Lokhov, Andrey Y

    2016-01-01

    Spreading processes are often modelled as stochastic dynamics occurring on top of a given network with edge weights corresponding to the transmission probabilities. Knowledge of veracious transmission probabilities is essential for prediction, optimization, and control of diffusion dynamics. Unfortunately, in most cases the transmission rates are unknown and need to be reconstructed from the spreading data. Moreover, in realistic settings it is impossible to monitor the state of each node at every time, and thus the data is highly incomplete. We introduce an efficient dynamic message-passing algorithm, which is able to reconstruct parameters of the spreading model given only partial information on the activation times of nodes in the network. The method is generalizable to a large class of dynamic models, as well as to the case of temporal graphs.

  15. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  16. Connecting Global to Local Parameters in Barred Galaxy Models

    Indian Academy of Sciences (India)

    N. D. Caranicolas

    2002-09-01

    We present connections between global and local parameters in a realistic dynamical model, describing motion in a barred galaxy. Expanding the global model in the vicinity of a stable Lagrange point, we find the potential of a two-dimensional perturbed harmonic oscillator, which describes local motion near the centre of the global model. The frequencies of oscillations and the coefficients of the perturbing terms are not arbitrary but are connected to the mass, the angular rotation velocity, the scale length and the strength of the galactic bar. The local energy is also connected to the global energy. A comparison of the properties of orbits in the global and local potential is also made.

  17. Simple parameter estimation for complex models — Testing evolutionary techniques on 3-dimensional biogeochemical ocean models

    Science.gov (United States)

    Mattern, Jann Paul; Edwards, Christopher A.

    2017-01-01

    Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.

  18. Parameter sensitivity in satellite-gravity-constrained geothermal modelling

    Science.gov (United States)

    Pastorutti, Alberto; Braitenberg, Carla

    2017-04-01

    The use of satellite gravity data in thermal structure estimates requires identifying the factors that affect the gravity field and are related to the thermal characteristics of the lithosphere. We propose a set of forward-modelled synthetics, investigating the model response in terms of heat flow, temperature, and gravity effect at satellite altitude. The sensitivity analysis concerns the parameters involved, such as heat production, thermal conductivity, density, and their temperature dependence. We discuss the effect of the horizontal smoothing due to heat conduction, the superposition of the bulk thermal effect of near-surface processes (e.g. advection in groundwater and permeable faults, paleoclimatic effects, blanketing by sediments), and the out-of-equilibrium conditions due to tectonic transients. All of them have the potential to distort the gravity-derived estimates. We find that the temperature-conductivity relationship has a small effect, relative to other parameter uncertainties, on the modelled temperature depth variation, surface heat flow, and thermal lithosphere thickness. We conclude that global gravity is useful for geothermal studies.

  19. Optimization of Experimental Model Parameter Identification for Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Rosario Morello

    2013-09-01

    The smart grid approach is envisioned to take advantage of all available modern technologies in transforming the current power system to provide benefits to all stakeholders in the fields of efficient energy utilisation and of wide integration of renewable sources. Energy storage systems could help to solve some issues that stem from renewable energy usage, such as stabilizing intermittent energy production, improving power quality, and mitigating power peaks. With the integration of energy storage systems into the smart grids, their accurate modeling becomes a necessity in order to gain robust real-time control of the network, in terms of stability and energy supply forecasting. In this framework, this paper proposes a procedure to identify the values of the battery model parameters in order to best fit experimental data and to integrate it, along with models of energy sources and electrical loads, into a complete framework which represents a real-time smart grid management system. The proposed method is based on a hybrid optimisation technique, which makes combined use of a stochastic and a deterministic algorithm, with a low computational burden, and can therefore be repeated over time in order to account for parameter variations due to the battery's age and usage.

  20. Multiobjective Automatic Parameter Calibration of a Hydrological Model

    Directory of Open Access Journals (Sweden)

    Donghwi Jung

    2017-03-01

    This study proposes variable balancing approaches for the exploration (diversification) and exploitation (intensification) of the non-dominated sorting genetic algorithm-II (NSGA-II) with simulated binary crossover (SBX) and polynomial mutation (PM) in the multiobjective automatic parameter calibration of a lumped hydrological model, the HYMOD model. Two objectives, minimizing the percent bias and minimizing three peak flow differences, are considered in the calibration of the six parameters of the model. The proposed balancing approaches, which migrate the focus between exploration and exploitation over generations by varying the crossover and mutation distribution indices of SBX and PM, respectively, are compared with traditional static balancing approaches (in which the two indices are fixed during optimization) in a benchmark hydrological calibration problem for the Leaf River (1950 km2) near Collins, Mississippi. Three performance metrics (solution quality, spacing, and convergence) are used to quantify and compare the quality of the Pareto solutions obtained by the two different balancing approaches. The variable balancing approaches that migrate the focus of exploration and exploitation differently for SBX and PM outperformed the other methods.
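
    The sketch below shows the standard simulated binary crossover (SBX) operator together with one possible "variable balancing" schedule in which the distribution index grows linearly over generations, so offspring are spread widely early (exploration) and stay close to their parents later (exploitation). The linear schedule and all parameter values are illustrative assumptions; the paper's actual schedules may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

def sbx_pair(p1, p2, eta):
    """Simulated binary crossover for one pair of real-coded parent vectors."""
    u = rng.uniform(size=p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def eta_schedule(gen, n_gen, eta_min=2.0, eta_max=30.0):
    """Variable balancing: small eta early (wide, explorative offspring),
    large eta late (offspring close to parents, i.e. exploitation)."""
    return eta_min + (eta_max - eta_min) * gen / max(n_gen - 1, 1)

p1, p2 = np.array([0.2, 1.0, 5.0]), np.array([0.8, 3.0, 9.0])
for gen in (0, 25, 49):
    child, _ = sbx_pair(p1, p2, eta_schedule(gen, 50))
    print(f"generation {gen:2d}, eta={eta_schedule(gen, 50):5.1f}, child: {child}")
```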

  1. Constraints on the parameters of the Left Right Mirror Model

    CERN Document Server

    Cerón, V E; Díaz-Cruz, J L; Maya, M; Ceron, Victoria E.; Cotti, Umberto; Maya, Mario

    1998-01-01

    We study some phenomenological constraints on the parameters of a left-right model with mirror fermions (LRMM) that solves the strong CP problem. In particular, we evaluate the contribution of mirror neutrinos to the invisible Z decay width (Γ_Z^inv), and we find that the present experimental value of Γ_Z^inv can be used to place an upper bound on the Z-Z' mixing angle that is consistent with limits obtained previously from other low-energy observables. In this model the charged fermions that correspond to the standard model (SM) mix with their mirror counterparts. This mixing, simultaneously with the Z-Z' one, leads to modifications of the Γ(Z → f f̄) decay width. By comparing with LEP data, we obtain bounds on the standard-mirror lepton mixing angles. We also find that the bottom quark mixing parameters can be chosen to fit the experimental values of R_b, and the resulting values for the Z-Z' mixing angle do not agree with previous bounds. However, this disagreement disappears if on...

  2. Application of a free parameter model to plastic scintillation samples

    Energy Technology Data Exchange (ETDEWEB)

    Tarancon Sanz, Alex, E-mail: alex.tarancon@ub.edu [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Kossert, Karsten, E-mail: Karsten.Kossert@ptb.de [Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig (Germany)

    2011-08-21

    In liquid scintillation (LS) counting, the CIEMAT/NIST efficiency tracing method and the triple-to-double coincidence ratio (TDCR) method have proved their worth for reliable activity measurements of a number of radionuclides. In this paper, an extended approach to apply a free-parameter model to samples containing a mixture of solid plastic scintillation microspheres and radioactive aqueous solutions is presented. Several beta-emitting radionuclides were measured in a TDCR system at PTB. For the application of the free parameter model, the energy loss in the aqueous phase must be taken into account, since this portion of the particle energy does not contribute to the creation of scintillation light. The energy deposit in the aqueous phase is determined by means of Monte Carlo calculations applying the PENELOPE software package. To this end, great efforts were made to model the geometry of the samples. Finally, a new geometry parameter was defined, which was determined by means of a tracer radionuclide with known activity. This makes the analysis of experimental TDCR data of other radionuclides possible. The deviations between the determined activity concentrations and reference values were found to be lower than 3%. The outcome of this research work is also important for a better understanding of liquid scintillation counting. In particular the influence of (inverse) micelles, i.e. the aqueous spaces embedded in the organic scintillation cocktail, can be investigated. The new approach makes clear that it is important to take the energy loss in the aqueous phase into account. In particular for radionuclides emitting low-energy electrons (e.g. M-Auger electrons from ¹²⁵I), this effect can be very important.

  3. Model parameters for representative wetland plant functional groups

    Science.gov (United States)

    Williams, Amber S.; Kiniry, James R.; Mushet, David M.; Smith, Loren M.; McMurry, Scott T.; Attebury, Kelly; Lang, Megan; McCarty, Gregory W.; Shaffer, Jill A.; Effland, William R.; Johnson, Mari-Vaughn V.

    2017-01-01

    Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and realistic simulation of the upland and wetland plant growth cycles. Objectives of this study were to quantify leaf area index (LAI), light extinction coefficient (k), and plant nitrogen (N), phosphorus (P), and potassium (K) concentrations in natural stands of representative plant species for some major plant functional groups in the United States. Functional groups in this study were based on these parameters and plant growth types to enable process-based modeling. We collected data at four locations representing some of the main wetland regions of the United States. At each site, we collected on-the-ground measurements of fraction of light intercepted, LAI, and dry matter within the 2013–2015 growing seasons. Maximum LAI and k variables showed noticeable variations among sites and years, while overall averages and functional group averages give useful estimates for multisite simulation modeling. Variation within each species gives an indication of what can be expected in such natural ecosystems. For P and K, the concentrations from highest to lowest were spikerush (Eleocharis macrostachya), reed canary grass (Phalaris arundinacea), smartweed (Polygonum spp.), cattail (Typha spp.), and hardstem bulrush (Schoenoplectus acutus). Spikerush had the highest N concentration, followed by smartweed, bulrush, reed canary grass, and then cattail. These parameters will be useful for the actual wetland species measured and for the wetland plant functional groups they represent. These parameters and the associated process-based models offer promise as valuable tools for evaluating environmental benefits of wetlands and for evaluating impacts of various agronomic practices in

  4. Parameter optimization in differential geometry based solvation models.

    Science.gov (United States)

    Wang, Bao; Wei, G W

    2015-10-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules.

  5. Parameter Estimation in Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool...

  6. Allowed Parameter Regions for a Tree-Level Inflation Model

    Institute of Scientific and Technical Information of China (English)

    MENG Xin-He

    2001-01-01

    Early universe inflation is well known as a promising theory to explain the origin of the large-scale structure of the universe and to solve pressing problems of the early universe. For a reasonable inflation model, the potential during inflation must be very flat, at least in the direction of the inflaton. To construct the inflaton potential, all the known related astrophysical observations should be included. For a general tree-level hybrid inflation potential, which has not been fully discussed so far, we show how the parameters can be constrained via the observed astrophysical data to the expected accuracy while remaining consistent with cosmological requirements.

  7. Empirically modelled Pc3 activity based on solar wind parameters

    Directory of Open Access Journals (Sweden)

    T. Raita

    2010-09-01

    It is known that under certain solar wind (SW)/interplanetary magnetic field (IMF) conditions (e.g. high SW speed, low cone angle) the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models, with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density-related parameters, such as the dynamic pressure of the SW or the standoff distance of the magnetopause, work equally well in the model. The disappearance of Pc3s during low-density events can have at least four explanations according to the existing upstream wave theory: (1) the ion-cyclotron resonance that generates the upstream ultra-low-frequency waves pauses in the absence of protons; (2) the bow shock weakens, implying less efficient reflection; (3) the SW becomes sub-Alfvénic and hence is not able to sweep back the waves propagating upstream at the Alfvén speed; and (4) the standoff distances of the magnetopause (and of the bow shock) increase. Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid-latitude Pc3 activity predominantly through...

  8. Genetic parameters for tunisian holsteins using a test-day random regression model.

    Science.gov (United States)

    Hammami, H; Rekik, B; Soyeurt, H; Ben Gara, A; Gengler, N

    2008-05-01

    Genetic parameters of milk, fat, and protein yields were estimated in the first 3 lactations for registered Tunisian Holsteins. Data included 140,187; 97,404; and 62,221 test-day production records collected on 22,538; 15,257; and 9,722 first-, second-, and third-parity cows, respectively. Records were of cows calving from 1992 to 2004 in 96 herds. (Co)variance components were estimated by Bayesian methods and a 3-trait-3-lactation random regression model. Gibbs sampling was used to obtain posterior distributions. The model included herd x test date, age x season of calving x stage of lactation [classes of 25 days in milk (DIM)], production sector x stage of lactation (classes of 5 DIM) as fixed effects, and random regression coefficients for additive genetic, permanent environmental, and herd-year of calving effects, which were defined as modified constant, linear, and quadratic Legendre coefficients. Heritability estimates for 305-d milk, fat and protein yields were moderate (0.12 to 0.18) and in the same range of parameters estimated in management systems with low to medium production levels. Heritabilities of test-day milk and protein yields for selected DIM were higher in the middle than at the beginning or the end of lactation. Inversely, heritabilities of fat yield were high at the peripheries of lactation. Genetic correlations among 305-d yield traits ranged from 0.50 to 0.86. The largest genetic correlation was observed between the first and second lactation, potentially due to the limited expression of genetic potential of superior cows in later lactations. Results suggested a lack of adaptation under the local management and climatic conditions. Results should be useful to implement a BLUP evaluation for the Tunisian cow population; however, results also indicated that further research focused on data quality might be needed.
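
    In random regression test-day models of this kind, the covariates multiplying the additive genetic, permanent environmental, and herd-year regression coefficients are Legendre polynomials evaluated at standardized days in milk (DIM). The sketch below builds such covariates; the DIM range, the polynomial order, and the omission of the usual orthonormal scaling are illustrative simplifications rather than the exact specification used in the study.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, dim_min=5, dim_max=305, order=2):
    """Legendre polynomial covariates at standardized DIM, as used for the
    random regression coefficients of test-day models (illustrative)."""
    x = 2.0 * (dim - dim_min) / (dim_max - dim_min) - 1.0    # map DIM to [-1, 1]
    # Column j holds the Legendre polynomial of degree j evaluated at x.
    return np.column_stack([legendre.legval(x, np.eye(order + 1)[j])
                            for j in range(order + 1)])

Z = legendre_covariates(np.array([5, 60, 155, 250, 305]))
print(Z)   # one row per test day; columns multiply the animal's regression coefficients
```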

  9. Modelling of bio-optical parameters of open ocean waters

    Directory of Open Access Journals (Sweden)

    Vadim N. Pelevin

    2001-12-01

    An original method for estimating the concentration of chlorophyll pigments, the absorption of yellow substance, and the absorption of suspended matter without pigments and yellow substance in detritus, using spectral diffuse attenuation coefficient for downwelling irradiance and irradiance reflectance data, has been applied to sea waters of different types in the open ocean (case 1 waters). Using the effective numerical single-parameter classification with the water type optical index m as a parameter over the whole range of open ocean waters, the calculations have been carried out and the light absorption spectra of sea waters tabulated. These spectra are used to optimize the absorption models and thus to estimate the concentrations of the main admixtures in sea water. The value of m can be determined from direct measurements of the downward irradiance attenuation coefficient at 500 nm or calculated from remote sensing data using the regressions given in the article. The sea water composition can then be readily estimated from the tables given for any open ocean area if that one parameter m characterizing the basin is known.

  10. Application of Parameter Estimation for Diffusions and Mixture Models

    DEFF Research Database (Denmark)

    Nolsøe, Kim

    with the posterior score function. From an application point of view this methodology is easy to apply, since the optimal estimating function G(θ; Xt1, ..., Xtn) is equal to the classical optimal estimating function plus a correction term which takes the prior information into account. The methodology is particularly useful in situations where prior information is available and only few observations are present. The resulting estimators in some sense have better properties than the classical estimators. The second idea is to formulate Michael Sørensen's method of "prediction-based estimating functions" for measurement ... from a posterior distribution. The sampling algorithm is constructed from a Markov chain which allows the dimension of each sample to vary; this is obtained by utilizing the reversible jump methodology proposed by Peter Green. Each sample is constructed such that the corresponding structures...

  11. Combined Estimation of Hydrogeologic Conceptual Model, Parameter, and Scenario Uncertainty with Application to Uranium Transport at the Hanford Site 300 Area

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Rockhold, Mark L.; Neuman, Shlomo P.; Cantrell, Kirk J.

    2007-07-30

    This report to the Nuclear Regulatory Commission (NRC) describes the development and application of a methodology to systematically and quantitatively assess predictive uncertainty in groundwater flow and transport modeling that considers the combined impact of hydrogeologic uncertainties associated with the conceptual-mathematical basis of a model, model parameters, and the scenario to which the model is applied. The methodology is based on an extension of a Maximum Likelihood implementation of Bayesian Model Averaging. Model uncertainty is represented by postulating a discrete set of alternative conceptual models for a site with associated prior model probabilities that reflect a belief about the relative plausibility of each model based on its apparent consistency with available knowledge and data. Posterior model probabilities are computed and parameter uncertainty is estimated by calibrating each model to observed system behavior; prior parameter estimates are optionally included. Scenario uncertainty is represented as a discrete set of alternative future conditions affecting boundary conditions, source/sink terms, or other aspects of the models, with associated prior scenario probabilities. A joint assessment of uncertainty results from combining model predictions computed under each scenario using as weights the posterior model and prior scenario probabilities. The uncertainty methodology was applied to modeling of groundwater flow and uranium transport at the Hanford Site 300 Area. Eight alternative models representing uncertainty in the hydrogeologic and geochemical properties as well as the temporal variability were considered. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. The scenario alternatives were implemented in the models through the boundary conditions. Results demonstrate the feasibility of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow...
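
    The combination of model and scenario uncertainty described above can be illustrated with a small weighting calculation: posterior model probabilities are derived from per-model information-criterion values and prior model probabilities, and predictions are then averaged with weights equal to posterior model probability times prior scenario probability. The numbers, the generic information criterion, and the two-scenario setup below are hypothetical, not values from the Hanford study.

```python
import numpy as np

# Hypothetical information-criterion values (e.g. KIC/BIC-like) for three models
# and uniform prior model probabilities.
ic = np.array([210.0, 214.0, 208.0])
prior_model = np.array([1/3, 1/3, 1/3])
w = prior_model * np.exp(-0.5 * (ic - ic.min()))
post_model = w / w.sum()                          # posterior model probabilities

prior_scenario = np.array([0.7, 0.3])             # two hypothetical river-stage scenarios
pred = np.array([[1.2, 1.5],                      # prediction of each model under each scenario
                 [1.0, 1.4],
                 [1.3, 1.6]])

# Joint weighting: posterior model probability x prior scenario probability.
weights = post_model[:, None] * prior_scenario[None, :]
print("posterior model probabilities:", post_model)
print("averaged prediction:", np.sum(weights * pred))
```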

  12. Breakdown parameter for kinetic modeling of multiscale gas flows.

    Science.gov (United States)

    Meng, Jianping; Dongari, Nishanth; Reese, Jason M; Zhang, Yonghao

    2014-06-01

    Multiscale methods built purely on the kinetic theory of gases provide information about the molecular velocity distribution function. It is therefore both important and feasible to establish new breakdown parameters for assessing the appropriateness of a fluid description at the continuum level by utilizing kinetic information rather than macroscopic flow quantities alone. We propose a new kinetic criterion to indirectly assess the errors introduced by a continuum-level description of the gas flow. The analysis, which includes numerical demonstrations, focuses on the validity of the Navier-Stokes-Fourier equations and corresponding kinetic models and reveals that the new criterion can consistently indicate the validity of continuum-level modeling in both low-speed and high-speed flows at different Knudsen numbers.

  13. Structural Breaks, Parameter Stability and Energy Demand Modeling in Nigeria

    Directory of Open Access Journals (Sweden)

    Olusegun A. Omisakin

    2012-08-01

    This paper extends previous studies in modeling and estimating energy demand functions for both gasoline and kerosene petroleum products for Nigeria from 1977 to 2008. In contrast to earlier studies on Nigeria and other developing countries, this study specifically tests for the possibility of structural breaks/regime shifts and parameter instability in the energy demand functions using more recent and robust techniques. In addition, the study considers an alternative model specification which primarily captures the price-income interaction effects on both gasoline and kerosene demand functions. While the conventional residual-based cointegration tests employed fail to identify any meaningful long-run relationship in both functions, the Gregory-Hansen structural break cointegration approach confirms the cointegration relationships despite the breakpoints. Both functions are also found to be stable over the period studied. The elasticity estimates also follow the a priori expectation, being inelastic in both the long and short run for the two functions.

  14. A novel criterion for determination of material model parameters

    Science.gov (United States)

    Andrade-Campos, A.; de-Carvalho, R.; Valente, R. A. F.

    2011-05-01

    Parameter identification problems have emerged due to the increasing demand for precision in the numerical results obtained by Finite Element Method (FEM) software. High result precision can only be obtained with reliable input data and robust numerical techniques. The determination of parameters should always be performed by confronting numerical and experimental results, leading to the minimum difference between them. However, the success of this task depends on the specification of the cost/objective function, defined as the difference between the experimental and the numerical results. Recently, various objective functions have been formulated to assess the errors between the experimental and computed data (Lin et al., 2002; Cao and Lin, 2008; among others). The objective functions should be able to efficiently lead the optimisation process. An ideal objective function should have the following properties: (i) all the experimental data points on a curve and all experimental curves should have an equal opportunity to be optimised; and (ii) different units and/or the number of curves in each sub-objective should not affect the overall performance of the fitting. These two criteria should be achieved without manually choosing the weighting factors. However, for some non-analytical specific problems, this is very difficult in practice. Null values of experimental or numerical data also make the task difficult. In this work, a novel objective function for constitutive model parameter identification is presented. It is a generalization of the work of Cao and Lin, and it is suitable for all kinds of constitutive models and mechanical tests, including cyclic tests and Bauschinger tests with null values.
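
    A rough sketch of the kind of normalization such an objective function needs is given below: residuals on each curve are scaled by that curve's characteristic magnitude and averaged over its own points, so curves with different units and different numbers of points contribute comparably without hand-tuned weights. This is only an illustrative stand-in in the spirit of Cao-and-Lin-type objectives, not the function proposed in the paper.

```python
import numpy as np

def objective(exp_curves, num_curves):
    """Dimensionless misfit over several test curves: residuals on each curve are
    scaled by that curve's characteristic magnitude and averaged over its own
    points (illustrative normalization, not the paper's exact formulation)."""
    total = 0.0
    for exp, num in zip(exp_curves, num_curves):
        exp, num = np.asarray(exp, float), np.asarray(num, float)
        scale = np.max(np.abs(exp)) or 1.0        # per-curve magnitude; guards all-zero curves
        total += np.mean(((num - exp) / scale) ** 2)
    return total / len(exp_curves)

# Two curves with very different units (e.g. stress in MPa and dimensionless strain).
exp_curves = [np.array([0.0, 100.0, 180.0, 240.0]), np.array([0.0, 0.02, 0.05])]
num_curves = [np.array([5.0,  95.0, 185.0, 230.0]), np.array([0.0, 0.025, 0.049])]
print("objective:", objective(exp_curves, num_curves))
```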

  15. Standard model parameters and the search for new physics

    Energy Technology Data Exchange (ETDEWEB)

    Marciano, W.J.

    1988-04-01

    In these lectures, my aim is to present an up-to-date status report on the standard model and some key tests of electroweak unification. Within that context, I also discuss how and where hints of new physics may emerge. To accomplish those goals, I have organized my presentation as follows: I discuss the standard model parameters with particular emphasis on the gauge coupling constants and vector boson masses. Examples of new physics appendages are also briefly commented on. In addition, because these lectures are intended for students and thus somewhat pedagogical, I have included an appendix on dimensional regularization and a simple computational example that employs that technique. Next, I focus on weak charged current phenomenology. Precision tests of the standard model are described and up-to-date values for the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix parameters are presented. Constraints implied by those tests for a 4th generation, supersymmetry, extra Z′ bosons, and compositeness are also discussed. I discuss weak neutral current phenomenology and the extraction of sin²θ_W from experiment. The results presented there are based on a recently completed global analysis of all existing data. I have chosen to concentrate that discussion on radiative corrections, the effect of a heavy top quark mass, and implications for grand unified theories (GUTs). The potential for further experimental progress is also commented on. I depart from the narrowest version of the standard model and discuss effects of neutrino masses and mixings. I have chosen to concentrate on oscillations, the Mikheyev-Smirnov-Wolfenstein (MSW) effect, and electromagnetic properties of neutrinos. On the latter topic, I will describe some recent work on resonant spin-flavor precession. Finally, I conclude with a prospectus on hopes for the future. 76 refs.

  16. Optimization routine for identification of model parameters in soil plasticity

    Science.gov (United States)

    Mattsson, Hans; Klisinski, Marek; Axelsson, Kennet

    2001-04-01

    The paper presents an optimization routine especially developed for the identification of model parameters in soil plasticity on the basis of different soil tests. Main focus is put on the mathematical aspects and the experience from application of this optimization routine. Mathematically, for the optimization, an objective function and a search strategy are needed. Some alternative expressions for the objective function are formulated. They capture the overall soil behaviour and can be used in a simultaneous optimization against several laboratory tests. Two different search strategies, Rosenbrock's method and the Simplex method, both belonging to the category of direct search methods, are utilized in the routine. Direct search methods have generally proved to be reliable and their relative simplicity make them quite easy to program into workable codes. The Rosenbrock and simplex methods are modified to make the search strategies as efficient and user-friendly as possible for the type of optimization problem addressed here. Since these search strategies are of a heuristic nature, which makes it difficult (or even impossible) to analyse their performance in a theoretical way, representative optimization examples against both simulated experimental results as well as performed triaxial tests are presented to show the efficiency of the optimization routine. From these examples, it has been concluded that the optimization routine is able to locate a minimum with a good accuracy, fast enough to be a very useful tool for identification of model parameters in soil plasticity.
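
    The sketch below shows the general shape of such an identification loop using the Nelder-Mead simplex (one of the two direct-search strategies discussed) from scipy.optimize against a synthetic stress-strain curve. The hyperbolic hardening law, the parameter names, and the noise level are assumptions chosen only to make the example self-contained.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "triaxial-type" curve generated with a simple hyperbolic law.
strain = np.linspace(0.0, 0.1, 40)

def model_stress(params, eps):
    q_ult, a = params                     # asymptotic stress and stiffness-scale parameter
    return q_ult * eps / (a + eps)        # illustrative hyperbolic stress-strain law

true_params = (300.0, 0.01)
obs = model_stress(true_params, strain) + np.random.default_rng(5).normal(0.0, 2.0, strain.size)

def sse(params):
    # Objective: sum of squared differences between simulated and "measured" stress.
    return np.sum((model_stress(params, strain) - obs) ** 2)

fit = minimize(sse, x0=[100.0, 0.05], method="Nelder-Mead")
print("identified parameters:", fit.x)
```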

  17. Variational methods to estimate terrestrial ecosystem model parameters

    Science.gov (United States)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is taken up by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Soil chemistry and a non-negligible amount of time then transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVar) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.

  18. A parameter model for dredge plume sediment source terms

    Science.gov (United States)

    Decrop, Boudewijn; De Mulder, Tom; Toorman, Erik; Sas, Marc

    2017-01-01

    , which is not available in all situations. For example, to allow correct representation of overflow plume dispersion in a real-time forecasting model, a fast assessment of the near-field behaviour is needed. For this reason, a semi-analytical parameter model has been developed that reproduces the near-field sediment dispersion obtained with the CFD model in a relatively accurate way. In this paper, this so-called grey-box model is presented.

  19. Pressure pulsation in roller pumps: a validated lumped parameter model.

    Science.gov (United States)

    Moscato, Francesco; Colacino, Francesco M; Arabia, Maurizio; Danieli, Guido A

    2008-11-01

    During open-heart surgery, roller pumps are often used to maintain the circulation of blood through the patient's body. They present numerous key features, but they suffer from several limitations: (a) they normally deliver uncontrolled pulsatile inlet and outlet pressures; (b) blood damage appears to be greater than that encountered with centrifugal pumps. A lumped parameter mathematical model of a roller pump (Sarns 7000, Terumo CVS, Ann Arbor, MI, USA) was developed to dynamically simulate pressures at the pump inlet and outlet in order to clarify the uncontrolled pulsation mechanism. Inlet and outlet pressures obtained with the mathematical model have been compared with those measured in various operating conditions: different roller rotating speeds, different tube occlusion rates, and different clamping degrees at the pump inlet and outlet. Model results agree with the measured pressure waveforms, whose oscillations are generated by the tube compression/release mechanism during the rollers' engaging and disengaging phases. The average Euclidean error (AEE) was 20 mmHg and 33 mmHg for the inlet and outlet pressure estimates, respectively. The normalized AEE never exceeded 0.16. The developed model can be exploited for designing roller pumps with improved performance aimed at reducing the undesired pressure pulsation.

  20. Parameter identification and global sensitivity analysis of Xinanjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng SONG

    2013-01-01

    Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computational cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
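
    The two-step idea, qualitative screening followed by variance-based indices computed cheaply on a surrogate, can be sketched as follows. Step 1 estimates Morris-style mean absolute elementary effects with a simple one-at-a-time design; step 2 estimates first-order Sobol indices by double-loop Monte Carlo, where an inexpensive analytic test function stands in for the fitted response surface model. Sample sizes, the test function, and the simplified designs are illustrative assumptions, not the RSMSobol implementation.

```python
import numpy as np

rng = np.random.default_rng(6)

def model(x):
    # Stand-in for an expensive hydrological model with 4 parameters in [0, 1];
    # the last parameter is deliberately inert to show what screening detects.
    return 4.0 * x[..., 0] + 2.0 * x[..., 1] ** 2 + 0.2 * x[..., 2] + 0.0 * x[..., 3]

d, n_traj, delta = 4, 30, 0.25

# --- Step 1: Morris-style screening (mean absolute elementary effects, mu*) ---
ee = np.zeros((n_traj, d))
for t in range(n_traj):
    x = rng.uniform(0.0, 1.0 - delta, size=d)
    for i in range(d):
        x_pert = x.copy(); x_pert[i] += delta
        ee[t, i] = abs(model(x_pert) - model(x)) / delta
print("Morris mu*:", ee.mean(axis=0))        # parameters with tiny mu* can be fixed

# --- Step 2: first-order Sobol indices by double-loop Monte Carlo on the cheap surrogate ---
n_outer, n_inner = 200, 200
var_y = model(rng.uniform(size=(n_outer * n_inner, d))).var()
S1 = np.zeros(d)
for i in range(d):
    xi = rng.uniform(size=n_outer)
    cond_mean = np.empty(n_outer)
    for k in range(n_outer):
        sample = rng.uniform(size=(n_inner, d)); sample[:, i] = xi[k]
        cond_mean[k] = model(sample).mean()
    S1[i] = cond_mean.var() / var_y           # Var(E[Y | X_i]) / Var(Y)
print("first-order Sobol indices:", S1)
```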

  1. Uniqueness, scale, and resolution issues in groundwater model parameter identification

    Directory of Open Access Journals (Sweden)

    Tian-chyi J. Yeh

    2015-07-01

    This paper first visits uniqueness, scale, and resolution issues in groundwater flow forward modeling problems. It then makes the point that non-unique solutions to groundwater flow inverse problems arise from a lack of information necessary to make the problems well defined. Subsequently, it presents the necessary conditions for a well-defined inverse problem. They are full specifications of (1) flux boundaries and sources/sinks, and (2) heads everywhere in the domain at at least three times (one of which is t = 0), with head changes everywhere at those times being nonzero for transient flow. Numerical experiments are presented to corroborate the fact that, once the necessary conditions are met, the inverse problem has a unique solution. We also demonstrate that measurement noise, instability, and sensitivity are issues related to solution techniques rather than to the inverse problems themselves. In addition, we show that a mathematically well-defined inverse problem, based on an equivalent homogeneous or a layered conceptual model, may yield physically incorrect and scenario-dependent parameter values. These issues are attributed to inconsistency between the scale of the head observed and that implied by these models. Such issues can be reduced only if a sufficiently large number of observation wells are used in the equivalent homogeneous domain or in each layer. With a large number of wells, we then show that an increase in parameterization can lead to a higher-resolution depiction of heterogeneity if an appropriate inverse methodology is used. Furthermore, we illustrate that, using the same number of wells, a highly parameterized model in conjunction with hydraulic tomography can yield better characterization of the aquifer and minimize the scale- and scenario-dependent problems. Lastly, benefits of the highly parameterized model and hydraulic tomography are tested according to their ability to improve predictions of aquifer responses induced by...

  2. Finding model parameters: Genetic algorithms and the numerical modelling of quartz luminescence

    Energy Technology Data Exchange (ETDEWEB)

    Adamiec, Grzegorz [Department of Radioisotopes, Institute of Physics, Silesian University of Technology, ul. Krzywoustego 2, 44-100 Gliwice (Poland)]. E-mail: grzegorz.adamiec@polsl.pl; Bluszcz, Andrzej [Department of Radioisotopes, Institute of Physics, Silesian University of Technology, ul. Krzywoustego 2, 44-100 Gliwice (Poland); Bailey, Richard [Department of Geography, Royal Holloway, University of London, Egham, Surrey, TW20 0EX (United Kingdom); Garcia-Talavera, Marta [LIBRA, Centro I-D, Campus Miguel Delibes, 47011 Valladolid (Spain)

    2006-08-15

    The paper presents an application of genetic algorithms (GAs) to the problem of finding appropriate parameter values for the numerical simulation of quartz thermoluminescence (TL). We show that with the use of GAs it is possible to achieve a very good match between simulated and experimentally measured characteristics of quartz, for example the thermal activation characteristics of fired quartz. The rate equations of charge transport in the numerical model of luminescence in quartz contain a large number of parameters (trap depths, frequency factors, populations, charge capture probabilities, optical detrapping probabilities, and recombination probabilities). Given that comprehensive models consist of over 10 traps, finding model parameters proves a very difficult task. Manual parameter changes are very time consuming and allow only a limited degree of accuracy. GAs provide a semi-automatic way of finding appropriate parameters.
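
    A minimal sketch of the genetic-algorithm parameter search described above, fitting a toy single-peak model to a noisy measured curve; the toy model, bounds, and GA settings are illustrative assumptions and stand in for the full quartz TL rate equations.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(params, t):
        # Hypothetical stand-in for the luminescence simulation: a Gaussian peak
        amplitude, center, width = params
        return amplitude * np.exp(-0.5 * ((t - center) / width) ** 2)

    t = np.linspace(0.0, 10.0, 200)
    measured = simulate((3.0, 5.0, 1.2), t) + rng.normal(0.0, 0.05, t.size)

    def fitness(params):
        return -np.sum((simulate(params, t) - measured) ** 2)   # higher is better

    bounds = np.array([[0.1, 10.0], [0.0, 10.0], [0.1, 5.0]])
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(50, 3))

    for generation in range(200):
        scores = np.array([fitness(ind) for ind in pop])
        # tournament selection: keep the better of two randomly drawn individuals
        idx = rng.integers(0, len(pop), size=(len(pop), 2))
        parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # uniform crossover between consecutive parents
        mask = rng.random(pop.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation scaled by the parameter range, clipped to the bounds
        children += rng.normal(0.0, 0.05, children.shape) * (bounds[:, 1] - bounds[:, 0])
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("best-fit parameters:", best)
    ```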

  3. Relevant parameters in models of cell division control

    Science.gov (United States)

    Grilli, Jacopo; Osella, Matteo; Kennard, Andrew S.; Lagomarsino, Marco Cosentino

    2017-03-01

    A recent burst of dynamic single-cell data makes it possible to characterize the stochastic dynamics of cell division control in bacteria. Different models were used to propose specific mechanisms, but the links between them are poorly explored. The lack of comparative studies makes it difficult to appreciate how well any particular mechanism is supported by the data. Here, we describe a simple and generic framework in which two common formalisms can be used interchangeably: (i) a continuous-time division process described by a hazard function and (ii) a discrete-time equation describing cell size across generations (where the unit of time is a cell cycle). In our framework, this second process is a discrete-time Langevin equation with simple physical analogues. By perturbative expansion around the mean initial size (or interdivision time), we show how this framework describes a wide range of division control mechanisms, including combinations of time and size control, as well as the constant added size mechanism recently found to capture several aspects of the cell division behavior of different bacteria. As we show by analytical estimates and numerical simulations, the available data are described precisely by the first-order approximation of this expansion, i.e., by a "linear response" regime for the correction of size fluctuations. Hence, a single dimensionless parameter defines the strength and action of the division control against cell-to-cell variability (quantified by a single "noise" parameter). However, the same strength of linear response may emerge from several mechanisms, which are distinguished only by higher-order terms in the perturbative expansion. Our analytical estimate of the sample size needed to distinguish between second-order effects shows that this value is close to but larger than the values of the current datasets. These results provide a unified framework for future studies and clarify the relevant parameters at play in the control of
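
    A short simulation sketch of the discrete-time size-control picture described above, assuming a simple added-size ('adder') rule with Gaussian noise; the mean added size and noise amplitude are illustrative, not fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_generations = 10000
    delta, noise = 1.0, 0.1            # assumed mean added size and division noise
    size_at_birth = np.empty(n_generations)

    s = 1.0
    for k in range(n_generations):
        size_at_birth[k] = s
        # adder rule: each cycle adds roughly delta, independent of birth size
        size_at_division = s + delta + rng.normal(0.0, noise)
        s = 0.5 * size_at_division     # symmetric division

    print("mean birth size:", size_at_birth.mean())
    print("CV of birth size:", size_at_birth.std() / size_at_birth.mean())
    ```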

  4. Constrained optimisation of the parameters for a simple isostatic Moho model

    Science.gov (United States)

    Lane, R. J.

    2010-12-01

    of elevation / bathymetry values (H), Moho depth observation values from the seismic refraction soundings (Tm), the water density value (RHOw), and prior estimates and bounds for the output parameters. A number of different deterministic and stochastic inversion methods were used to derive solutions for the optimisation, enabling an evaluation of the uncertainty and sensitivity of the posterior estimates to be carried out. The output parameters that provided the scaling and vertical positioning of an isostatic model Moho surface that best fitted the seismic refraction Moho depths were found to be in general accord with parameters chosen by others when working in similar geological environments. A reasonable match between the Moho surfaces defined from seismic refraction and isostatic methods suggested that the use of an isostatic model assumption was valid in this instance. Further, the gravity response of the 3D geological map was found to match the observed gravity data after making relatively minor adjustments to the geometry of the Moho surface and the upper crustal basin thicknesses. It was thus concluded that the integrated regional 3D geological understanding of the upper crustal and Moho surfaces, and the related mass density contrasts across these units, was consistent with the observed gravity data.

  5. Sulfadiazine modified PDMS as a model material with the potential for the mitigation of posterior capsule opacification (PCO).

    Science.gov (United States)

    Amoozgar, Bahram; Morarescu, Diana; Sheardown, Heather

    2013-11-01

    Cataract surgery, while the most common surgical procedure performed, leads to posterior capsule opacification in approximately 30% of cases. Transforming growth factor beta 2 (TGF-β2) and matrix metalloproteinases (MMPs) have been shown to play important roles in the cellular processes leading to posterior capsule opacification. Delivery of inhibitors to MMPs may have the potential to inhibit the initial cascade of events that lead to PCO. However, delivery of these molecules via tethering has proven difficult. In this work, sulfadiazine was tethered to polydimethylsiloxane (PDMS) via a polyethylene glycol (PEG) spacer as a potential MMPI mimic. Surface characterization using a variety of methods demonstrated successful modification with the antibiotic. The surfaces were examined with lens epithelial cells to determine their effect on these cellular processes, including cell transdifferentiation and production of extracellular matrix components. The presence of TGF-β2 in the cell culture media was found to stimulate the production of ECM components such as collagen, fibronectin, and laminin, as well as alpha smooth muscle actin (α-SMA), and the migration marker Rho by HLE-B3 and FHL124 cells. In all cases, these effects were decreased but not completely eradicated by the presence of sulfadiazine on the PDMS surfaces. While the level of inhibition necessary for inhibition of PCO in vivo is unknown, these results suggest that IOL surface modification with sulfadiazine has the potential to reduce cellular changes associated with PCO. Furthermore, the results demonstrate for the first time that changes consistent with inhibition of fibrosis may be elicited by surfaces modified with sulfadiazine.

  6. Effect of partial and complete posterior cruciate ligament transection on medial meniscus: A biomechanical evaluation in a cadaveric model

    Directory of Open Access Journals (Sweden)

    Shu-guang Gao

    2013-01-01

    Full Text Available Background: The relationship between medial meniscus tear and posterior cruciate ligament (PCL) injury has not been exactly explained. We investigated the biomechanical effect of partial and complete PCL transection on different parts of the medial meniscus at different flexion angles under static loading conditions. Materials and Methods: Twelve fresh human cadaveric knee specimens were divided into four groups: PCL intact (PCL-I), anterolateral bundle transection (ALB-T), posteromedial bundle transection (PMB-T), and PCL complete transection (PCL-T). Strain on the anterior horn, body part, and posterior horn of the medial meniscus was measured under different axial compressive tibial loads (200-800 N) at 0°, 30°, 60° and 90° knee flexion in each group, respectively. Results: Compared with the PCL-I group, the PCL-T group had higher strain on the whole medial meniscus at 30°, 60° and 90° flexion under all loading conditions, and at 0° flexion with 400, 600 and 800 N loads. In the ALB-T group, strain on the whole meniscus increased at 30°, 60° and 90° flexion under all loading conditions, and at 0° flexion with 800 N only. PMB-T exhibited higher strain at 0° flexion with 400 N, 600 N and 800 N, at 30° and 60° flexion with 800 N, and at 90° flexion under all loading conditions. Conclusions: Partial PCL transection triggers strain concentration on the medial meniscus, and the effect is more pronounced with higher loads at higher flexion angles.

  7. Modeling soil detachment capacity by rill flow using hydraulic parameters

    Science.gov (United States)

    Wang, Dongdong; Wang, Zhanli; Shen, Nan; Chen, Hao

    2016-04-01

    The relationship between soil detachment capacity (Dc) by rill flow and hydraulic parameters (e.g., flow velocity, shear stress, unit stream power, stream power, and unit energy) at low flow rates is investigated to establish an accurate experimental model. Experiments are conducted using a 4 × 0.1 m rill hydraulic flume with a constant artificial roughness on the flume bed. The flow rates range from 0.22 × 10-3 m2 s-1 to 0.67 × 10-3 m2 s-1, and the slope gradients vary from 15.8% to 38.4%. Regression analysis indicates that the Dc by rill flow can be predicted using the linear equations of flow velocity, stream power, unit stream power, and unit energy. Dc by rill flow that is fitted to shear stress can be predicted with a power function equation. Predictions based on flow velocity, unit energy, and stream power are powerful, but those based on shear stress, especially on unit stream power, are relatively poor. The prediction based on flow velocity provides the best estimates of Dc by rill flow because of the simplicity and availability of its measurements. Owing to error in measuring flow velocity at low flow rates, the predictive abilities of Dc by rill flow using all hydraulic parameters are relatively lower in this study compared with the results of previous research. The measuring accuracy of experiments for flow velocity should be improved in future research.

  8. Parameter estimation and hypothesis testing in linear models

    CERN Document Server

    Koch, Karl-Rudolf

    1999-01-01

    The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...

  9. Parameters-related uncertainty in modeling sugar cane yield with an agro-Land Surface Model

    Science.gov (United States)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Ruget, F.; Gabrielle, B.

    2012-12-01

    Agro-Land Surface Models (agro-LSM) have been developed from the coupling of specific crop models and large-scale generic vegetation models. They aim at accounting for the spatial distribution and variability of energy, water and carbon fluxes within soil-vegetation-atmosphere continuum with a particular emphasis on how crop phenology and agricultural management practice influence the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty in these models is related to the many parameters included in the models' equations. In this study, we quantify the parameter-based uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS on a multi-regional approach with data from sites in Australia, La Reunion and Brazil. First, the main source of uncertainty for the output variables NPP, GPP, and sensible heat flux (SH) is determined through a screening of the main parameters of the model on a multi-site basis leading to the selection of a subset of most sensitive parameters causing most of the uncertainty. In a second step, a sensitivity analysis is carried out on the parameters selected from the screening analysis at a regional scale. For this, a Monte-Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used. First, we quantify the sensitivity of the output variables to individual input parameters on a regional scale for two regions of intensive sugar cane cultivation in Australia and Brazil. Then, we quantify the overall uncertainty in the simulation's outputs propagated from the uncertainty in the input parameters. Seven parameters are identified by the screening procedure as driving most of the uncertainty in the agro-LSM ORCHIDEE-STICS model output at all sites. These parameters control photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), root

  10. Adaptive Unified Biased Estimators of Parameters in Linear Model

    Institute of Scientific and Technical Information of China (English)

    Hu Yang; Li-xing Zhu

    2004-01-01

    To tackle multicollinearity or ill-conditioned design matrices in linear models, adaptive biased estimators such as the time-honored Stein estimator, the ridge and the principal component estimators have been studied intensively. To study when a biased estimator uniformly outperforms the least squares estimator, some sufficient conditions are proposed in the literature. In this paper, we propose a unified framework to formulate a class of adaptive biased estimators. This class includes all existing biased estimators and some new ones. A sufficient condition for outperforming the least squares estimator is proposed. In terms of selecting parameters in the condition, we can obtain all double-type conditions in the literature.
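
    A small sketch contrasting ordinary least squares with a ridge-type biased estimator on a nearly collinear design matrix; the design, true coefficients, and shrinkage parameter are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 50, 3
    X = rng.normal(size=(n, p))
    X[:, 2] = X[:, 0] + 1e-3 * rng.normal(size=n)   # near-collinear columns
    beta_true = np.array([1.0, -2.0, 0.5])
    y = X @ beta_true + rng.normal(0.0, 0.5, n)

    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)                     # least squares
    k = 0.1                                                          # ridge parameter (assumed)
    beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)   # ridge (biased) estimator
    print("OLS:  ", beta_ols)
    print("ridge:", beta_ridge)
    ```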

  11. Evaluation of the perceptual grouping parameter in the CTVA model

    Directory of Open Access Journals (Sweden)

    Manuel Cortijo

    2005-01-01

    Full Text Available The CODE Theory of Visual Attention (CTVA) is a mathematical model explaining the effects of grouping by proximity and distance upon reaction times and accuracy of response with regard to elements in the visual display. The predictions of the theory in one and two dimensions (CTVA-2D) agree quite acceptably with the experimental results (reaction times and accuracy of response). The difference between reaction times for compatible and incompatible responses, known as the response-compatibility effect, is also acceptably predicted, except at small distances and high numbers of distractors. Further results using the same paradigm at even smaller distances have now been obtained, showing greater discrepancies. We have therefore introduced a method to evaluate the strength of sensory evidence (the eta parameter), which takes grouping by similarity into account and minimizes these discrepancies.

  12. Fundamental M-dwarf parameters from high-resolution spectra using PHOENIX ACES models: I. Parameter accuracy and benchmark stars

    CERN Document Server

    Passegger, Vera Maria; Reiners, Ansgar

    2016-01-01

    M-dwarf stars are the most numerous stars in the Universe; they span a wide range in mass and are in the focus of ongoing and planned exoplanet surveys. To investigate and understand their physical nature, detailed spectral information and accurate stellar models are needed. We use a new synthetic atmosphere model generation and compare model spectra to observations. To test the model accuracy, we compared the models to four benchmark stars with atmospheric parameters for which independent information from interferometric radius measurements is available. We used $\\chi^2$ -based methods to determine parameters from high-resolution spectroscopic observations. Our synthetic spectra are based on the new PHOENIX grid that uses the ACES description for the equation of state. This is a model generation expected to be especially suitable for the low-temperature atmospheres. We identified suitable spectral tracers of atmospheric parameters and determined the uncertainties in $T_{\\rm eff}$, $\\log{g}$, and [Fe/H] resul...

  13. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification

  14. Posterior Fossa Syndrome

    Directory of Open Access Journals (Sweden)

    Serhan Kupeli

    2014-08-01

    Full Text Available Posterior fossa syndrome is defined as the temporary and complete loss of speech after posterior fossa surgery that is not related to cerebellar hemorrhage, infection of the cerebellum, or degenerative or neoplastic diseases of the cerebellum. In this review, we aimed to outline the incidence of posterior fossa syndrome, to define its risk factors, to describe the accompanying neurobehavioural and psychologic problems, and to speculate about the etiologic mechanisms. The diagnosis of medulloblastoma and midline location of the tumor are important risk factors for the development of posterior fossa syndrome. These findings support the hypothesis that temporary ischemia and edema due to retracted and extensively manipulated dentate nuclei and superior cerebellar peduncles may be the cause of mutism. Informing the family and the patient about posterior fossa syndrome must be a component of the preoperative interview, and patients who develop posterior fossa syndrome should be followed for accompanying neurobehavioural and psychologic problems even after the mutism has improved. [Archives Medical Review Journal 2014; 23(4): 636-657]

  15. Modeling the outflow of liquid with initial supercritical parameters using the relaxation model for condensation

    Directory of Open Access Journals (Sweden)

    Lezhnin Sergey

    2017-01-01

    Full Text Available A two-temperature model of the outflow from a vessel with initially supercritical parameters of the medium has been implemented. The model uses a thermodynamically non-equilibrium relaxation approach to describe phase transitions. Based on a new asymptotic model for computing the relaxation time, the outflow of water with supercritical initial pressure and super- and subcritical temperatures has been calculated.

  16. Coupled 1D-2D hydrodynamic inundation model for sewer overflow: Influence of modeling parameters

    Directory of Open Access Journals (Sweden)

    Adeniyi Ganiyu Adeogun

    2015-10-01

    Full Text Available This paper presents the outcome of our investigation of the influence of modeling parameters on a 1D-2D hydrodynamic inundation model for sewer overflow, developed by coupling an existing 1D sewer network model (SWMM) with a 2D inundation model (BREZO). The 1D-2D hydrodynamic model was developed for the purpose of examining flood incidence due to surcharged water on the overland surface. The investigation was carried out by performing a sensitivity analysis on the developed model. For the sensitivity analysis, modeling parameters such as mesh resolution, Digital Elevation Model (DEM) resolution, and roughness were considered. The outcome of the study shows the model is sensitive to changes in these parameters. The performance of the model is significantly influenced by the Manning friction value, the DEM resolution, and the area of the triangular mesh. Changes in the aforementioned modeling parameters also influence the flood characteristics, such as the inundation extent, the flow depth and the velocity across the model domain.

  17. Application of ensemble kalman filter to geophysical parameters retrieval in remote sensing: A case study of kernel-driven BRDF model inversion

    Institute of Scientific and Technical Information of China (English)

    QIN Jun; YAN Guangjian; LIU Shaomin; LIANG Shunlin; ZHANG Hao; WANG Jindi; LI Xiaowen

    2006-01-01

    The use of a priori knowledge in remote sensing inversion has great implications for ensuring the stability of the inversion process and reducing uncertainties in the retrieved results, especially under the condition of insufficient observations. Common optimization algorithms have difficulty providing the posterior distribution and thus cannot directly acquire uncertainties in inversion results, which is of no benefit to remote sensing applications. In this article, the ensemble Kalman filter (EnKF) is introduced to retrieve surface geophysical parameters from remote sensing observations; it has the capability of not merely obtaining inversion results but also giving their posterior distribution. To show the advantage of EnKF, it is compared to the standard MODIS AMBRALS algorithm and the highly efficient global optimization method SCE-UA. The inversion abilities of kernel-driven BRDF models with different kernel combinations for several main cover types are discussed with emphasis on the case where observations are deficient and a priori knowledge is introduced into the inversion.
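
    A minimal sketch of a single ensemble Kalman filter analysis step for parameter retrieval, assuming a linear toy observation operator in place of the kernel-driven BRDF model; the ensemble size, prior, and noise levels are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_ens, n_par, n_obs = 100, 3, 5

    H = rng.normal(size=(n_obs, n_par))      # toy observation operator (assumed linear)
    x_true = np.array([0.5, 0.2, 0.1])
    obs_err = 0.05
    y_obs = H @ x_true + rng.normal(0.0, obs_err, n_obs)

    # prior ensemble drawn from assumed a priori knowledge of the parameters
    ens = rng.normal(loc=[0.4, 0.3, 0.2], scale=0.2, size=(n_ens, n_par))

    y_ens = ens @ H.T                                        # predicted observations
    X = ens - ens.mean(axis=0)                               # parameter anomalies
    Y = y_ens - y_ens.mean(axis=0)                           # observation anomalies
    Pyy = Y.T @ Y / (n_ens - 1) + obs_err**2 * np.eye(n_obs)
    Pxy = X.T @ Y / (n_ens - 1)
    K = Pxy @ np.linalg.inv(Pyy)                             # Kalman gain

    # perturbed-observation update: each member assimilates a noisy copy of y_obs
    perturbed = y_obs + rng.normal(0.0, obs_err, size=(n_ens, n_obs))
    analysis = ens + (perturbed - y_ens) @ K.T
    print("posterior mean:", analysis.mean(axis=0))
    print("posterior std: ", analysis.std(axis=0))
    ```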

  18. Uncertainty Quantification of GEOS-5 L-band Radiative Transfer Model Parameters Using Bayesian Inference and SMOS Observations

    Science.gov (United States)

    DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.

    2013-01-01

    Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov Chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25% of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square differences in long-term Tb averages and standard deviations are found to be consistent with the respective estimated total simulation and observation error standard deviations of 3.1 K and 2.4 K. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).
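
    A minimal random-walk Metropolis sketch of the kind of MCMC parameter estimation described above, applied to a synthetic one-parameter misfit rather than the GEOS-5/SMOS Tb system; the prior bounds, proposal width, and data are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    data = rng.normal(2.0, 0.3, size=50)          # stand-in for observed Tb statistics

    def log_posterior(theta):
        if not (0.0 < theta < 10.0):              # flat prior on a bounded interval
            return -np.inf
        return -0.5 * np.sum((data - theta) ** 2) / 0.3**2

    theta, logp = 1.0, log_posterior(1.0)
    samples = []
    for step in range(20000):
        proposal = theta + rng.normal(0.0, 0.1)   # random-walk proposal
        logp_new = log_posterior(proposal)
        if np.log(rng.random()) < logp_new - logp:
            theta, logp = proposal, logp_new      # accept
        samples.append(theta)

    samples = np.array(samples[5000:])            # discard burn-in
    print("posterior mean:", samples.mean(), "+/-", samples.std())
    ```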

  19. FEM numerical model study of electrosurgical dispersive electrode design parameters.

    Science.gov (United States)

    Pearce, John A

    2015-01-01

    Electrosurgical dispersive electrodes must safely carry the surgical current in monopolar procedures, such as those used in cutting, coagulation and radio frequency ablation (RFA). Of these, RFA represents the most stringent design constraint since ablation currents are often more than 1 to 2 Arms (continuous) for several minutes depending on the size of the lesion desired and local heat transfer conditions at the applicator electrode. This stands in contrast to standard surgical activations, which are intermittent, and usually less than 1 Arms, but for several seconds at a time. Dispersive electrode temperature rise is also critically determined by the sub-surface skin anatomy, thicknesses of the subcutaneous and supra-muscular fat, etc. Currently, we lack fundamental engineering design criteria that provide an estimating framework for preliminary designs of these electrodes. The lack of a fundamental design framework means that a large number of experiments must be conducted in order to establish a reasonable design. Previously, an attempt to correlate maximum temperatures in experimental work with the average current density-time product failed to yield a good match. This paper develops and applies a new measure of an electrode stress parameter that correlates well with both the previous experimental data and with numerical models of other electrode shapes. The finite element method (FEM) model work was calibrated against experimental RF lesions in porcine skin to establish the fundamental principle underlying dispersive electrode performance. The results can be used in preliminary electrode design calculations, experiment series design and performance evaluation.

  20. Modeling and parameter estimation for hydraulic system of excavator's arm

    Institute of Scientific and Technical Information of China (English)

    HE Qing-hua; HAO Peng; ZHANG Da-qing

    2008-01-01

    A retrofitted electro-hydraulic proportional system for a hydraulic excavator is first introduced. According to the principle and characteristics of the load-independent flow distribution (LUDV) system, and taking the boom hydraulic system as an example while ignoring the leakage of the hydraulic cylinder and the mass of oil in it, a force equilibrium equation and a continuity equation for the hydraulic cylinder were set up. Based on the flow equation of the electro-hydraulic proportional valve, the pressure across the valve and the pressure difference were tested and analyzed. The results show that the pressure difference does not change with load and approximates 2.0 MPa. Then, assuming the flow across the valve is directly proportional to spool displacement and is not influenced by load, a simplified model of the electro-hydraulic system was put forward. At the same time, by analyzing the structure and load-bearing of the boom instrument, and combining the equivalent moment equation of the manipulator with the law of rotation, estimation methods and equations for parameters such as the equivalent mass and bearing force of the hydraulic cylinder were set up. Finally, the step response of the boom cylinder flow was tested when the electro-hydraulic proportional valve was controlled by a step current. Based on the experimental curve, the flow gain coefficient of the valve is identified as 2.825×10-4 m3/(s·A) and the model is verified.

  1. Influences of parameter uncertainties within the ICRP 66 respiratory tract model: particle deposition.

    Science.gov (United States)

    Bolch, W E; Farfán, E B; Huh, C; Huston, T E; Bolch, W E

    2001-10-01

    Risk assessment associated with the inhalation of radioactive aerosols requires as an initial step the determination of particle deposition within the various anatomic regions of the respiratory tract. The model outlined in ICRP Publication 66 represents to date one of the most complete overall descriptions not only of particle deposition, but of particle clearance and local radiation dosimetry of lung tissues. In this study, a systematic review of the deposition component within the ICRP 66 respiratory tract model was conducted in which probability density functions were assigned to all input parameters. These distributions were subsequently incorporated within a computer code LUDUC (LUng Dose Uncertainty Code) in which Latin hypercube sampling techniques are used to generate multiple (e.g., 1,000) sets of input vectors (i.e., trials) for all of the model parameters needed to assess particle deposition within the extrathoracic (anterior and posterior), bronchial, bronchiolar, and alveolar-interstitial regions of the ICRP 66 respiratory tract model. Particle deposition values for the various trial simulations were shown to be well described by lognormal probability distributions. Geometric mean deposition fractions from LUDUC were found to be within approximately +/- 10% of the single-value estimates from the LUDEP computer code for each anatomic region and for particle diameters ranging from 0.001 to 50 microm. In all regions of the respiratory tract, LUDUC simulations for an adult male at light exertion show that uncertainties in particle deposition fractions span only a factor of approximately 2-4 for particle sizes between 0.005 and 0.2 microm. Below 0.005 microm, uncertainties increase only for deposition within the alveolar region. At particle sizes exceeding 1 microm, uncertainties in the deposition fraction within the extrathoracic regions are relatively small, but approach a factor of 20 for deposition in the bronchial
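
    A small sketch of Latin hypercube sampling used to propagate input-parameter uncertainty through a model, in the spirit of the LUDUC trials; the two lognormal input distributions and the toy deposition formula are illustrative assumptions, not the ICRP 66 model.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    n_trials, n_params = 1000, 2

    # one stratified uniform draw per interval, independently permuted per column
    u = (rng.permuted(np.tile(np.arange(n_trials), (n_params, 1)), axis=1).T
         + rng.random((n_trials, n_params))) / n_trials

    # map the stratified uniforms through assumed lognormal input distributions
    diameter = stats.lognorm.ppf(u[:, 0], s=0.5, scale=1.0)   # particle diameter (illustrative)
    activity = stats.lognorm.ppf(u[:, 1], s=0.3, scale=1.5)   # breathing parameter (illustrative)

    deposition = 1.0 - np.exp(-0.1 * diameter * activity)     # toy deposition model
    print("geometric mean deposition:", np.exp(np.mean(np.log(deposition))))
    ```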

  2. An algorithm for the computation of posterior moments and densities using simple importance sampling

    NARCIS (Netherlands)

    H.K. van Dijk (Herman); J.P. Hop; A.S. Louter (Adri)

    1987-01-01

    textabstractIn earlier work (van Dijk, 1984, Chapter 3) one of the authors discussed the use of Monte Carlo integration methods for the computation of the multivariate integrals that are defined in the posterior moments and densities of the parameters of interest of econometric models. In the presen

  3. Cost-effectiveness of endoscopic sphenopalatine artery ligation versus nasal packing as first-line treatment for posterior epistaxis.

    Science.gov (United States)

    Dedhia, Raj C; Desai, Shamit S; Smith, Kenneth J; Lee, Stella; Schaitkin, Barry M; Snyderman, Carl H; Wang, Eric W

    2013-07-01

    The advent of endoscopic sphenopalatine artery ligation (ESPAL) for the control of posterior epistaxis provides an effective, low-morbidity treatment option. In the current practice algorithm, ESPAL is pursued after failure of posterior packing. Given the morbidity and limited effectiveness of posterior packing, we sought to determine the cost-effectiveness of first-line ESPAL compared to the current practice model. A standard decision analysis model was constructed comparing first-line ESPAL and current practice algorithms. A literature search was performed to determine event probabilities and published Medicare data largely provided cost parameters. The primary outcomes were cost of treatment and resolution of epistaxis. One-way sensitivity analysis was performed for key parameters. Costs for the first-line ESPAL arm and the current practice arm were $6450 and $8246, respectively. One-way sensitivity analyses were performed for key variables including duration of packing. The baseline difference of $1796 in favor of the first-line ESPAL arm was increased to $6263 when the duration of nasal packing was increased from 3 to 5 days. Current practice was favored (cost savings of $437 per patient) if posterior packing duration was decreased from 3 to 2 days. This study demonstrates that ESPAL is cost-saving as first-line therapy for posterior epistaxis. Given the improved effectiveness and patient comfort of ESPAL compared to posterior packing, ESPAL should be offered as an initial treatment option for medically stable patients with posterior epistaxis. © 2013 ARS-AAOA, LLC.
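
    A back-of-the-envelope sketch of the decision-tree cost comparison described above; the failure probabilities and costs here are hypothetical placeholders, not the published event probabilities or Medicare figures.

    ```python
    # Hypothetical inputs: probability that first-line treatment fails and rescue is needed
    p_fail_espal, p_fail_packing = 0.10, 0.30
    cost_espal, cost_packing, cost_rescue = 5000.0, 3000.0, 7000.0   # assumed costs, USD

    # expected cost when ESPAL is offered first, with a rescue procedure on failure
    expected_first_line_espal = cost_espal + p_fail_espal * cost_rescue

    # expected cost of the current practice: pack first, escalate to ESPAL on failure
    expected_current_practice = cost_packing + p_fail_packing * (
        cost_espal + p_fail_espal * cost_rescue)

    print("first-line ESPAL expected cost:", expected_first_line_espal)
    print("packing-first expected cost:   ", expected_current_practice)
    ```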

  4. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
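
    A short sketch of one of the approaches surveyed above, fitting a continuous distribution to data by maximum likelihood and checking goodness of fit; the data are synthetic and the lognormal choice is an illustrative assumption.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    data = rng.lognormal(mean=1.0, sigma=0.4, size=200)    # synthetic "measurements"

    # maximum likelihood fit of a lognormal, keeping the location fixed at zero
    shape, loc, scale = stats.lognorm.fit(data, floc=0.0)
    print("fitted sigma:", shape, "fitted median:", scale)

    # goodness of fit: Kolmogorov-Smirnov test against the fitted distribution
    print(stats.kstest(data, "lognorm", args=(shape, loc, scale)))
    ```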

  5. Relationship between the CMB, SZ Cluster Counts, and Local Hubble Parameter Measurements in a Simple Void Model

    CERN Document Server

    Ichiki, Kiyotomo; Oguri, Masamune

    2015-01-01

    The discrepancy between the amplitudes of matter fluctuations inferred from Sunyaev-Zel'dovich (SZ) cluster number counts, the primary temperature, and the polarization anisotropies of the cosmic microwave background (CMB) measured by the Planck satellite can be reconciled if the local universe is embedded in an under-dense region, as shown by Lee (2014). Here, using a simple void model assuming an open Friedmann-Robertson-Walker geometry and a Markov Chain Monte Carlo technique, we investigate how deep the local under-dense region needs to be to resolve this discrepancy. Such a local void, if it exists, predicts a local Hubble parameter value that is different from the global Hubble constant. We derive the posterior distribution of the local Hubble parameter from a joint fit of the Planck CMB data and SZ cluster number counts assuming the simple void model. We show that the predicted local Hubble parameter value of $H_{\\rm loc}=70.1\\pm0.34~{\\rm km\\,s^{-1}Mpc^{-1}}$ is in better agreement with direct local Hub...

  6. Visual attention in posterior stroke

    DEFF Research Database (Denmark)

    Fabricius, Charlotte; Petersen, Anders; Iversen, Helle K

    Objective: Impaired visual attention is common following strokes in the territory of the middle cerebral artery, particularly in the right hemisphere. However, attentional effects of more posterior lesions are less clear. The aim of this study was to characterize visual processing speed and apprehension span following posterior cerebral artery (PCA) stroke. We also relate these attentional parameters to visual word recognition, as previous studies have suggested that reduced visual speed and span may explain pure alexia. Methods: Nine patients with MR-verified focal lesions in the PCA territory (four left PCA; four right PCA; one bilateral, all >1 year post stroke) were compared to 25 controls using single case statistics. Visual attention was characterized by a whole report paradigm allowing for hemifield-specific speed and span measurements. We also characterized visual field defects

  7. Parameter selection and stochastic model updating using perturbation methods with parameter weighting matrix assignment

    Science.gov (United States)

    Abu Husain, Nurulakmar; Haddad Khodaparast, Hamed; Ouyang, Huajiang

    2012-10-01

    Parameterisation in stochastic problems is a major issue in real applications. In addition, complexity of test structures (for example, those assembled through laser spot welds) is another challenge. The objective of this paper is two-fold: (1) stochastic uncertainty in two sets of different structures (i.e., simple flat plates, and more complicated formed structures) is investigated to observe how updating can be adequately performed using the perturbation method, and (2) stochastic uncertainty in a set of welded structures is studied by using two parameter weighting matrix approaches. Different combinations of parameters are explored in the first part; it is found that geometrical features alone cannot converge the predicted outputs to the measured counterparts, hence material properties must be included in the updating process. In the second part, statistical properties of experimental data are considered and updating parameters are treated as random variables. Two weighting approaches are compared; results from one of the approaches are in very good agreement with the experimental data and excellent correlation between the predicted and measured covariances of the outputs is achieved. It is concluded that proper selection of parameters in solving stochastic updating problems is crucial. Furthermore, appropriate weighting must be used in order to obtain excellent convergence between the predicted mean natural frequencies and their measured data.

  8. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    Energy Technology Data Exchange (ETDEWEB)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu [Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States)

    2016-02-15

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  9. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    Science.gov (United States)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.

    2016-02-01

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  10. House thermal model parameter estimation method for Model Predictive Control applications

    NARCIS (Netherlands)

    van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria

    2015-01-01

    In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results

  11. House thermal model parameter estimation method for Model Predictive Control applications

    NARCIS (Netherlands)

    van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria

    In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results

  12. Computerized Adaptive Testing: A Comparison of the Nominal Response Model and the Three Parameter Logistic Model.

    Science.gov (United States)

    DeAyala, R. J.; Koch, William R.

    A nominal response model-based computerized adaptive testing procedure (nominal CAT) was implemented using simulated data. Ability estimates from the nominal CAT were compared to those from a CAT based upon the three-parameter logistic model (3PL CAT). Furthermore, estimates from both CAT procedures were compared with the known true abilities used…
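
    A small sketch of the three-parameter logistic item response function underlying the 3PL-based CAT; the item parameter values are illustrative.

    ```python
    import numpy as np

    def p_correct_3pl(theta, a, b, c):
        # probability of a correct response given ability theta,
        # discrimination a, difficulty b, and lower asymptote (guessing) c
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    theta = np.linspace(-3, 3, 7)
    print(p_correct_3pl(theta, a=1.2, b=0.0, c=0.2))
    ```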

  13. Rock thermal conductivity as key parameter for geothermal numerical models

    Science.gov (United States)

    Di Sipio, Eloisa; Chiesa, Sergio; Destro, Elisa; Galgaro, Antonio; Giaretta, Aurelio; Gola, Gianluca; Manzella, Adele

    2013-04-01

    The geothermal energy applications are undergoing rapid development. However, there are still several challenges in the successful exploitation of geothermal energy resources. In particular, a special effort is required to characterize the thermal properties of the ground along with the implementation of efficient thermal energy transfer technologies. This paper focuses on understanding the quantitative contribution that geosciences can receive from the characterization of rock thermal conductivity. The thermal conductivity of materials is one of the main input parameters in geothermal modeling since it directly controls the steady-state temperature field. An evaluation of this thermal property is required in several fields, such as thermo-hydro-mechanical multiphysics analysis of frozen soils, designing ground source heat pump plants, modeling the structure of deep geothermal reservoirs, and assessing the geothermal potential of the subsoil. The aim of this study is to provide original rock thermal conductivity values useful for the evaluation of both low- and high-enthalpy resources at regional or local scale. To overcome the existing lack of thermal conductivity data for sedimentary, igneous and metamorphic rocks, a series of laboratory measurements has been performed on several samples, collected in outcrop, representative of the main lithologies of the regions included in the VIGOR Project (southern Italy). Thermal property tests were carried out in both dry and wet conditions, using a C-Therm TCi device operating according to the Modified Transient Plane Source method. Measurements were made at standard laboratory conditions on samples both water saturated and dehydrated with a fan-forced drying oven at 70 °C for 24 h, to preserve the mineral assemblage and prevent changes in effective porosity. Subsequently, the samples were stored in an air-conditioned room while bulk density, solid volume and porosity were determined. The measured thermal conductivity

  14. Geomagnetically induced currents in Uruguay: Sensitivity to modelling parameters

    Science.gov (United States)

    Caraballo, R.

    2016-11-01

    According to traditional wisdom, geomagnetically induced currents (GIC) should occur rarely at mid-to-low latitudes, but in the last decades a growing number of reports have addressed their effects on high-voltage (HV) power grids at mid-to-low latitudes. The growing trend to interconnect national power grids to meet regional integration objectives may lead to an increase in the size of the present energy transmission networks to form a sort of super-grid at continental scale. Such a broad and heterogeneous super-grid can be exposed to the effects of large GIC if appropriate mitigation actions are not taken. In the present study, we present GIC estimates for the Uruguayan HV power grid during severe magnetic storm conditions. The GIC intensities are strongly dependent on the rate of variation of the geomagnetic field, the conductivity of the ground, and the power grid resistances and configuration. Calculated GIC are analysed as functions of these parameters. The results show a reasonable agreement with measured data in Brazil and Argentina, thus confirming the reliability of the model. The expansion of the grid leads to a strong increase in GIC intensities in almost all substations. The power grid response to changes in ground conductivity and resistances shows similar results to a lesser extent. This leads us to consider GIC as a non-negligible phenomenon in South America. Consequently, GIC must be taken into account in mid-to-low latitude power grids as well.

  15. Inducible mouse models illuminate parameters influencing epigenetic inheritance.

    Science.gov (United States)

    Wan, Mimi; Gu, Honggang; Wang, Jingxue; Huang, Haichang; Zhao, Jiugang; Kaundal, Ravinder K; Yu, Ming; Kushwaha, Ritu; Chaiyachati, Barbara H; Deerhake, Elizabeth; Chi, Tian

    2013-02-01

    Environmental factors can stably perturb the epigenome of exposed individuals and even that of their offspring, but the pleiotropic effects of these factors have posed a challenge for understanding the determinants of mitotic or transgenerational inheritance of the epigenetic perturbation. To tackle this problem, we manipulated the epigenetic states of various target genes using a tetracycline-dependent transcription factor. Remarkably, transient manipulation at appropriate times during embryogenesis led to aberrant epigenetic modifications in the ensuing adults regardless of the modification patterns, target gene sequences or locations, and despite lineage-specific epigenetic programming that could reverse the epigenetic perturbation, thus revealing extraordinary malleability of the fetal epigenome, which has implications for 'metastable epialleles'. However, strong transgenerational inheritance of these perturbations was observed only at transgenes integrated at the Col1a1 locus, where both activating and repressive chromatin modifications were heritable for multiple generations; such a locus is unprecedented. Thus, in our inducible animal models, mitotic inheritance of epigenetic perturbation seems critically dependent on the timing of the perturbation, whereas transgenerational inheritance additionally depends on the location of the perturbation. In contrast, other parameters examined, particularly the chromatin modification pattern and DNA sequence, appear irrelevant.

  16. Sequence-based Parameter Estimation for an Epidemiological Temporal Aftershock Forecasting Model using Markov Chain Monte Carlo Simulation

    Science.gov (United States)

    Jalayer, Fatemeh; Ebrahimian, Hossein

    2014-05-01

    Introduction The first few days elapsed after the occurrence of a strong earthquake and in the presence of an ongoing aftershock sequence are quite critical for emergency decision-making purposes. Epidemic Type Aftershock Sequence (ETAS) models are used frequently for forecasting the spatio-temporal evolution of seismicity in the short-term (Ogata, 1988). The ETAS models are epidemic stochastic point process models in which every earthquake is a potential triggering event for subsequent earthquakes. The ETAS model parameters are usually calibrated a priori and based on a set of events that do not belong to the on-going seismic sequence (Marzocchi and Lombardi 2009). However, adaptive model parameter estimation, based on the events in the on-going sequence, may have several advantages such as, tuning the model to the specific sequence characteristics, and capturing possible variations in time of the model parameters. Simulation-based methods can be employed in order to provide a robust estimate for the spatio-temporal seismicity forecasts in a prescribed forecasting time interval (i.e., a day) within a post-main shock environment. This robust estimate takes into account the uncertainty in the model parameters expressed as the posterior joint probability distribution for the model parameters conditioned on the events that have already occurred (i.e., before the beginning of the forecasting interval) in the on-going seismic sequence. The Markov Chain Monte Carlo simulation scheme is used herein in order to sample directly from the posterior probability distribution for ETAS model parameters. Moreover, the sequence of events that is going to occur during the forecasting interval (and hence affecting the seismicity in an epidemic type model like ETAS) is also generated through a stochastic procedure. The procedure leads to two spatio-temporal outcomes: (1) the probability distribution for the forecasted number of events, and (2) the uncertainty in estimating the
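
    A minimal sketch of the temporal ETAS conditional intensity that such forecasts are built on; the parameter values and the toy event list are illustrative, not calibrated to any sequence.

    ```python
    import numpy as np

    def etas_rate(t, events, mu=0.1, K=0.05, c=0.01, p=1.1, alpha=1.0, m0=3.0):
        # events: array of (time, magnitude) pairs; only events before t contribute
        times, mags = events[:, 0], events[:, 1]
        past = times < t
        triggered = K * np.exp(alpha * (mags[past] - m0)) / (t - times[past] + c) ** p
        return mu + triggered.sum()   # background rate plus triggered contributions

    events = np.array([[0.0, 6.5], [0.5, 4.2], [1.2, 3.8]])   # (day, magnitude)
    print("rate at t = 2 days:", etas_rate(2.0, events))
    ```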

  17. Parameter and State Estimator for State Space Models

    Directory of Open Access Journals (Sweden)

    Ruifeng Ding

    2014-01-01

    Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
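
    A minimal recursive least squares sketch in the spirit of the input-output identification step described above, applied to a toy first-order model; the true parameters, noise level, and recursion settings are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 500
    u = rng.normal(size=n)                   # input sequence
    a_true, b_true = 0.8, 0.5
    y = np.zeros(n)
    for k in range(1, n):
        y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.normal()

    theta = np.zeros(2)                      # parameter estimate [a, b]
    P = 1000.0 * np.eye(2)                   # covariance of the estimate
    for k in range(1, n):
        phi = np.array([y[k - 1], u[k - 1]])             # regressor from past I/O data
        gain = P @ phi / (1.0 + phi @ P @ phi)
        theta = theta + gain * (y[k] - phi @ theta)      # correct with prediction error
        P = P - np.outer(gain, phi @ P)                  # update estimate covariance
    print("estimated [a, b]:", theta)
    ```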

  18. Parameter and state estimator for state space models.

    Science.gov (United States)

    Ding, Ruifeng; Zhuang, Linfan

    2014-01-01

    This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.

  19. Modeling the vertical soil organic matter profile using Bayesian parameter estimation

    Directory of Open Access Journals (Sweden)

    M. C. Braakhekke

    2013-01-01

    Full Text Available The vertical distribution of soil organic matter (SOM) in the profile may constitute an important factor for soil carbon cycling. However, the formation of the SOM profile is currently poorly understood due to equifinality, caused by the entanglement of several processes: input from roots, mixing due to bioturbation, and organic matter leaching. In this study we quantified the contribution of these three processes using Bayesian parameter estimation for the mechanistic SOM profile model SOMPROF. Based on organic carbon measurements, 13 parameters related to decomposition and transport of organic matter were estimated for two temperate forest soils: an Arenosol with a mor humus form (Loobos, the Netherlands), and a Cambisol with mull-type humus (Hainich, Germany). Furthermore, the use of the radioisotope 210Pbex as tracer for vertical SOM transport was studied. For Loobos, the calibration results demonstrate the importance of organic matter transport with the liquid phase for shaping the vertical SOM profile, while the effects of bioturbation are generally negligible. These results are in good agreement with expectations given in situ conditions. For Hainich, the calibration offered three distinct explanations for the observations (three modes in the posterior distribution). With the addition of 210Pbex data and prior knowledge, as well as additional information about in situ conditions, we were able to identify the most likely explanation, which indicated that root litter input is a dominant process for the SOM profile. For both sites the organic matter appears to comprise mainly adsorbed but potentially leachable material, pointing to the importance of organo-mineral interactions. Furthermore, organic matter in the mineral soil appears to be mainly derived from root litter, supporting previous studies that highlighted the importance of root input for soil carbon sequestration. The 210

  20. Modeling the vertical soil organic matter profile using Bayesian parameter estimation

    Directory of Open Access Journals (Sweden)

    M. C. Braakhekke

    2012-08-01

    Full Text Available The vertical distribution of soil organic matter (SOM) in the profile may constitute a significant factor for soil carbon cycling. However, the formation of the SOM profile is currently poorly understood due to equifinality, caused by the entanglement of several processes: input from roots, mixing due to bioturbation, and organic matter leaching. In this study we quantified the contribution of these three processes using Bayesian parameter estimation for the mechanistic SOM profile model SOMPROF. Based on organic carbon measurements, 13 parameters related to decomposition and transport of organic matter were estimated for two temperate forest soils: an Arenosol with a mor humus form (Loobos, The Netherlands) and a Cambisol with mull-type humus (Hainich, Germany). Furthermore, the use of the radioisotope 210Pbex as tracer for vertical SOM transport was studied.

    For Loobos the calibration results demonstrate the importance of liquid phase transport for shaping the vertical SOM profile, while the effects of bioturbation are generally negligible. These results are in good agreement with expectations given in situ conditions. For Hainich the calibration offered three distinct explanations for the observations (three modes in the posterior distribution). With the addition of 210Pbex data and prior knowledge, as well as additional information about in situ conditions, we were able to identify the most likely explanation, which identified root litter input as the dominant process for the SOM profile. For both sites the organic matter appears to comprise mainly adsorbed but potentially leachable material, pointing to the importance of organo-mineral interactions. Furthermore, organic matter in the mineral soil appears to be mainly derived from root litter, supporting previous studies that highlighted the importance of root input for soil carbon sequestration. The 210

  1. Parameter Identification on Lumped Parameters of the Hydraulic Engine Mount Model

    Directory of Open Access Journals (Sweden)

    Li Qian

    2016-01-01

    Full Text Available Hydraulic Engine Mounts (HEM) are important vibration isolation components with a compound structure in the vehicle powertrain mounting system. They provide large damping and high dynamic stiffness in the high frequency region, and small damping and low dynamic stiffness in the low frequency region, which better meets the requirements of the vehicle powertrain mounting system. Identifying the lumped parameters of the HEM is not only necessary for analyzing and calculating its dynamic performance, but also provides the theoretical basis for future performance and structure optimization of the product. A parameter identification method based on coupled fluid-structure interaction (FSI) and finite element analysis (FEA) was established in this study to identify the equivalent piston area of the rubber spring, the volume stiffness of the upper chamber, and the inertia and damping coefficients of the liquid in the inertia track. The simulated dynamic characteristic curves of the HEM with the identified parameters agree well with the measured dynamic characteristic curves.
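
    A sketch of how identified lumped parameters of this kind map to a complex dynamic stiffness. The single-inertia-track formulation below is a common simplified HEM model (not necessarily the one used in the paper), and all numerical values are assumed for illustration only.

        import numpy as np

        # Illustrative lumped parameters (assumed values, not identified ones)
        k_r, b_r = 2.0e5, 50.0      # rubber spring stiffness [N/m] and damping [N s/m]
        A_p = 3.0e-3                # equivalent piston area [m^2]
        C_1 = 2.0e-11               # upper-chamber compliance [m^5/N] (inverse of volume stiffness)
        I_t, R_t = 1.0e6, 5.0e7     # inertia track inertia and damping coefficients

        f = np.linspace(1, 50, 200)              # excitation frequency [Hz]
        w = 2*np.pi*f
        Z_t = 1j*w*I_t + R_t                     # inertia track impedance (pressure/flow)
        K_dyn = k_r + 1j*w*b_r + (A_p**2 * 1j*w*Z_t) / (1 + C_1*1j*w*Z_t)
        print("complex dynamic stiffness at 10 Hz:", K_dyn[np.argmin(abs(f - 10))])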

  2. Effective Parameter Dimension via Bayesian Model Selection in the Inverse Acoustic Scattering Problem

    Directory of Open Access Journals (Sweden)

    Abel Palafox

    2014-01-01

    Full Text Available We address a prototype inverse scattering problem at the interface of applied mathematics, statistics, and scientific computing. We pose the acoustic inverse scattering problem from a Bayesian inference perspective and simulate from the posterior distribution using MCMC. The PDE forward map is implemented using high performance computing methods. We implement a standard Bayesian model selection method to estimate an effective number of Fourier coefficients that may be retrieved from noisy data within a standard formulation.
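
    A hedged sketch of the general idea of choosing an effective number of Fourier coefficients from noisy data. A simple BIC criterion stands in for the full Bayesian model selection machinery of the paper, and the signal, noise level, and candidate model orders are made-up examples.

        import numpy as np

        rng = np.random.default_rng(2)
        x = np.linspace(0, 2*np.pi, 200)
        y = np.sin(x) + 0.4*np.sin(3*x) + 0.3*rng.normal(size=x.size)  # noisy signal

        def fit_fourier(n_modes):
            # Design matrix: constant term plus n_modes sine/cosine harmonics
            cols = [np.ones_like(x)]
            for k in range(1, n_modes + 1):
                cols += [np.sin(k*x), np.cos(k*x)]
            A = np.column_stack(cols)
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ coef)**2)
            return rss, A.shape[1]

        n = x.size
        for n_modes in range(1, 9):
            rss, k = fit_fourier(n_modes)
            bic = n*np.log(rss/n) + k*np.log(n)   # lower BIC = better fit/complexity trade-off
            print(n_modes, round(bic, 1))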

  3. A Note on the Item Information Function of the Four-Parameter Logistic Model

    Science.gov (United States)

    Magis, David

    2013-01-01

    This article focuses on the four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…
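
    For reference, a small sketch of the 4PL response function and the item information it implies, using the general relation I(θ) = P'(θ)² / [P(θ)(1 − P(θ))] for a dichotomous item; the item parameter values below are arbitrary.

        import numpy as np

        def p_4pl(theta, a, b, c, d):
            """4PL probability: lower asymptote c, upper asymptote d."""
            return c + (d - c) / (1.0 + np.exp(-a*(theta - b)))

        def info_4pl(theta, a, b, c, d):
            """Item information I = P'^2 / (P(1-P)) for the 4PL model."""
            p = p_4pl(theta, a, b, c, d)
            dp = a*(p - c)*(d - p)/(d - c)       # derivative of the 4PL curve
            return dp**2 / (p*(1.0 - p))

        theta = np.linspace(-4, 4, 161)
        info = info_4pl(theta, a=1.2, b=0.0, c=0.15, d=0.95)
        print("ability level maximizing information:", theta[np.argmax(info)])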

  4. A Note on the Item Information Function of the Four-Parameter Logistic Model

    Science.gov (United States)

    Magis, David

    2013-01-01

    This article focuses on the four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…

  5. Multi-Variable Model-Based Parameter Estimation Model for Antenna Radiation Pattern Prediction

    Science.gov (United States)

    Deshpande, Manohar D.; Cravey, Robin L.

    2002-01-01

    A new procedure is presented to develop a multi-variable model-based parameter estimation (MBPE) model to predict the far field intensity of an antenna. By performing the MBPE model development procedure on a single variable at a time, the present method requires the solution of smaller matrices. The utility of the present method is demonstrated by determining the far field intensity due to a dipole antenna over a frequency range of 100-1000 MHz and an elevation angle range of 0-90 degrees.

  6. Dynamic hydrologic modeling using the zero-parameter Budyko model with instantaneous dryness index

    Science.gov (United States)

    Biswal, Basudev

    2016-09-01

    Long-term partitioning of hydrologic quantities is achieved by using the zero-parameter Budyko model, which defines a dryness index. However, this approach is not suitable for dynamic partitioning, particularly at diminishing timescales, and therefore a universally applicable zero-parameter model remains elusive. Here an instantaneous dryness index is proposed which enables dynamic hydrologic modeling using the Budyko model. By introducing a "decay function" that characterizes the effects of antecedent rainfall and solar energy on the dryness state of a basin at a given time, I propose the concept of an instantaneous dryness index and use the Budyko function to perform continuous hydrologic partitioning. Using the same decay function, I then obtain the discharge time series from the effective rainfall time series. The model is evaluated using data from 63 U.S. Geological Survey basins. Results indicate the possibility of using the proposed framework as an alternative platform for prediction in ungauged basins.
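
    A small sketch of long-term partitioning with a zero-parameter Budyko curve. The closed form below is the textbook Budyko (1974) formulation; whether the paper uses exactly this form is not stated here, and the rainfall/evapotranspiration numbers are illustrative.

        import numpy as np

        def budyko_evaporative_fraction(dryness):
            """Budyko (1974) curve: E/P as a function of the dryness index PET/P."""
            phi = np.asarray(dryness, dtype=float)
            return np.sqrt(phi * np.tanh(1.0/phi) * (1.0 - np.exp(-phi)))

        annual_p, annual_pet = 900.0, 1200.0            # mm/yr, illustrative values
        phi = annual_pet / annual_p
        e_over_p = float(budyko_evaporative_fraction(phi))
        print("dryness index:", round(phi, 2))
        print("E/P:", round(e_over_p, 2), " Q/P:", round(1 - e_over_p, 2))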

  7. Output feedback robust model predictive control with unmeasurable model parameters and bounded disturbance

    Institute of Scientific and Technical Information of China (English)

    Baocang Ding; Hongguang Pan

    2016-01-01

    The output feedback model predictive control (MPC), for a linear parameter varying (LPV) process system including unmeasurable model parameters and disturbance (all lying in known polytopes), is considered. Some previously developed tools, including the norm-bounding technique for relaxing the disturbance-related constraint handling, the dynamic output feedback law, the notion of quadratic boundedness for specifying the closed-loop stability, and the ellipsoidal state estimation error bound for guaranteeing the recursive feasibility, are merged in the control design. Some previous approaches are shown to be special cases. An example of a continuous stirred tank reactor (CSTR) is given to show the effectiveness of the proposed approaches.

  8. Spondylolisthesis and Posterior Instability

    Energy Technology Data Exchange (ETDEWEB)

    Niggemann, P.; Beyer, H.K.; Frey, H.; Grosskurth, D. (Privatpraxis fuer Upright MRT, Koeln (Germany)); Simons, P.; Kuchta, J. (Media Park Klinik, Koeln (Germany))

    2009-04-15

    We present the case of a patient with a spondylolisthesis of L5 on S1 due to spondylolysis at the level L5/S1. The vertebral slip was fixed and no anterior instability was found. Using functional magnetic resonance imaging (MRI) in an upright MRI scanner, posterior instability at the level of the spondylolytic defect of L5 was demonstrated. A structure, probably the hypertrophic ligamenta flava, arising from the spondylolytic defect was displaced toward the L5 nerve root, and a bilateral contact of the displaced structure with the L5 nerve root was shown in extension of the spine. To our knowledge, this is the first case described of posterior instability in patients with spondylolisthesis. The clinical implications of posterior instability are unknown; however, it is thought that this disorder is common and that it can only be diagnosed using upright MRI.

  9. Evidence for extra radiation? Profile likelihood versus Bayesian posterior

    CERN Document Server

    Hamann, Jan

    2011-01-01

    A number of recent analyses of cosmological data have reported hints for the presence of extra radiation beyond the standard model expectation. In order to test the robustness of these claims under different methods of constructing parameter constraints, we perform a Bayesian posterior-based and a likelihood profile-based analysis of current data. We confirm the presence of a slight discrepancy between posterior- and profile-based constraints, with the marginalised posterior preferring higher values of the effective number of neutrino species N_eff. This can be traced back to a volume effect occurring during the marginalisation process, and we demonstrate that the effect is related to the fact that cosmic microwave background (CMB) data constrain N_eff only indirectly via the redshift of matter-radiation equality. Once present CMB data are combined with external information about, e.g., the Hubble parameter, the difference between the methods becomes small compared to the uncertainty of N_eff. We conclude tha...
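
    A toy illustration (not the cosmological analysis itself) of the volume effect discussed above: when the nuisance-parameter volume depends on the parameter of interest, marginalisation can shift the preferred value relative to profiling. The likelihood below is an invented two-parameter example.

        import numpy as np

        # Toy likelihood L(x, nu): Gaussian in nu whose width grows with x,
        # so marginalising over nu adds a volume factor that profiling does not.
        x = np.linspace(0.1, 5.0, 400)
        nu = np.linspace(-10, 10, 801)
        X, NU = np.meshgrid(x, nu, indexing="ij")
        sigma = 0.5 + X                               # nuisance width depends on x
        L = np.exp(-0.5*((X - 2.0)/0.8)**2) * np.exp(-0.5*(NU/sigma)**2)

        marginal = L.sum(axis=1)                      # integrate out nu (flat prior)
        profile = L.max(axis=1)                       # profile over nu
        print("peak of profile likelihood:", x[np.argmax(profile)])
        print("peak of marginal posterior:", x[np.argmax(marginal)])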

  10. An artificial neural network model for predicting tomato quality parameters based on RGB color parameters (MODEL JARINGAN SYARAF TIRUAN UNTUK MEMPREDIKSI PARAMETER KUALITAS TOMAT BERDASARKAN PARAMETER WARNA RGB)

    Directory of Open Access Journals (Sweden)

    Rudiati Evi Masithoh

    2013-03-01

    Full Text Available Artificial neural networks (ANN) were used to predict the quality parameters of tomato, i.e. Brix, citric acid, total carotene, and vitamin C. The ANN was developed from Red Green Blue (RGB) image data of tomatoes measured using a computer vision system (CVS) developed in this work. Tomato composition data were obtained from laboratory analyses. The ANN model was based on a feedforward backpropagation network with different training functions, namely gradient descent (traingd), gradient descent with resilient backpropagation (trainrp), Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton (trainbfg), and Levenberg-Marquardt (trainlm). The network structure using logsig and linear (purelin) activation functions at the hidden and output layers, respectively, together with the trainlm training function, gave the best performance. Correlation coefficients (r) for training and validation were 0.97-0.99 and 0.92-0.99, whereas the MAE values ranged from 0.01 to 0.23 and 0.03 to 0.59, respectively. Keywords: Artificial neural network, trainlm, tomato, RGB. Artificial neural networks (ANN) were used to predict tomato quality parameters, namely Brix, citric acid, total carotene, and vitamin C. The ANN was developed from Red Green Blue (RGB) tomato image data measured using a computer vision system. Tomato quality data were obtained from laboratory analyses. The ANN model structure was based on a feedforward backpropagation network with various training functions, namely gradient descent (traingd), gradient descent with resilient backpropagation (trainrp), Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton (trainbfg), and Levenberg-Marquardt (trainlm). The best training function was trainlm, with a logsig activation function in the hidden layer and a linear (purelin) function in the output layer, trained for 1000 epochs. The correlation coefficient (r) values in the training and validation stages
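
    A hedged sketch of the kind of feedforward network described above, using scikit-learn's MLPRegressor with an L-BFGS solver in place of the MATLAB training functions named in the record; the RGB features and quality targets below are synthetic placeholders, not the study's measurements.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        rgb = rng.uniform(0, 255, size=(200, 3))                 # synthetic RGB features
        # Synthetic targets standing in for Brix, citric acid, carotene, vitamin C
        quality = np.column_stack([
            0.02*rgb[:, 0] - 0.01*rgb[:, 1] + rng.normal(0, 0.2, 200),
            0.005*rgb[:, 1] + rng.normal(0, 0.05, 200),
            0.01*rgb[:, 0] + 0.01*rgb[:, 2] + rng.normal(0, 0.1, 200),
            0.03*rgb[:, 2] + rng.normal(0, 0.3, 200),
        ])

        X_tr, X_te, y_tr, y_te = train_test_split(rgb/255.0, quality, random_state=0)
        net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                           solver="lbfgs", max_iter=2000, random_state=0)
        net.fit(X_tr, y_tr)
        print("validation R^2:", round(net.score(X_te, y_te), 3))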

  11. The ILIUM forward modelling algorithm for multivariate parameter estimation and its application to derive stellar parameters from Gaia spectrophotometry

    CERN Document Server

    Bailer-Jones, C A L

    2009-01-01

    I introduce an algorithm for estimating parameters from multidimensional data based on forward modelling. In contrast to many machine learning approaches it avoids fitting an inverse model and the problems associated with this. The algorithm makes explicit use of the sensitivities of the data to the parameters, with the goal of better treating parameters which only have a weak impact on the data. The forward modelling approach provides uncertainty (full covariance) estimates in the predicted parameters as well as a goodness-of-fit for observations. I demonstrate the algorithm, ILIUM, with the estimation of stellar astrophysical parameters (APs) from simulations of the low resolution spectrophotometry to be obtained by Gaia. The AP accuracy is competitive with that obtained by a support vector machine. For example, for zero extinction stars covering a wide range of metallicity, surface gravity and temperature, ILIUM can estimate Teff to an accuracy of 0.3% at G=15 and to 4% for (lower signal-to-noise ratio) sp...

  12. Checking the new IRI model The bottomside B parameters

    CERN Document Server

    Mosert, M; Ezquer, R; Lazo, B; Miro, G

    2002-01-01

    Electron density profiles obtained at Pruhonice (50.0, 15.0), El Arenosillo (37.1, 353.2) and Havana (23, 278) were used to check the bottomside B parameters B0 (thickness parameter) and B1 (shape parameter) predicted by the new IRI-2000 version. The electron density profiles were derived from ionograms using the ARP technique. The database includes daytime and nighttime ionograms recorded under different seasonal and solar activity conditions. Comparisons with IRI predictions were also made. The analysis shows that: a) the parameter B1 given by IRI-2000 reproduces the observed ARP values better than the IRI-90 version, and b) the observed B0 values are in general well reproduced by both IRI versions, IRI-90 and IRI-2000.

  13. Exploring Factor Model Parameters across Continuous Variables with Local Structural Equation Models.

    Science.gov (United States)

    Hildebrandt, Andrea; Lüdtke, Oliver; Robitzsch, Alexander; Sommer, Christopher; Wilhelm, Oliver

    2016-01-01

    Using an empirical data set, we investigated variation in factor model parameters across a continuous moderator variable and demonstrated three modeling approaches: multiple-group mean and covariance structure (MGMCS) analyses, local structural equation modeling (LSEM), and moderated factor analysis (MFA). We focused on how to study variation in factor model parameters as a function of continuous variables such as age, socioeconomic status, ability levels, acculturation, and so forth. Specifically, we formalized the LSEM approach in detail as compared with previous work and investigated its statistical properties with an analytical derivation and a simulation study. We also provide code for the easy implementation of LSEM. The illustration of methods was based on cross-sectional cognitive ability data from individuals ranging in age from 4 to 23 years. Variations in factor loadings across age were examined with regard to the age differentiation hypothesis. LSEM and MFA converged with respect to the conclusions. When there was a broad age range within groups and varying relations between the indicator variables and the common factor across age, MGMCS produced distorted parameter estimates. We discuss the pros of LSEM compared with MFA and recommend using the two tools as complementary approaches for investigating moderation in factor model parameters.

  14. Posterior tracheal diverticulosis.

    Science.gov (United States)

    Madan, Karan; Das, Chandan J; Guleria, Randeep

    2014-10-01

    Multiple tracheal diverticulosis is a rare clinical entity. Tracheal diverticula are usually recognized radiologically as solitary right paratracheal air collections on thoracic computed tomography examination. They are usually asymptomatic but can occasionally present with persistent symptoms. We herein report the case of a 50-year-old male patient who underwent extensive evaluation for persistent cough. Multiple posterior right paratracheal air collections were recognized on thoracic multidetector computed tomography examination, which was confirmed as multiple-acquired posterior upper tracheal diverticula on flexible bronchoscopy. The patient improved with conservative medical management.

  15. Structural modelling and control design under incomplete parameter information: The maximum-entropy approach

    Science.gov (United States)

    Hyland, D. C.

    1983-01-01

    A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).

  16. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    Science.gov (United States)

    Bastidas, Luis A.; Knighton, James; Kline, Shaun W.

    2016-09-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
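
    A sketch of a variance-based parameter sensitivity screening in the spirit of the study, using SALib's Sobol routines. The surge model itself is replaced by a cheap placeholder function, and the parameter names and bounds are illustrative assumptions rather than the Delft3D settings used in the paper.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["wind_drag", "breaking_gamma", "bottom_roughness"],
            "bounds": [[0.001, 0.004], [0.5, 0.9], [0.01, 0.1]],
        }

        def surrogate_surge(p):
            # Placeholder for an expensive surge/wave model run: peak surge as a toy function
            drag, gamma, rough = p
            return 2.0 + 400*drag + 0.5*gamma - 3.0*rough + 100*drag*gamma

        X = saltelli.sample(problem, 512)              # Saltelli sampling design
        Y = np.array([surrogate_surge(row) for row in X])
        Si = sobol.analyze(problem, Y)
        for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
            print(f"{name}: S1={s1:.2f}  ST={st:.2f}")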

  17. High-dimensional posterior exploration of hydrologic models using multiple-try DREAM(ZS) and high-performance computing

    NARCIS (Netherlands)

    Laloy, E.; Vrugt, J.A.

    2012-01-01

    Spatially distributed hydrologic models are increasingly being used to study and predict soil moisture flow, groundwater recharge, surface runoff, and river discharge. The usefulness and applicability of such complex models is increasingly held back by the potentially many hundreds (thousands) of

  18. Automatic parameter extraction technique for gate leakage current modeling in double gate MOSFET

    Science.gov (United States)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-11-01

    Direct Tunneling (DT) and Trap Assisted Tunneling (TAT) gate leakage current parameters have been extracted and verified using an automatic parameter extraction approach. The industry standard package IC-CAP is used to extract our leakage current model parameters. The model is coded in Verilog-A, and the comparison between the model and measured data allows us to obtain the model parameter values and the correlations/relations between parameters. The model and parameter extraction techniques have been used to study the impact of the parameters on the gate leakage current based on the extracted parameter values. It is shown that the gate leakage current depends more strongly on the interfacial barrier height than on the barrier height of the dielectric layer. Much the same holds for the carrier effective masses in the interfacial layer and the dielectric layer. The comparison between the simulated results and available measured gate leakage current transistor characteristics of Trigate MOSFETs shows good agreement.

  19. Correction of biased climate simulated by biased physics through parameter estimation in an intermediate coupled model

    Science.gov (United States)

    Zhang, Xuefeng; Zhang, Shaoqing; Liu, Zhengyu; Wu, Xinrong; Han, Guijun

    2016-09-01

    Imperfect physical parameterization schemes are an important source of model bias in a coupled model and adversely impact the performance of model simulation. With a coupled ocean-atmosphere-land model of intermediate complexity, the impact of imperfect parameter estimation on model simulation with biased physics has been studied. Here, the biased physics is induced by using different outgoing longwave radiation schemes in the assimilation and "truth" models. To mitigate model bias, the parameters employed in the biased longwave radiation scheme are optimized using three different methods: least-squares parameter fitting (LSPF), single-valued parameter estimation and geography-dependent parameter optimization (GPO), the last two of which belong to the coupled model parameter estimation (CMPE) method. While the traditional LSPF method is able to improve the performance of coupled model simulations, the optimized parameter values from the CMPE, which uses the coupled model dynamics to project observational information onto the parameters, further reduce the bias of the simulated climate arising from biased physics. Further, parameters estimated by the GPO method can properly capture the climate-scale signal to improve the simulation of climate variability. These results suggest that the physical parameter estimation via the CMPE scheme is an effective approach to restrain the model climate drift during decadal climate predictions using coupled general circulation models.

  20. Kinetic modeling of molecular motors: pause model and parameter determination from single-molecule experiments

    Science.gov (United States)

    Morin, José A.; Ibarra, Borja; Cao, Francisco J.

    2016-05-01

    Single-molecule manipulation experiments of molecular motors provide essential information about the rate and conformational changes of the steps of the reaction located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents a complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two independent pause model and the two connected pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two independent pause model all parameters are fixed by the observed data, while for the more general two connected pause model there is a range of values of the parameters compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry and exit to the long-pause state, and also models with cycling in both directions. Additionally, assuming that detailed balance is verified, which forbids cycling, this reduces the ranges of the values of the parameters (which can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent pause model and the indirect entry and exit to the long-pause state model

  1. A finite element modeling of posterior atlantoaxial fixation and biomechanical analysis of C2 intralaminar screw fixation

    Institute of Scientific and Technical Information of China (English)

    Ma Xuexiao; Peng Xianbo; Xiang Hongfei; Zhang Yan; Zhang Guoqing; Chen Bohua

    2014-01-01

    Background The objective of this study was to use three-dimensional finite element (FE) models to analyze the stability and the biomechanics of two upper cervical fixation methods: the C2 intralaminar screw method and the C2 pedicle screw method. Methods From computed tomography images, a nonlinear three-dimensional FE model from C0 (occiput) to C3 was developed with anatomic detail. The C2 intralaminar screw and the C2 pedicle screw systems were added to the model in parallel to establish the intralaminar model and the pedicle model. The two models were operated with all possible states of motion and physiological loads to simulate normal movement. Results Both the C2 intralaminar screw method and the C2 pedicle screw method significantly reduced motion compared with the intact model. There were no statistically significant differences between the two methods. The Von Mises stresses of the internal and external laminar walls were similar between the two methods. Stability was also similar. Conclusions The C2 intralaminar screw method can complement but cannot completely replace the C2 pedicle screw method. Clinicians would need to assess and decide which approach to adopt for the best therapeutic effect.

  2. Ionospheric parameter modelling and anomaly discovery by combining the wavelet transform with autoregressive models

    Directory of Open Access Journals (Sweden)

    Oksana V. Mandrikova

    2015-11-01

    Full Text Available The paper is devoted to new mathematical tools for ionospheric parameter analysis and anomaly discovery during ionospheric perturbations. The complex structure of processes under study, their a-priori uncertainty and therefore the complex structure of registered data require a set of techniques and technologies to perform mathematical modelling, data analysis, and to make final interpretations. We suggest a technique of ionospheric parameter modelling and analysis based on combining the wavelet transform with autoregressive integrated moving average models (ARIMA models. This technique makes it possible to study ionospheric parameter changes in the time domain, make predictions about variations, and discover anomalies caused by high solar activity and lithospheric processes prior to and during strong earthquakes. The technique was tested on critical frequency foF2 and total electron content (TEC datasets from Kamchatka (a region in the Russian Far East and Magadan (a town in the Russian Far East. The mathematical models introduced in the paper facilitated ionospheric dynamic mode analysis and proved to be efficient for making predictions with time advance equal to 5 hours. Ionospheric anomalies were found using model error estimates, those anomalies arising during increased solar activity and strong earthquakes in Kamchatka.
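
    A hedged sketch of the combined wavelet/autoregressive idea: decompose an ionospheric-like series into wavelet components, fit an ARIMA model to each reconstructed component, and flag anomalies where the model error is large. The libraries (PyWavelets, statsmodels), the wavelet choice, and the model orders are illustrative choices, not the paper's.

        import numpy as np
        import pywt
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(4)
        t = np.arange(512)
        series = 10 + 3*np.sin(2*np.pi*t/24) + rng.normal(0, 0.5, t.size)  # foF2-like toy series
        series[400:405] += 4.0                                  # injected anomaly

        # Multiresolution decomposition; rebuild one component per coefficient level
        coeffs = pywt.wavedec(series, "db4", level=3)
        components = []
        for i in range(len(coeffs)):
            keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            components.append(pywt.waverec(keep, "db4")[:series.size])

        # Fit an ARIMA model to each component and sum the fitted values
        fitted = np.zeros_like(series)
        for comp in components:
            res = ARIMA(comp, order=(2, 0, 1)).fit()
            fitted += res.fittedvalues

        residual = series - fitted
        threshold = 3*np.std(residual)
        print("anomalous time steps:", np.where(np.abs(residual) > threshold)[0])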

  3. Posterior Urethral Valves

    Directory of Open Access Journals (Sweden)

    Steve J. Hodges

    2009-01-01

    Full Text Available The most common cause of lower urinary tract obstruction in male infants is posterior urethral valves. Although the incidence has remained stable, the neonatal mortality for this disorder has improved due to early diagnosis and intensive neonatal care, thanks in part to the widespread use of prenatal ultrasound evaluations. In fact, the most common reason for the diagnosis of posterior urethral valves presently is the evaluation of infants for prenatal hydronephrosis. Since these children are often diagnosed early, the urethral obstruction can be alleviated rapidly through catheter insertion and eventual surgery, and their metabolic derangements can be normalized without delay, avoiding preventable infant mortality. Of the children that survive, however, early diagnosis has not had much effect on their long-term prognosis, as 30% still develop renal insufficiency before adolescence. A better understanding of the exact cause of the congenital obstruction of the male posterior urethra, prevention of postnatal bladder and renal injury, and the development of safe methods to treat urethral obstruction prenatally (and thereby avoiding the bladder and renal damage due to obstructive uropathy are the goals for the care of children with posterior urethral valves[1].

  4. Cognitive Models of Risky Choice: Parameter Stability and Predictive Accuracy of Prospect Theory

    Science.gov (United States)

    Glockner, Andreas; Pachur, Thorsten

    2012-01-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are…

  5. Lumped Parameter Modeling for Rapid Vibration Response Prototyping and Test Correlation for Electronic Units

    Science.gov (United States)

    Van Dyke, Michael B.

    2013-01-01

    This presentation covers preliminary work using lumped parameter models to approximate the dynamic response of electronic units to random vibration: deriving a general N-DOF model for application to electronic units; illustrating the parametric influence of model parameters; discussing the implications of coupled dynamics for unit/board design; and demonstrating use of the model to infer printed wiring board (PWB) dynamics from external chassis test measurements.
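
    A generic sketch of how an N-DOF lumped-parameter chain yields natural frequencies for comparison with test data; the chassis/board/component masses and stiffnesses below are placeholder values, not those of any particular electronic unit.

        import numpy as np
        from scipy.linalg import eigh

        # Chassis -> board -> component chain as a 3-DOF lumped model (assumed values)
        m = np.diag([2.0, 0.3, 0.05])                     # kg
        k1, k2, k3 = 5.0e5, 8.0e4, 2.0e4                  # N/m
        K = np.array([[k1 + k2, -k2,      0.0],
                      [-k2,      k2 + k3, -k3],
                      [0.0,     -k3,       k3]])

        w2, modes = eigh(K, m)                            # generalized eigenvalue problem
        freqs_hz = np.sqrt(w2) / (2*np.pi)
        print("natural frequencies [Hz]:", np.round(freqs_hz, 1))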

  6. Parameter Selection and Performance Analysis of Mobile Terminal Models Based on Unity3D

    Institute of Scientific and Technical Information of China (English)

    KONG Li-feng; ZHAO Hai-ying; XU Guang-mei

    2014-01-01

    Mobile platforms are now widely seen as a promising multimedia service with a favorable user base and market prospects. To study the influence of mobile terminal models on the quality of scene roaming, this paper establishes a parameter setting platform for mobile terminal models that selects parameters and performance indices on different mobile platforms. The test platform is built on the model optimality principle: it analyzes the performance curves of mobile terminals in different scene models and then deduces the external parameters for model establishment. Simulation results show that the established test platform is able to analyze the parameter and performance matching list of a mobile terminal model.

  7. Multiobjective adaptive surrogate modeling-based optimization for parameter estimation of large, complex geophysical models

    Science.gov (United States)

    Gong, Wei; Duan, Qingyun; Li, Jianduo; Wang, Chen; Di, Zhenhua; Ye, Aizhong; Miao, Chiyuan; Dai, Yongjiu

    2016-03-01

    Parameter specification is an important source of uncertainty in large, complex geophysical models. These models generally have multiple model outputs that require multiobjective optimization algorithms. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this paper, a multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) algorithm is introduced that aims to reduce computational cost while maintaining optimization effectiveness. Geophysical dynamic models usually have a prior parameterization scheme derived from the physical processes involved, and our goal is to improve all of the objectives by parameter calibration. In this study, we developed a method for directing the search processes toward the region that can improve all of the objectives simultaneously. We tested the MO-ASMO algorithm against NSGA-II and SUMO with 13 test functions and a land surface model - the Common Land Model (CoLM). The results demonstrated the effectiveness and efficiency of MO-ASMO.

  8. On Model Complexity and Parameter Regionalization for Continental Scale Hydrologic Simulations

    Science.gov (United States)

    Rakovec, O.; Mizukami, N.; Newman, A. J.; Thober, S.; Kumar, R.; Wood, A.; Clark, M. P.; Samaniego, L. E.

    2016-12-01

    Assessing hydrologic model complexity and performing continental-domain model simulations has become an important objective in contemporary hydrology. We present a large-sample hydrologic modeling study to better understand (1) the benefits of parameter regionalization schemes, (2) the effects of spatially distributed/lumped model structures, and (3) the importance of selected hydrological processes on model performance. Four hydrological/land surface models (mHM, SAC, VIC, Noah-MP) are set up for 500 small to medium-sized unimpaired basins over the contiguous United States for two spatial scales: lumped and 12km grid. We performed model calibration at individual basins with and without parameter regionalization. For parameter regionalization, we use the well-established Multiscale Parameter Regionalization (MPR) technique, with the specific goal of assessing the transferability of model parameters across different time periods (from calibration to validation period), spatial scales (lumped basin scale to distributed) and locations, for different models. Our results reveal that large inter-model differences are dominated by the choice of model specific hydrological processes (in particular snow and soil moisture) over the choice of spatial discretization and/or parameter regionalization schemes. Nevertheless, parameter regionalization is crucial for parameter transferability across scale and to un-gauged locations. Last but not least, we observe that calibration of model parameters cannot always compensate for the choice of model structure.

  9. Insights on the role of accurate state estimation in coupled model parameter estimation by a conceptual climate model study

    Science.gov (United States)

    Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui

    2017-03-01

    The uncertainties in values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples the stochastic atmosphere and slow-varying ocean, this study examines the sensitivity of state-parameter covariance on the accuracy of estimated model states in different model components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere with a chaotic nature is the major source of the inaccuracy of estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in the air-sea interaction processes. The impact of chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.

  10. Distributed parameter modelling of flexible spacecraft: Where's the beef?

    Science.gov (United States)

    Hyland, D. C.

    1994-01-01

    This presentation discusses various misgivings concerning the directions and productivity of Distributed Parameter System (DPS) theory as applied to spacecraft vibration control. We try to show the need for greater cross-fertilization between DPS theorists and spacecraft control designers. We recommend a shift in research directions toward exploration of asymptotic frequency response characteristics of critical importance to control designers.

  11. Comparative Study of Various SDLC Models on Different Parameters

    Directory of Open Access Journals (Sweden)

    Prateek Sharma

    2015-04-01

    Full Text Available The success of a software development project greatly depends upon which process model is used. This paper emphasizes the need to use an appropriate model for the application to be developed. In this paper we have done a comparative study of the following software models, namely Waterfall, Prototype, RAD (Rapid Application Development), Incremental, Spiral, Build and Fix, and V-shaped. Our aim is to create reliable and cost-effective software, and these models provide us a way to develop it. The main objective of this research is to present different models of software development and make a comparison between them to show the features of each model.

  12. Parameter estimation for the subcritical Heston model based on discrete time observations

    OpenAIRE

    2014-01-01

    We study asymptotic properties of some (essentially conditional least squares) parameter estimators for the subcritical Heston model based on discrete time observations; these estimators are derived from conditional least squares estimators of some modified parameters.

  13. Three-dimensional FEM model of FBGs in PANDA fibers with experimentally determined model parameters

    Science.gov (United States)

    Lindner, Markus; Hopf, Barbara; Koch, Alexander W.; Roths, Johannes

    2017-04-01

    A 3D-FEM model has been developed to improve the understanding of multi-parameter sensing with Bragg gratings in attached or embedded polarization maintaining fibers. The material properties of the fiber, especially Young's modulus and Poisson's ratio of the fiber's stress applying parts, are crucial for accurate simulations, but are usually not provided by the manufacturers. A methodology is presented to determine the unknown parameters by using experimental characterizations of the fiber and iterative FEM simulations. The resulting 3D-Model is capable of describing the change in birefringence of the free fiber when exposed to longitudinal strain. In future studies the 3D-FEM model will be employed to study the interaction of PANDA fibers with the surrounding materials in which they are embedded.

  14. On 4-degree-of-freedom biodynamic models of seated occupants: Lumped-parameter modeling

    Science.gov (United States)

    Bai, Xian-Xu; Xu, Shi-Xu; Cheng, Wei; Qian, Li-Jun

    2017-08-01

    It is useful to develop an effective biodynamic model of seated human occupants to help understand the human vibration exposure to transportation vehicle vibrations and to help design and improve the anti-vibration devices and/or test dummies. This study proposed and demonstrated a methodology for systematically identifying the best configuration or structure of a 4-degree-of-freedom (4DOF) human vibration model and for its parameter identification. First, an equivalent simplification expression for the models was made. Second, all of the possible 23 structural configurations of the models were identified. Third, each of them was calibrated using the frequency response functions recommended in a biodynamic standard. An improved version of non-dominated sorting genetic algorithm (NSGA-II) based on Pareto optimization principle was used to determine the model parameters. Finally, a model evaluation criterion proposed in this study was used to assess the models and to identify the best one, which was based on both the goodness of curve fits and comprehensive goodness of the fits. The identified top configurations were better than those reported in the literature. This methodology may also be extended and used to develop the models with other DOFs.

  15. Numerical Modeling of Piezoelectric Transducers Using Physical Parameters

    NARCIS (Netherlands)

    Cappon, H.; Keesman, K.J.

    2012-01-01

    Design of ultrasonic equipment is frequently facilitated with numerical models. These numerical models, however, need a calibration step, because usually not all characteristics of the materials used are known. Characterization of material properties combined with numerical simulations and experimen

  16. Determination of the Parameter Sets for the Best Performance of IPS-driven ENLIL Model

    Science.gov (United States)

    Yun, Jongyeon; Choi, Kyu-Cheol; Yi, Jonghyuk; Kim, Jaehun; Odstrcil, Dusan

    2016-12-01

    The interplanetary scintillation-driven (IPS-driven) ENLIL model was jointly developed by the University of California, San Diego (UCSD) and the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). The model has been operated by the Korean Space Weather Center (KSWC) since 2014. The IPS-driven ENLIL model has a variety of ambient solar wind parameters, and the results of the model depend on the combination of these parameters. We have conducted research to determine the best combination of parameters to improve the performance of the IPS-driven ENLIL model. The model results for 1,440 combinations of input parameters are compared with Advanced Composition Explorer (ACE) observation data. In this way, the top 10 parameter sets showing the best performance were determined. Finally, the characteristics of these parameter sets were analyzed and the application of the results to the IPS-driven ENLIL model was discussed.

  17. Parameter estimation for LLDPE gas-phase reactor models

    Directory of Open Access Journals (Sweden)

    G. A. Neumann

    2007-06-01

    Full Text Available Product development and advanced control applications require models with good predictive capability. However, in some cases it is not possible to obtain good quality phenomenological models due to the lack of data or the presence of important unmeasured effects. The use of empirical models requires less investment in modeling, but implies the need for larger amounts of experimental data to generate models with good predictive capability. In this work, nonlinear phenomenological and empirical models were compared with respect to their capability to predict the melt index and polymer yield of a low-density polyethylene production process consisting of two fluidized bed reactors connected in series. To adjust the phenomenological model, optimization algorithms based on the flexible polyhedron method of Nelder and Mead showed the best efficiency. Among the empirical models, the PLS model was more appropriate for polymer yield, while the melt index required more nonlinearity, as provided by the QPLS models. In the comparison between these two types of models, better results were obtained for the empirical models.

  18. Updating parameters of the chicken processing line model

    DEFF Research Database (Denmark)

    Kurowicka, Dorota; Nauta, Maarten; Jozwiak, Katarzyna

    2010-01-01

    A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product has been developed in Nauta et al. (2005). This model was quantified with expert judgment. Recent availability of data allows...... of the chicken processing line model....

  19. The roles of prefrontal and posterior parietal cortex in algebra problem solving: a case of using cognitive modeling to inform neuroimaging data.

    Science.gov (United States)

    Danker, Jared F; Anderson, John R

    2007-04-15

    In naturalistic algebra problem solving, the cognitive processes of representation and retrieval are typically confounded, in that transformations of the equations typically require retrieval of mathematical facts. Previous work using cognitive modeling has associated activity in the prefrontal cortex with the retrieval demands of algebra problems and activity in the posterior parietal cortex with the transformational demands of algebra problems, but these regions tend to behave similarly in response to task manipulations (Anderson, J.R., Qin, Y., Sohn, M.-H., Stenger, V.A., Carter, C.S., 2003. An information-processing model of the BOLD response in symbol manipulation tasks. Psychon. Bull. Rev. 10, 241-261; Qin, Y., Carter, C.S., Silk, E.M., Stenger, A., Fissell, K., Goode, A., Anderson, J.R., 2004. The change of brain activation patterns as children learn algebra equation solving. Proc. Natl. Acad. Sci. 101, 5686-5691). With this study we attempt to isolate activity in these two regions by using a multi-step algebra task in which transformation (parietal) is manipulated in the first step and retrieval (prefrontal) is manipulated in the second step. Counter to our initial predictions, both brain regions were differentially active during both steps. We designed two cognitive models, one encompassing our initial assumptions and one in which both processes were engaged during both steps. The first model provided a poor fit to the behavioral and neural data, while the second model fit both well. This simultaneously emphasizes the strong relationship between retrieval and representation in mathematical reasoning and demonstrates that cognitive modeling can serve as a useful tool for understanding task manipulations in neuroimaging experiments.

  20. Predicting nitrate discharge dynamics in mesoscale catchments using the lumped StreamGEM model and Bayesian parameter inference

    Science.gov (United States)

    Woodward, Simon James Roy; Wöhling, Thomas; Rode, Michael; Stenger, Roland

    2017-09-01

    The common practice of infrequent (e.g., monthly) stream water quality sampling for state of the environment monitoring may, when combined with high resolution stream flow data, provide sufficient information to accurately characterise the dominant nutrient transfer pathways and predict annual catchment yields. In the proposed approach, we use the spatially lumped catchment model StreamGEM to predict daily stream flow and nitrate concentration (mg L-1 NO3-N) in four contrasting mesoscale headwater catchments based on four years of daily rainfall, potential evapotranspiration, and stream flow measurements, and monthly or daily nitrate concentrations. Posterior model parameter distributions were estimated using the Markov Chain Monte Carlo sampling code DREAMZS and a log-likelihood function assuming heteroscedastic, t-distributed residuals. Despite high uncertainty in some model parameters, the flow and nitrate calibration data was well reproduced across all catchments (Nash-Sutcliffe efficiency against Log transformed data, NSL, in the range 0.62-0.83 for daily flow and 0.17-0.88 for nitrate concentration). The slight increase in the size of the residuals for a separate validation period was considered acceptable (NSL in the range 0.60-0.89 for daily flow and 0.10-0.74 for nitrate concentration, excluding one data set with limited validation data). Proportions of flow and nitrate discharge attributed to near-surface, fast seasonal groundwater and slow deeper groundwater were consistent with expectations based on catchment geology. The results for the Weida Stream in Thuringia, Germany, using monthly as opposed to daily nitrate data were, for all intents and purposes, identical, suggesting that four years of monthly nitrate sampling provides sufficient information for calibration of the StreamGEM model and prediction of catchment dynamics. This study highlights the remarkable effectiveness of process based, spatially lumped modelling with commonly available monthly
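
    A sketch of the heteroscedastic, t-distributed residual log-likelihood idea used for the Bayesian inference above. The specific error model (scale growing linearly with the simulated value) and the parameter names are assumptions for illustration, not the exact formulation of the paper.

        import numpy as np
        from scipy import stats

        def log_likelihood(obs, sim, sigma0, sigma1, nu):
            """t-distributed residuals with a scale that grows with the simulated value
            (heteroscedastic): resid_t ~ t(nu, loc=0, scale=sigma0 + sigma1*sim_t)."""
            scale = sigma0 + sigma1*np.asarray(sim)
            resid = np.asarray(obs) - np.asarray(sim)
            return np.sum(stats.t.logpdf(resid, df=nu, loc=0.0, scale=scale))

        # Toy usage with made-up nitrate concentrations [mg/L NO3-N]
        obs = np.array([2.1, 2.5, 3.0, 2.8, 3.6])
        sim = np.array([2.0, 2.6, 2.9, 3.0, 3.4])
        print(log_likelihood(obs, sim, sigma0=0.1, sigma1=0.05, nu=4.0))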

  1. Parameter identification and calibration of the Xin'anjiang model using the surrogate modeling approach

    Science.gov (United States)

    Ye, Yan; Song, Xiaomeng; Zhang, Jianyun; Kong, Fanzhe; Ma, Guangwen

    2014-06-01

    Practical experience has demonstrated that single objective functions, no matter how carefully chosen, prove to be inadequate in providing proper measurements for all of the characteristics of the observed data. One strategy to circumvent this problem is to define multiple fitting criteria that measure different aspects of system behavior, and to use multi-criteria optimization to identify non-dominated optimal solutions. Unfortunately, these analyses require running original simulation models thousands of times. As such, they demand prohibitively large computational budgets. As a result, surrogate models have been used in combination with a variety of multi-objective optimization algorithms to approximate the true Pareto-front within limited evaluations of the original model. In this study, multi-objective optimization based on surrogate modeling (multivariate adaptive regression splines, MARS) for a conceptual rainfall-runoff model (Xin'anjiang model, XAJ) was proposed. Taking the Yanduhe basin of Three Gorges in the upper stream of the Yangtze River in China as a case study, three evaluation criteria were selected to quantify the goodness-of-fit of observations against calculated values from the simulation model. The three criteria chosen were the Nash-Sutcliffe efficiency coefficient and the relative errors of peak flow and runoff volume (REPF and RERV). The efficacy of this method is demonstrated on the calibration of the XAJ model. Compared to the single objective optimization results, it was indicated that the multi-objective optimization method can infer the most probable parameter set. The results also demonstrate that the use of surrogate-modeling enables optimization that is much more efficient; and the total computational cost is reduced by about 92.5%, compared to optimization without using surrogate modeling. The results obtained with the proposed method support the feasibility of applying parameter optimization to computationally intensive simulation
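
    A compact sketch of the surrogate-assisted calibration loop: sample the parameter space, run the (expensive) model at those points, fit a cheap surrogate to the objective, and optimise on the surrogate. A Gaussian-process surrogate stands in for the MARS model used in the paper, and the "expensive model" is a toy two-parameter function.

        import numpy as np
        from scipy.optimize import differential_evolution
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(5)

        def expensive_model_error(params):
            # Placeholder for a full rainfall-runoff run plus goodness-of-fit evaluation
            x, y = params
            return (x - 0.3)**2 + 2.0*(y - 0.7)**2 + 0.01*np.sin(20*x)

        bounds = [(0.0, 1.0), (0.0, 1.0)]
        X = rng.uniform(0, 1, size=(40, 2))                        # design of experiments
        z = np.array([expensive_model_error(p) for p in X])        # 40 "expensive" runs

        surrogate = GaussianProcessRegressor().fit(X, z)           # cheap emulator
        result = differential_evolution(
            lambda p: float(surrogate.predict(np.atleast_2d(p))[0]), bounds, seed=0)
        print("surrogate-optimal parameters:", np.round(result.x, 3))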

  2. Assessing models for parameters of the Ångström-Prescott formula in China

    DEFF Research Database (Denmark)

    Liu, Xiaoying; Xu, Yinlong; Zhong, Xiuli

    2012-01-01

    Application of the Ångström–Prescott (A–P) model, one of the best rated global solar irradiation (Rs) models based on sunshine, is often limited by the lack of model parameters. Increasing the availability of its parameters in the absence of Rs measurements provides an effective way to overcome...... this problem. Although some models relating the A–P parameters to other variables have been developed, they generally lack a worldwide validity test. Using data from 80 sites covering three agro-climatic zones in China, we evaluated seven models that relate the parameters to the annual average of relative sunshine...

  3. Monoenergetic electron parameters in a spheroid bubble model

    Institute of Scientific and Technical Information of China (English)

    H.Sattarian; Sh.Rahmatallahpur; T.Tohidi

    2013-01-01

    A reliable analytical expression for the potential of plasma waves with phase velocities near the speed of light is derived. The presented spheroid cavity model is more consistent than the previous spherical and ellipsoidal models and it explains the mono-energetic electron trajectory more accurately, especially in the relativistic region. The maximum energy of electrons is calculated and it is shown that the maximum energy of the spheroid model is less than that of the spherical model. The electron energy spectrum is also calculated and it is found that the energy distribution ratio of electrons ΔE/E for the spheroid model under the conditions reported here is half that of the spherical model and it is in good agreement with the experimental value under the same conditions. As a result, the quasi-mono-energetic electron output beam interacting with the laser plasma can be more appropriately described with this model.

  4. Modeling Subducting Slabs: Structural Variations due to Thermal Models, Latent Heat Feedback, and Thermal Parameter

    Science.gov (United States)

    Marton, F. C.

    2001-12-01

    The thermal, mineralogical, and buoyancy structures of thermal-kinetic models of subducting slabs are highly dependent upon a number of parameters, especially if the metastable persistence of olivine in the transition zone is investigated. The choice of starting thermal model for the lithosphere, whether a cooling halfspace (HS) or plate model, can have a significant effect, resulting in metastable wedges of olivine that differ in size by up to two to three times for high values of the thermal parameter (φ). Moreover, as φ is the product of the age of the lithosphere at the trench, convergence rate, and dip angle, slabs with similar φ values can show great variations in structures as these constituents change. This is especially true for old lithosphere, as the lithosphere continually cools and thickens with age for HS models, but plate models, with parameters from Parsons and Sclater [1977] (PS) or Stein and Stein [1992] (GDH1), achieve a thermal steady-state and constant thickness in about 70 My. In addition, the latent heats (q) of the phase transformations of the Mg2SiO4 polymorphs can also have significant effects in the slabs. Including q feedback in models raises the temperature and reduces the extent of metastable olivine, causing the sizes of the metastable wedges to vary by factors of up to two times. The effects of the choice of thermal model, inclusion and non-inclusion of q feedback, and variations in the constituents of φ are investigated for several model slabs.

  5. Parameter Extraction for PSpice Models by means of an Automated Optimization Tool – An IGBT model Study Case

    DEFF Research Database (Denmark)

    Suárez, Carlos Gómez; Reigosa, Paula Diaz; Iannuzzo, Francesco;

    2016-01-01

    An original tool for parameter extraction of PSpice models has been released, enabling a simple parameter identification. A physics-based IGBT model is used to demonstrate that the optimization tool is capable of generating a set of parameters which predicts the steady-state and switching behavio...

  6. Target Rotations and Assessing the Impact of Model Violations on the Parameters of Unidimensional Item Response Theory Models

    Science.gov (United States)

    Reise, Steven; Moore, Tyler; Maydeu-Olivares, Alberto

    2011-01-01

    Reise, Cook, and Moore proposed a "comparison modeling" approach to assess the distortion in item parameter estimates when a unidimensional item response theory (IRT) model is imposed on multidimensional data. Central to their approach is the comparison of item slope parameter estimates from a unidimensional IRT model (a restricted model), with…

  7. Atomic modeling of cryo-electron microscopy reconstructions--joint refinement of model and imaging parameters.

    Science.gov (United States)

    Chapman, Michael S; Trzynka, Andrew; Chapman, Brynmor K

    2013-04-01

    When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5-2.5 Å at resolutions of 4.5-6 Å.

  8. An approach to measure parameter sensitivity in watershed hydrologic modeling

    Data.gov (United States)

    U.S. Environmental Protection Agency — Abstract Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier...

  9. Investigations of the sensitivity of a coronal mass ejection model (ENLIL) to solar input parameters

    DEFF Research Database (Denmark)

    Falkenberg, Thea Vilstrup; Vršnak, B.; Taktakishvili, A.;

    2010-01-01

    Severe space weather effects are caused by coronal mass ejections (CMEs), but in order to predict the caused effects, we need to be able to model their propagation from their origin in the solar corona to the point of interest, e.g., Earth. Many such models exist, but to understand the models in detail we must understand the primary input parameters. Here we investigate the parameter space of the ENLILv2.5b model using the CME event of 25 July 2004. ENLIL is a time-dependent 3-D MHD model that can simulate the propagation of cone-shaped interplanetary coronal mass ejections (ICMEs) through the solar system. Excepting the cone parameters (radius, position, and initial velocity), all remaining parameters are varied, resulting in more than 20 runs investigated here. The output parameters considered are velocity, density, magnetic field strength, and temperature. We find that the largest effects on the model output come from the input parameters of upper limit...

  10. The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling

    Science.gov (United States)

    Dahl, Milo D.; Khavaran, Abbas

    2010-01-01

    Engineering applications for aircraft noise prediction contain models for physical phenomenon that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
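
    As a rough illustration of the approach described above, the sketch below replaces the fixed parameters of a toy spectral model with probability distributions, propagates them by Monte Carlo sampling to obtain an uncertainty band on the predicted level, and screens parameter influence with rank correlations. The model form, parameter names and distributions are illustrative assumptions, not the actual shock-associated noise prediction code.

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(0)

      def toy_noise_model(freq, amplitude, peak_freq, width):
          # Hypothetical spectral shape: a single broadband hump (stand-in for the real model).
          return amplitude * np.exp(-(np.log(freq / peak_freq)) ** 2 / (2 * width ** 2))

      freq = np.logspace(2, 4, 50)                       # frequency grid, Hz
      n_samples = 2000

      # Fixed parameters replaced by probability distributions (assumed ranges).
      amplitude = rng.normal(100.0, 10.0, n_samples)
      peak_freq = rng.normal(2000.0, 200.0, n_samples)
      width     = rng.uniform(0.4, 0.8, n_samples)

      spectra = np.array([toy_noise_model(freq, a, p, w)
                          for a, p, w in zip(amplitude, peak_freq, width)])

      # Output uncertainty: mean spectrum with a 95% band, reported at 3 kHz.
      mean_spec = spectra.mean(axis=0)
      lo, hi = np.percentile(spectra, [2.5, 97.5], axis=0)
      idx = np.argmin(np.abs(freq - 3000.0))
      print(f"level at 3 kHz: {mean_spec[idx]:.1f} (95% band {lo[idx]:.1f} - {hi[idx]:.1f})")

      # Crude global sensitivity: rank correlation of each parameter with the 3 kHz level.
      for name, samples in [("amplitude", amplitude), ("peak_freq", peak_freq), ("width", width)]:
          rho, _ = spearmanr(samples, spectra[:, idx])
          print(f"{name:10s} Spearman rho = {rho:+.2f}")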

  11. Material properties of the human posterior knee capsule

    NARCIS (Netherlands)

    Rachmat, H.H.; Janssen, D.W.; van Tienen, T.; Diercks, R.L.; Verkerke, Gijsbertus Jacob; Verdonschot, Nicolaas Jacobus Joseph; Fernandes, Paulo; Folgado, Joao; Silva, Miguel

    2012-01-01

    BACKGROUND: There is considerable interest to develop accurate subject-specific biomechanical models of the knee. Most of the existing models currently do not include a representation of the posterior knee capsule. In order to incorporate the posterior capsule in knee models, data is needed on its

  12. A strategy for “constraint-based” parameter specification for environmental models

    NARCIS (Netherlands)

    Gharari, S.; Shafiei, M.; Hrachowitz, M.; Fenicia, F.; Gupta, H.V.; Savenije, H.H.G.

    2013-01-01

    Many environmental systems models, such as conceptual rainfall-runoff models, rely on model calibration for parameter identification. For this, an observed output time series (such as runoff) is needed, but frequently not available. Here, we explore another way to constrain the parameter values of s

  13. THE RELATIONS BETWEEN MODEL PARAMETERS AND CERTAIN PHENOMENA IN TRAFFIC FLOW

    Institute of Scientific and Technical Information of China (English)

    OU Zhong-hui; TAO Ming-de; WU Zheng

    2004-01-01

    Based on the dimensionless dynamic model of traffic flow, the model parameters were compared against numerically simulated solutions, and the effects of the former on the latter were investigated. Several relations between the parameters were obtained, and a number of idealized results from the dimensionless dynamic model of traffic flow were derived.

  14. An Extension of the Rasch Model for Ratings Providing Both Location and Dispersion Parameters.

    Science.gov (United States)

    Andrich, David

    1982-01-01

    An elaboration of a psychometric model for rated data, which belongs to the class of Rasch models, is shown to provide a model with two parameters, one characterizing location and one characterizing dispersion. Characteristics of the dispersion parameter are discussed. (Author/JKS)

  15. Ramsay-Curve Item Response Theory for the Three-Parameter Logistic Item Response Model

    Science.gov (United States)

    Woods, Carol M.

    2008-01-01

    In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters of a unidimensional item response model using marginal maximum likelihood estimation. This study evaluates RC-IRT for the three-parameter logistic (3PL) model with comparisons to the normal model and to the empirical…
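
    For reference, the three-parameter logistic item response function on which both RC-IRT and the comparison models rest can be evaluated directly; the short sketch below (item parameters chosen arbitrarily) computes it with NumPy.

      import numpy as np

      def irt_3pl(theta, a, b, c):
          # P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
          return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

      theta = np.linspace(-3, 3, 7)                      # latent trait values
      print(np.round(irt_3pl(theta, a=1.2, b=0.0, c=0.2), 3))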

  16. Nonlinear model predictive control using parameter varying BP-ARX combination model

    Science.gov (United States)

    Yang, J.-F.; Xiao, L.-F.; Qian, J.-X.; Li, H.

    2012-03-01

    A novel back-propagation AutoRegressive with eXternal input (BP-ARX) combination model is constructed for model predictive control (MPC) of MIMO nonlinear systems, whose steady-state relation between inputs and outputs can be obtained. The BP neural network represents the steady-state relation, and the ARX model represents the linear dynamic relation between inputs and outputs of the nonlinear systems. The BP-ARX model is a global model and is identified offline, while the parameters of the ARX model are rescaled online according to BP neural network and operating data. Sequential quadratic programming is employed to solve the quadratic objective function online, and a shift coefficient is defined to constrain the effect time of the recursive least-squares algorithm. Thus, a parameter varying nonlinear MPC (PVNMPC) algorithm that responds quickly to large changes in system set-points and shows good dynamic performance when system outputs approach set-points is proposed. Simulation results in a multivariable stirred tank and a multivariable pH neutralisation process illustrate the applicability of the proposed method and comparisons of the control effect between PVNMPC and multivariable recursive generalised predictive controller are also performed.

  17. Accurate Critical Parameters for the Modified Lennard-Jones Model

    Science.gov (United States)

    Okamoto, Kazuma; Fuchizaki, Kazuhiro

    2017-03-01

    The critical parameters of the modified Lennard-Jones system were examined. The isothermal-isochoric ensemble was generated by conducting a molecular dynamics simulation for the system consisting of 6912, 8788, 10976, and 13500 particles. The equilibrium between the liquid and vapor phases was judged from the chemical potential of both phases upon establishing the coexistence envelope, from which the critical temperature and density were obtained invoking the renormalization group theory. The finite-size scaling enabled us to finally determine the critical temperature, pressure, and density as Tc = 1.0762(2), pc = 0.09394(17), and ρc = 0.331(3), respectively.
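
    The sketch below illustrates one common route from simulated coexistence densities to critical parameters: fitting the liquid-vapour density difference to the Ising scaling law and the coexistence diameter to the law of rectilinear diameters. The density values and this particular fitting route are illustrative assumptions; the cited study instead works from chemical potentials with finite-size scaling.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical coexistence data (reduced units): temperature, liquid and vapour densities.
      T     = np.array([0.90, 0.95, 1.00, 1.03, 1.05])
      rho_l = np.array([0.66, 0.62, 0.57, 0.53, 0.50])
      rho_v = np.array([0.05, 0.07, 0.10, 0.13, 0.16])

      beta_ising = 0.326                                 # 3D Ising order-parameter exponent

      def order_param(T, B, Tc):
          # rho_l - rho_v ~ B * (Tc - T)**beta near the critical point
          return B * np.clip(Tc - T, 0.0, None) ** beta_ising

      (B_fit, Tc_fit), _ = curve_fit(order_param, T, rho_l - rho_v, p0=[1.0, 1.08])

      # Law of rectilinear diameters: (rho_l + rho_v)/2 ~ rho_c + A * (Tc - T)
      (rhoc_fit, A_fit), _ = curve_fit(lambda T, rho_c, A: rho_c + A * (Tc_fit - T),
                                       T, 0.5 * (rho_l + rho_v), p0=[0.33, 0.1])
      print(f"estimated Tc ~ {Tc_fit:.3f}, rho_c ~ {rhoc_fit:.3f}")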

  18. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for th...

  19. Direct Estimation of Physical Parameters in Nonlinear Loudspeaker Models

    DEFF Research Database (Denmark)

    Knudsen, Morten

    1994-01-01

    For better loudspeaker unit and loudspeaker system design, improvements of the traditional linear, low frequency model of the electro-dynamic loudspeaker are essential....

  20. Mathematical modelling in blood coagulation : simulation and parameter estimation

    NARCIS (Netherlands)

    W.J.H. Stortelder (Walter); P.W. Hemker (Piet); H.C. Hemker

    1997-01-01

    textabstractThis paper describes the mathematical modelling of a part of the blood coagulation mechanism. The model includes the activation of factor X by a purified enzyme from Russel's Viper Venom (RVV), factor V and prothrombin, and also comprises the inactivation of the products formed. In this

  1. Model parameter uncertainty analysis for an annual field-scale P loss model

    Science.gov (United States)

    Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie

    2016-08-01

    Phosphorous (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there will exist an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainties. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model. Such insight can then be used to guide future data collection and model
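
    A minimal sketch of this kind of Monte Carlo uncertainty partitioning is given below. The single regression equation, coefficient standard errors and input error magnitudes are hypothetical placeholders, not the actual APLE equations.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000

      # Hypothetical regression for one P-loss pathway: loss = b0 + b1 * runoff * soil_P.
      # Parameter uncertainty: coefficients drawn from their estimated sampling distributions.
      b0 = rng.normal(0.10, 0.02, n)
      b1 = rng.normal(0.004, 0.0008, n)

      # Input uncertainty: runoff (mm) and soil test P (mg/kg) with assumed measurement errors.
      runoff = rng.normal(120.0, 15.0, n)
      soil_p = rng.normal(60.0, 8.0, n)

      def p_loss(b0, b1, runoff, soil_p):
          return b0 + b1 * runoff * soil_p

      total      = p_loss(b0, b1, runoff, soil_p)
      param_only = p_loss(b0, b1, runoff.mean(), soil_p.mean())   # inputs fixed at their means
      input_only = p_loss(b0.mean(), b1.mean(), runoff, soil_p)   # parameters fixed at their means

      print("prediction sd   :", round(float(total.std()), 3))
      print("parameter share :", round(float(param_only.var() / total.var()), 2))
      print("input share     :", round(float(input_only.var() / total.var()), 2))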

  2. The ILIUM forward modelling algorithm for multivariate parameter estimation and its application to derive stellar parameters from Gaia spectrophotometry

    Science.gov (United States)

    Bailer-Jones, C. A. L.

    2010-03-01

    I introduce an algorithm for estimating parameters from multidimensional data based on forward modelling. It performs an iterative local search to effectively achieve a non-linear interpolation of a template grid. In contrast to many machine-learning approaches, it avoids fitting an inverse model and the problems associated with this. The algorithm makes explicit use of the sensitivities of the data to the parameters, with the goal of better treating parameters which only have a weak impact on the data. The forward modelling approach provides uncertainty (full covariance) estimates in the predicted parameters as well as a goodness-of-fit for observations, thus providing a simple means of identifying outliers. I demonstrate the algorithm, ILIUM, with the estimation of stellar astrophysical parameters (APs) from simulations of the low-resolution spectrophotometry to be obtained by Gaia. The AP accuracy is competitive with that obtained by a support vector machine. For zero extinction stars covering a wide range of metallicity, surface gravity and temperature, ILIUM can estimate Teff to an accuracy of 0.3 per cent at G = 15 and to 4 per cent for (lower signal-to-noise ratio) spectra at G = 20, the Gaia limiting magnitude (mean absolute errors are quoted). [Fe/H] and logg can be estimated to accuracies of 0.1-0.4dex for stars with G <= 18.5, depending on the magnitude and what priors we can place on the APs. If extinction varies a priori over a wide range (0-10mag) - which will be the case with Gaia because it is an all-sky survey - then logg and [Fe/H] can still be estimated to 0.3 and 0.5dex, respectively, at G = 15, but much poorer at G = 18.5. Teff and AV can be estimated quite accurately (3-4 per cent and 0.1-0.2mag, respectively, at G = 15), but there is a strong and ubiquitous degeneracy in these parameters which limits our ability to estimate either accurately at faint magnitudes. Using the forward model, we can map these degeneracies (in advance) and thus
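
    The sketch below shows the general flavour of a sensitivity-driven forward-modelling fit: the data are linearized around the current parameter estimate using finite-difference sensitivities and updated in Gauss-Newton fashion, which also yields a covariance estimate for the fitted parameters. The toy forward model and this particular update scheme are simplifying assumptions; ILIUM's actual iteration over a template grid differs in detail.

      import numpy as np

      def forward_model(params):
          # Hypothetical stand-in for a spectral forward model f(APs) -> fluxes.
          teff, logg = params
          x = np.linspace(0.0, 1.0, 20)
          return teff * np.exp(-x) + logg * x ** 2

      def fit_forward(y_obs, p0, n_iter=20):
          """Gauss-Newton style local search driven by finite-difference sensitivities."""
          p = np.array(p0, dtype=float)
          for _ in range(n_iter):
              f0 = forward_model(p)
              J = np.empty((f0.size, p.size))            # sensitivities d(data)/d(parameters)
              for j in range(p.size):
                  dp = np.zeros_like(p)
                  dp[j] = 1e-4 * max(abs(p[j]), 1.0)
                  J[:, j] = (forward_model(p + dp) - f0) / dp[j]
              step, *_ = np.linalg.lstsq(J, y_obs - f0, rcond=None)
              p += step
          cov = np.linalg.inv(J.T @ J)                   # crude covariance (unit data variance)
          return p, cov

      y_obs = forward_model([1.0, 0.5]) + np.random.default_rng(2).normal(0, 0.01, 20)
      p_hat, cov = fit_forward(y_obs, p0=[0.8, 0.3])
      print(np.round(p_hat, 3), np.round(np.sqrt(np.diag(cov)), 4))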

  3. Viscoelastic Parameter Model of Magnetorheological Elastomers Based on Abel Dashpot

    Directory of Open Access Journals (Sweden)

    Fei Guo

    2014-04-01

    Full Text Available In this paper, a parametric constitutive model based on Abel dashpot is established in a simple form and with clear physical meaning to deduce the expression of dynamic mechanical modulus of MREs. Meanwhile, in consideration for the pressure stress on MREs in the experiment of shear mechanical properties or the application to vibration damper, some improvements are made on the particle chain model based on the coupled field. In addition, in order to verify the accuracy of the overall model, five groups of MREs samples based on silicone rubber with different volume fractions are prepared and the MCR51 rheometer is used to conduct the experiment of dynamic mechanical properties based on frequency and magnetic field scanning. Finally, experimental results indicate that the established model fits well with laboratory data; namely, the relationship between the dynamic modulus of MREs and changes in frequency and magnetic field is well described by the model.

  4. Water quality model parameter identification of an open channel in a long distance water transfer project based on finite difference, difference evolution and Monte Carlo.

    Science.gov (United States)

    Shao, Dongguo; Yang, Haidong; Xiao, Yi; Liu, Biyu

    2014-01-01

    A new method is proposed based on the finite difference method (FDM), differential evolution algorithm and Markov Chain Monte Carlo (MCMC) simulation to identify water quality model parameters of an open channel in a long distance water transfer project. Firstly, this parameter identification problem is considered as a Bayesian estimation problem and the forward numerical model is solved by FDM, and the posterior probability density function of the parameters is deduced. Then these parameters are estimated using a sampling method with differential evolution algorithm and MCMC simulation. Finally this proposed method is compared with FDM-MCMC by a twin experiment. The results show that the proposed method can be used to identify water quality model parameters of an open channel in a long distance water transfer project under different scenarios better with fewer iterations, higher reliability and anti-noise capability compared with FDM-MCMC. Therefore, it provides a new idea and method to solve the traceability problem in sudden water pollution accidents.
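
    A stripped-down version of the identification loop described above is sketched below: a one-parameter decay model is solved with a simple upwind finite-difference scheme and the posterior of its decay rate is sampled with random-walk Metropolis. The cited work uses differential evolution proposals and a richer water quality model; all names and values here are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)

      def forward_fdm(k, n_cells=50, u=0.5, dx=1.0, c0=10.0):
          # Steady-state concentration along a channel from an upwind finite-difference
          # discretisation of u dc/dx = -k c (a stand-in for the full water quality model).
          c = np.empty(n_cells)
          c[0] = c0
          for i in range(1, n_cells):
              c[i] = c[i - 1] / (1.0 + k * dx / u)
          return c

      # Synthetic observations at a few stations, generated with a "true" decay rate of 0.08.
      stations = np.array([5, 15, 25, 35, 45])
      obs = forward_fdm(0.08)[stations] + rng.normal(0, 0.1, stations.size)

      def log_posterior(k, sigma=0.1):
          if k <= 0:
              return -np.inf                             # flat prior on k > 0
          resid = obs - forward_fdm(k)[stations]
          return -0.5 * np.sum((resid / sigma) ** 2)

      # Random-walk Metropolis sampling of the posterior.
      k, logp = 0.05, log_posterior(0.05)
      chain = []
      for _ in range(5000):
          k_new = k + rng.normal(0, 0.01)
          logp_new = log_posterior(k_new)
          if np.log(rng.uniform()) < logp_new - logp:
              k, logp = k_new, logp_new
          chain.append(k)
      chain = np.array(chain[1000:])
      print(f"posterior mean k = {chain.mean():.3f} +/- {chain.std():.3f}")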

  5. Biomechanical evaluation of predictive parameters of progression in adolescent isthmic spondylolisthesis: a computer modeling and simulation study

    Directory of Open Access Journals (Sweden)

    Sevrain Amandine

    2012-01-01

    Full Text Available Abstract Background Pelvic incidence, sacral slope and slip percentage have been shown to be important predicting factors for assessing the risk of progression of low- and high-grade spondylolisthesis. Biomechanical factors, which affect the stress distribution and the mechanisms involved in the vertebral slippage, may also influence the risk of progression, but they are still not well known. The objective was to biomechanically evaluate how geometric sacral parameters influence shear and normal stress at the lumbosacral junction in spondylolisthesis. Methods A finite element model of a low-grade L5-S1 spondylolisthesis was constructed, including the morphology of the spine, pelvis and rib cage based on measurements from biplanar radiographs of a patient. Variations provided on this model aimed to study the effects on low grade spondylolisthesis as well as reproduce high grade spondylolisthesis. Normal and shear stresses at the lumbosacral junction were analyzed under various pelvic incidences, sacral slopes and slip percentages. Their influence on progression risk was statistically analyzed using a one-way analysis of variance. Results Stresses were mainly concentrated on the growth plate of S1, on the intervertebral disc of L5-S1, and ahead the sacral dome for low grade spondylolisthesis. For high grade spondylolisthesis, more important compression and shear stresses were seen in the anterior part of the growth plate and disc as compared to the lateral and posterior areas. Stress magnitudes over this area increased with slip percentage, sacral slope and pelvic incidence. Strong correlations were found between pelvic incidence and the resulting compression and shear stresses in the growth plate and intervertebral disc at the L5-S1 junction. Conclusions Progression of the slippage is mostly affected by a movement and an increase of stresses at the lumbosacral junction in accordance with spino-pelvic parameters. The statistical results provide

  6. [Posterior cortical atrophy].

    Science.gov (United States)

    Solyga, Volker Moræus; Western, Elin; Solheim, Hanne; Hassel, Bjørnar; Kerty, Emilia

    2015-06-02

    Posterior cortical atrophy is a neurodegenerative condition with atrophy of posterior parts of the cerebral cortex, including the visual cortex and parts of the parietal and temporal cortices. It presents early, in the 50s or 60s, with nonspecific visual disturbances that are often misinterpreted as ophthalmological, which can delay the diagnosis. The purpose of this article is to present current knowledge about symptoms, diagnostics and treatment of this condition. The review is based on a selection of relevant articles in PubMed and on the authors' own experience with the patient group. Posterior cortical atrophy causes gradually increasing impairment in reading, distance judgement, and the ability to perceive complex images. Examination of higher visual functions, neuropsychological testing, and neuroimaging contribute to diagnosis. In the early stages, patients do not have problems with memory or insight, but cognitive impairment and dementia can develop. It is unclear whether the condition is a variant of Alzheimer's disease, or whether it is a separate disease entity. There is no established treatment, but practical measures such as the aid of social care workers, telephones with large keypads, computers with voice recognition software and audiobooks can be useful. Currently available treatment has very limited effect on the disease itself. Nevertheless it is important to identify and diagnose the condition in its early stages in order to be able to offer patients practical assistance in their daily lives.

  7. Hydrological model parameter dimensionality is a weak measure of prediction uncertainty

    Directory of Open Access Journals (Sweden)

    S. Pande

    2015-04-01

    Full Text Available This paper shows that instability of hydrological system representation in response to different pieces of information and associated prediction uncertainty is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of the two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span across the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.

  8. Outlier-Tolerance RML Identification of Parameters in CAR Model

    Directory of Open Access Journals (Sweden)

    Hong Teng-teng

    2016-10-01

    Full Text Available Measured data inevitably contain abnormal values even under normal operating conditions. Most existing algorithms, such as least squares identification and maximum likelihood estimation, are easily affected by such abnormal data and can show large identification deviations. How to improve the robustness of existing algorithms, or to build a new parameter identification algorithm with outlier tolerance to abnormal data, is a difficult task in applications of system identification technology. In this paper, the sensitivity of the RML (recursive maximum likelihood) algorithm to sampled abnormal data is analyzed, and an improved algorithm for the CAR process is established to strengthen the outlier tolerance of RML identification when there are outliers in the sampling series. The improved algorithm not only effectively suppresses the negative impact of abnormal data but also improves the quality of the parameter identification results. Simulations given in this paper show that the improved RML algorithm has strong outlier tolerance. The results are relevant to engineering control, signal processing, industrial automation, aerospace and other fields.
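
    As an illustration of the general idea, the sketch below runs recursive least squares on a simulated CAR-type process and shrinks the update gain for large residuals with a Huber-style weight, so that isolated outliers do not dominate the estimates. The weighting rule and model orders are assumptions made for this example, not the paper's exact RML modification.

      import numpy as np

      rng = np.random.default_rng(4)

      def robust_rls(y, u, na=2, nb=1, lam=0.99, c=1.5):
          """Recursive least squares for y(t) = -a1 y(t-1) - ... + b1 u(t-1) + ... + e(t),
          with an outlier-tolerant weight that shrinks the gain for large residuals."""
          n = na + nb
          theta = np.zeros(n)
          P = 1e3 * np.eye(n)
          sigma = 1.0                                    # running residual scale
          for t in range(max(na, nb), len(y)):
              phi = np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1]])
              e = y[t] - phi @ theta
              w = 1.0 if abs(e) <= c * sigma else (c * sigma) / abs(e)
              K = P @ phi / (lam / w + phi @ P @ phi)
              theta = theta + K * e
              P = (P - np.outer(K, phi @ P)) / lam
              sigma = 0.99 * sigma + 0.01 * abs(e)
          return theta

      # Simulated CAR data with a few gross outliers added to the measurements.
      N = 500
      u = rng.normal(size=N)
      y = np.zeros(N)
      for t in range(2, N):
          y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + 0.8 * u[t - 1] + 0.05 * rng.normal()
      y[rng.choice(N, 5, replace=False)] += 10.0
      print(np.round(robust_rls(y, u), 3))               # roughly [-1.2, 0.5, 0.8] expected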

  9. Model parameters conditioning on regional hydrologic signatures for process-based design flood estimation in ungauged basins.

    Science.gov (United States)

    Biondi, Daniela; De Luca, Davide Luciano

    2015-04-01

    The use of rainfall-runoff models represents an alternative to statistical approaches (such as at-site or regional flood frequency analysis) for design flood estimation, and constitutes an answer to the increasing need for synthetic design hydrographs (SDHs) associated to a specific return period. However, the lack of streamflow observations and the consequent high uncertainty associated with parameter estimation, usually pose serious limitations to the use of process-based approaches in ungauged catchments, which in contrast represent the majority in practical applications. This work presents the application of a Bayesian procedure that, for a predefined rainfall-runoff model, allows for the assessment of posterior parameters distribution, using the limited and uncertain information available for the response of an ungauged catchment (Bulygina et al. 2009; 2011). The use of regional estimates of river flow statistics, interpreted as hydrological signatures that measure theoretically relevant system process behaviours (Gupta et al. 2008), within this framework represents a valuable option and has shown significant developments in recent literature to constrain the plausible model response and to reduce the uncertainty in ungauged basins. In this study we rely on the first three L-moments of annual streamflow maxima, for which regressions are available from previous studies (Biondi et al. 2012; Laio et al. 2011). The methodology was carried out for a catchment located in southern Italy, and used within a Monte Carlo scheme (MCs) considering both event-based and continuous simulation approaches for design flood estimation. The applied procedure offers promising perspectives to perform model calibration and uncertainty analysis in ungauged basins; moreover, in the context of design flood estimation, process-based methods coupled with MCs approach have the advantage of providing simulated floods uncertainty analysis that represents an asset in risk-based decision
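
    The sketch below illustrates the conditioning idea in its simplest, rejection-sampling form: prior parameter sets of a toy rainfall-runoff model are retained only if the L-moments of their simulated annual maxima match the regional estimates within a tolerance. The toy model, the way the regional signatures are generated, and the tolerance are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(5)

      def sample_l_moments(x):
          # First two sample L-moments (l1, l2) via probability-weighted moments.
          x = np.sort(x)
          n = x.size
          b0 = x.mean()
          b1 = np.sum(np.arange(n) * x) / (n * (n - 1))
          return b0, 2 * b1 - b0

      def toy_annual_maxima(k, s, n_years=200):
          # Hypothetical stand-in for a rainfall-runoff model driven by stochastic rainfall:
          # k is a runoff coefficient and s a storage parameter attenuating the peaks.
          rain = rng.gamma(shape=2.0, scale=20.0, size=(n_years, 365))
          return k * rain.max(axis=1) / (1.0 + s)

      # Regional signature estimates (here generated from a synthetic "true" catchment;
      # in practice they come from regional regressions on catchment descriptors).
      l1_reg, l2_reg = sample_l_moments(toy_annual_maxima(0.5, 0.4, n_years=1000))
      tol = 0.15                                         # relative tolerance on each signature

      # Prior sampling + rejection: keep parameter sets whose simulated signatures
      # match the regional ones (an ABC-like conditioning of the posterior).
      accepted = []
      for _ in range(300):
          k, s = rng.uniform(0.3, 1.0), rng.uniform(0.0, 1.0)
          l1, l2 = sample_l_moments(toy_annual_maxima(k, s))
          if abs(l1 - l1_reg) < tol * l1_reg and abs(l2 - l2_reg) < tol * l2_reg:
              accepted.append((k, s))
      print(f"accepted {len(accepted)} of 300 prior samples")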

  10. Squares of different sizes: effect of geographical projection on model parameter estimates in species distribution modeling.

    Science.gov (United States)

    Budic, Lara; Didenko, Gregor; Dormann, Carsten F

    2016-01-01

    In species distribution analyses, environmental predictors and distribution data for large spatial extents are often available in long-lat format, such as degree raster grids. Long-lat projections suffer from unequal cell sizes, as a degree of longitude decreases in length from approximately 110 km at the equator to 0 km at the poles. Here we investigate whether long-lat and equal-area projections yield similar model parameter estimates, or result in a consistent bias. We analyzed the environmental effects on the distribution of 12 ungulate species with a northern distribution, as models for these species should display the strongest effect of projectional distortion. Additionally, we chose four species with entirely continental distributions to investigate the effect of incomplete cell coverage at the coast. We expected that including model weights proportional to the actual cell area should compensate for the observed bias in model coefficients, and similarly that using land coverage of a cell should decrease bias in species with coastal distribution. As anticipated, model coefficients were different between long-lat and equal-area projections. Having progressively smaller, and more numerous, cells with increasing latitude influenced the importance of parameters in models, increased the sample size for the northernmost parts of species ranges, and reduced the subcell variability of those areas. However, this bias could be largely removed by weighting long-lat cells by the area they cover, and marginally by correcting for land coverage. Overall we found little effect of using long-lat rather than equal-area projections in our analysis. The fitted relationship between environmental parameters and occurrence probability differed only slightly between the two projection types. We still recommend using equal-area projections to avoid possible bias. More importantly, our results suggest that the cell area and the proportion of a cell covered by land should be
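
    The area-weighting correction mentioned above amounts to weighting each long-lat cell by a quantity proportional to cos(latitude) when fitting the distribution model. The sketch below does this for a small weighted logistic regression on synthetic cells; the predictor, the species response and the weight normalisation are illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(6)

      # Hypothetical long-lat raster cells: latitude (degrees) and one environmental predictor.
      lat  = rng.uniform(40, 80, 3000)
      temp = 25 - 0.4 * lat + rng.normal(0, 2, 3000)
      y = rng.binomial(1, 1 / (1 + np.exp(-0.8 * (temp - 5))))   # simulated occurrences

      # Weight each cell by its true area, proportional to cos(latitude) for a long-lat grid.
      w = np.cos(np.radians(lat))
      w *= w.size / w.sum()                              # normalise to a mean weight of 1

      X = np.column_stack([np.ones_like(temp), temp])

      def nll(beta, weights):
          eta = X @ beta
          return -np.sum(weights * (y * eta - np.log1p(np.exp(eta))))

      unweighted = minimize(nll, x0=[0.0, 0.0], args=(np.ones_like(w),)).x
      weighted   = minimize(nll, x0=[0.0, 0.0], args=(w,)).x
      print("unweighted slope:", round(unweighted[1], 3),
            " area-weighted slope:", round(weighted[1], 3))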

  11. On the Influence of Material Parameters in a Complex Material Model for Powder Compaction

    Science.gov (United States)

    Staf, Hjalmar; Lindskog, Per; Andersson, Daniel C.; Larsson, Per-Lennart

    2016-10-01

    Parameters in a complex material model for powder compaction, based on a continuum mechanics approach, are evaluated using real insert geometries. The parameter sensitivity with respect to density and stress after compaction, pertinent to a wide range of geometries, is studied in order to investigate completeness and limitations of the material model. Finite element simulations with varied material parameters are used to build surrogate models for the sensitivity study. The conclusion from this analysis is that a simplification of the material model is relevant, especially for simple insert geometries. Parameters linked to anisotropy and the plastic strain evolution angle have a small impact on the final result.

  12. Long Memory of Financial Time Series and Hidden Markov Models with Time-Varying Parameters

    DEFF Research Database (Denmark)

    Nystrup, Peter; Madsen, Henrik; Lindström, Erik

    2016-01-01

    ...estimation approach that allows for the parameters of the estimated models to be time varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step density forecasts. Finally, it is shown that the forecasting performance of the estimated models can be further improved using local smoothing to forecast the parameter variations.

  13. When the optimal is not the best: parameter estimation in complex biological models.

    Directory of Open Access Journals (Sweden)

    Diego Fernández Slezak

    Full Text Available BACKGROUND: The vast computational resources that became available during the past decade enabled the development and simulation of increasingly complex mathematical models of cancer growth. These models typically involve many free parameters whose determination is a substantial obstacle to model development. Direct measurement of biochemical parameters in vivo is often difficult and sometimes impracticable, while fitting them under data-poor conditions may result in biologically implausible values. RESULTS: We discuss different methodological approaches to estimate parameters in complex biological models. We make use of the high computational power of the Blue Gene technology to perform an extensive study of the parameter space in a model of avascular tumor growth. We explicitly show that the landscape of the cost function used to optimize the model to the data has a very rugged surface in parameter space. This cost function has many local minima with unrealistic solutions, including the global minimum corresponding to the best fit. CONCLUSIONS: The case studied in this paper shows one example in which model parameters that optimally fit the data are not necessarily the best ones from a biological point of view. To avoid force-fitting a model to a dataset, we propose that the best model parameters should be found by choosing, among suboptimal parameters, those that match criteria other than the ones used to fit the model. We also conclude that the model, data and optimization approach form a new complex system and point to the need of a theory that addresses this problem more generally.

  14. Parameter study of a model for NOx emissions from PFBC

    DEFF Research Database (Denmark)

    Jensen, Anker Degn; Johnsson, Jan Erik

    1996-01-01

    Simulations with a mathematical model of a pressurized bubbling fluidized bed combustor (PFBC) combined with a kinetic model for NO formation and reduction are presented and discussed. The kinetic model for NO formation and reduction considers NO and NH3 as the fixed nitrogen species, and includes homogeneous reactions and heterogeneous reactions catalyzed by bed material and char. Simulations of the influence of operating conditions (air staging, load, temperature, fuel particle size, bed particle size and bed inventory) on the NO emission are presented, and the trends are compared to experimental data...

  15. Relaxation oscillation model of hemodynamic parameters in the cerebral vessels

    Science.gov (United States)

    Cherevko, A. A.; Mikhaylova, A. V.; Chupakhin, A. P.; Ufimtseva, I. V.; Krivoshapkin, A. L.; Orlov, K. Yu

    2016-06-01

    Simulation of a blood flow under normality as well as under pathology is an extremely complex problem of great current interest, both from the point of view of fundamental hydrodynamics and for medical applications. This paper proposes a model based on the Van der Pol - Duffing nonlinear oscillator equation describing relaxation oscillations of blood flow in the cerebral vessels. The model is based on patient-specific clinical experimental flow data obtained during neurosurgical operations at the Meshalkin Novosibirsk Research Institute of Circulation Pathology. The stability of the model is demonstrated through variations of the initial data and coefficients. It is universal and describes pressure and velocity fluctuations in different cerebral vessels (arteries, veins, sinuses), as well as in a laboratory model of carotid bifurcation. The derived equation describes the rheology of the "blood stream - elastic vessel wall - gelatinous brain environment" composite system and represents the state equation of this complex environment.
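
    For concreteness, a Van der Pol - Duffing oscillator of the general form used for such relaxation oscillations can be integrated directly; in the sketch below the coefficients and forcing are arbitrary illustrative values, whereas in the cited work they are fitted to patient-specific velocity and pressure recordings.

      import numpy as np
      from scipy.integrate import solve_ivp

      # x'' - mu (1 - x^2) x' + alpha x + beta x^3 = A cos(omega t)   (illustrative coefficients)
      mu, alpha, beta, A, omega = 1.0, 1.0, 0.5, 0.3, 1.2

      def rhs(t, state):
          x, v = state
          return [v, mu * (1 - x ** 2) * v - alpha * x - beta * x ** 3 + A * np.cos(omega * t)]

      sol = solve_ivp(rhs, (0, 100), [0.1, 0.0], max_step=0.05)
      print("last samples of the relaxation oscillation:", np.round(sol.y[0][-5:], 3))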

  16. Oxidative Stress and Light-Evoked Responses of the Posterior Segment in a Mouse Model of Diabetic Retinopathy

    Science.gov (United States)

    Berkowitz, Bruce A.; Grady, Edmund Michael; Khetarpal, Nikita; Patel, Akshar; Roberts, Robin

    2015-01-01

    Purpose. To test the hypothesis that in a mouse model of diabetic retinopathy, oxidative stress is linked with impaired light-evoked expansion of choroidal thickness and subretinal space (SRS). Methods. We examined nondiabetic mice (wild-type, wt) with and without administration of manganese, nondiabetic mice deficient in rod phototransduction (transducin alpha knockout; GNAT1−/−), and diabetic mice (untreated or treated with the antioxidant α-lipoic acid [LPA]). Magnetic resonance imaging (MRI) was used to measure light-evoked increases in choroidal thickness and the apparent diffusion coefficient (ADC) at 88% to 100% depth into the retina (i.e., the SRS layer). Results. Choroidal thickness values were similar (P > 0.05) between all untreated nondiabetic dark-adapted groups and increased significantly (P < 0.05) [...] (P > 0.05). In diabetic mice, the light-dependent increase in SRS ADC was significantly (P < 0.05) [...] choroid thickness and its light-evoked expansion together with phototransduction-dependent changes in the SRS layer in mice in vivo. Because ADC MRI exploits an endogenous contrast mechanism, its translational potential is promising; it can also be performed in concert with manganese-enhanced MRI (MEMRI). Our data support a link between diabetes-related oxidative stress and rod, but not choroidal, pathophysiology. PMID:25574049

  17. A Model of Emergent Category-specific Activation in the Posterior Fusiform Gyrus of Sighted and Congenitally Blind Populations.

    Science.gov (United States)

    Chen, Lang; Rogers, Timothy T

    2015-10-01

    Theories about the neural bases of semantic knowledge tend between two poles, one proposing that distinct brain regions are innately dedicated to different conceptual domains and the other suggesting that all concepts are encoded within a single network. Category-sensitive functional activations in the fusiform cortex of the congenitally blind have been taken to support the former view but also raise several puzzles. We use neural network models to assess a hypothesis that spans the two poles: The interesting functional activation patterns reflect the base connectivity of a domain-general semantic network. Both similarities and differences between sighted and congenitally blind groups can emerge through learning in a neural network, but only in architectures adopting real anatomical constraints. Surprisingly, the same constraints suggest a novel account of a quite different phenomenon: the dyspraxia observed in patients with semantic impairments from anterior temporal pathology. From this work, we suggest that the cortical semantic network is wired not to encode knowledge of distinct conceptual domains but to promote learning about both conceptual and affordance structure in the environment.

  18. Fatigue reliability based on residual strength model with hybrid uncertain parameters

    Institute of Scientific and Technical Information of China (English)

    Jun Wang; Zhi-Ping Qiu

    2012-01-01

    The aim of this paper is to evaluate the fatigue reliability with hybrid uncertain parameters based on a residual strength model. By solving the non-probabilistic set-based reliability problem and analyzing the reliability with randomness, the fatigue reliability with hybrid parameters can be obtained. The presented hybrid model can adequately consider all uncertainties affecting the fatigue reliability with hybrid uncertain parameters. A comparison among the presented hybrid model, the non-probabilistic set-theoretic model and the conventional random model is made through two typical numerical examples. The results show that the presented hybrid model, which can ensure structural security, is effective and practical.

  19. Estimation of the scale parameter of gamma model in presence of outlier observations

    Directory of Open Access Journals (Sweden)

    M. E. Ghitany

    1990-01-01

    Full Text Available This paper considers the Bayesian point estimation of the scale parameter for a two-parameter gamma life-testing model in the presence of several outlier observations in the data. The Bayesian analysis is carried out under the assumption of a squared error loss function and a fixed or random shape parameter.
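
    Without the outlier component, the core of such an analysis is a conjugate calculation; the sketch below gives the posterior-mean (squared-error-loss) estimate of the gamma scale parameter with the shape treated as known. The prior values and data are illustrative, and the paper's explicit handling of outlier observations is omitted.

      import numpy as np

      rng = np.random.default_rng(7)

      # Lifetimes from a two-parameter gamma model with known shape and unknown scale theta.
      shape_a, theta_true = 2.0, 5.0
      x = rng.gamma(shape_a, theta_true, size=30)

      # Conjugate analysis on the rate lam = 1/theta: a Gamma(a0, b0) prior on lam gives a
      # Gamma(a0 + n*shape_a, b0 + sum(x)) posterior, so theta | x is inverse-gamma.
      a0, b0 = 2.0, 5.0
      a_post, b_post = a0 + x.size * shape_a, b0 + x.sum()

      # Under squared-error loss the Bayes estimate of theta is its posterior mean.
      theta_bayes = b_post / (a_post - 1.0)
      print(f"posterior mean of theta: {theta_bayes:.2f} (true value {theta_true})")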

  20. Posterior distributions for likelihood ratios in forensic science.

    Science.gov (United States)

    van den Hout, Ardo; Alberink, Ivo

    2016-09-01

    Evaluation of evidence in forensic science is discussed using posterior distributions for likelihood ratios. Instead of eliminating the uncertainty by integrating (Bayes factor) or by conditioning on parameter values, uncertainty in the likelihood ratio is retained by parameter uncertainty derived from posterior distributions. A posterior distribution for a likelihood ratio can be summarised by the median and credible intervals. Using the posterior mean of the distribution is not recommended. An analysis of forensic data for body height estimation is undertaken. The posterior likelihood approach has been criticised both theoretically and with respect to applicability. This paper addresses the latter and illustrates an interesting application area. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
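
    A small sketch of the idea, for a single normally distributed measurement with conjugate posteriors on the competing source means (all values illustrative), is given below: parameter uncertainty is propagated into a posterior sample of likelihood ratios, which is then summarised by its median and a credible interval rather than by its mean.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(8)

      # Illustrative reference data: measurements from the suspected source and the population.
      source_data = rng.normal(180.0, 4.0, 25)
      pop_data    = rng.normal(175.0, 7.0, 200)
      evidence    = 181.0
      sd_source, sd_pop = 4.0, 7.0                       # treated as known for simplicity

      def mu_posterior(data, sd, prior_mu=175.0, prior_sd=25.0, n=20000):
          # Conjugate normal posterior for a mean with known sd; returns posterior draws.
          prec = 1.0 / prior_sd ** 2 + data.size / sd ** 2
          mu_n = (prior_mu / prior_sd ** 2 + data.sum() / sd ** 2) / prec
          return rng.normal(mu_n, np.sqrt(1.0 / prec), n)

      mu_p = mu_posterior(source_data, sd_source)        # prosecution: same-source model
      mu_d = mu_posterior(pop_data, sd_pop)              # defence: population model

      # Retaining parameter uncertainty yields a posterior distribution of likelihood ratios.
      lr = norm.pdf(evidence, mu_p, sd_source) / norm.pdf(evidence, mu_d, sd_pop)
      print("median LR:", round(float(np.median(lr)), 2))
      print("95% credible interval:", np.round(np.percentile(lr, [2.5, 97.5]), 2))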

  1. Mathematical modeling to reconstruct Elastic and geoelectrical parameters

    Directory of Open Access Journals (Sweden)

    Y. V. Kiselev

    2002-06-01

    Full Text Available The monitoring of the underground medium requires an estimation of the accuracy of the methods used. Numerical simulation of the solution of the 2D inverse problem of reconstructing the seismic and electrical parameters of local (comparable in size with the wavelength) inhomogeneities by the diffraction tomography method, based upon the first-order Born approximation, is considered. The direct problems for the Lame and Maxwell equations are solved by the finite difference method, which allows us to take correctly into account the diffraction phenomena produced by target inhomogeneities with simple and complex geometry. For the reconstruction of the local inhomogeneities, algebraic methods and optimizing procedures are used. The investigation includes a parametric representation of inhomogeneities by simple and complex functions. The results of estimating the accuracy of the reconstruction of elastic inhomogeneities and inhomogeneities of electrical conductivity by the diffraction tomography method are presented.

  2. Expression of Livin in animal model of posterior capsule opacification%Livin 在后发性白内障动物模型中的表达

    Institute of Scientific and Technical Information of China (English)

    刘淑君; 赵桂秋; 李元彬; 蔄雪静; 王文亭; 张振华

    2013-01-01

    AIM: To establish an animal model of posterior capsule opacification (PCO) in New Zealand white rabbits and to detect the expression of Livin in PCO tissue. METHODS: Thirty healthy adult New Zealand white rabbits were randomly divided into an experimental group and a control group. Ultrasonic phacoemulsification was performed in the 25 experimental rabbits under intramuscular injection anesthesia. The rabbits' eyes were examined by slit lamp microscope to observe the development of PCO immediately after surgery and at 3, 7, 14 and 28 days. The 5 rabbits in the control group were sacrificed to obtain the posterior capsules of the right eyes. Reverse transcription polymerase chain reaction (RT-PCR) and western blotting were performed to detect the expression of Livin in PCO at the different postoperative time points. RESULTS: Both RT-PCR and western blotting showed that Livin expression could be detected, at different levels, in the PCO tissue of groups B, C, D and E, but not in the control group or in group A (taken immediately after surgery). Both methods indicated that Livin expression peaked in group C, decreased in group D, was lower in group E, and was lowest in group B. CONCLUSION: The expression of Livin can be detected in PCO tissue over a certain period of time, indicating that Livin correlates with the pathogenesis of PCO. It may provide a novel tool for the investigation of gene therapy for PCO.

  3. Auxiliary proteins that facilitate formation of collagen-rich deposits in the posterior knee capsule in a rabbit-based joint contracture model.

    Science.gov (United States)

    Steplewski, Andrzej; Fertala, Jolanta; Beredjiklian, Pedro K; Abboud, Joseph A; Wang, Mark L Y; Namdari, Surena; Barlow, Jonathan; Rivlin, Michael; Arnold, William V; Kostas, James; Hou, Cheryl; Fertala, Andrzej

    2016-03-01

    Post-traumatic joint contracture is a debilitating consequence of trauma or surgical procedures. It is associated with fibrosis that develops regardless of the nature of initial trauma and results from complex biological processes associated with inflammation and cell activation. These processes accelerate production of structural elements of the extracellular matrix, particularly collagen fibrils. Although the increased production of collagenous proteins has been demonstrated in tissues of contracted joints, researchers have not yet determined the complex protein machinery needed for the biosynthesis of collagen molecules and for their assembly into fibrils. Consequently, the purpose of our study was to investigate key enzymes and protein chaperones needed to produce collagen-rich deposits. Using a rabbit model of joint contracture, our biochemical and histological assays indicated changes in the expression patterns of heat shock protein 47 and the α-subunit of prolyl 4-hydroxylase, key proteins in processing nascent collagen chains. Moreover, our study shows that the abnormal organization of collagen fibrils in the posterior capsules of injured knees, rather than excessive formation of fibril-stabilizing cross-links, may be a key reason for observed changes in the mechanical characteristics of injured joints. This result sheds new light on pathomechanisms of joint contraction, and identifies potentially attractive anti-fibrotic targets. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  4. Heat Shock Protein Beta-1 Modifies Anterior to Posterior Purkinje Cell Vulnerability in a Mouse Model of Niemann-Pick Type C Disease.

    Directory of Open Access Journals (Sweden)

    Chan Chung

    2016-05-01

    Full Text Available Selective neuronal vulnerability is characteristic of most degenerative disorders of the CNS, yet mechanisms underlying this phenomenon remain poorly characterized. Many forms of cerebellar degeneration exhibit an anterior-to-posterior gradient of Purkinje cell loss including Niemann-Pick type C1 (NPC disease, a lysosomal storage disorder characterized by progressive neurological deficits that often begin in childhood. Here, we sought to identify candidate genes underlying vulnerability of Purkinje cells in anterior cerebellar lobules using data freely available in the Allen Brain Atlas. This approach led to the identification of 16 candidate neuroprotective or susceptibility genes. We demonstrate that one candidate gene, heat shock protein beta-1 (HSPB1, promoted neuronal survival in cellular models of NPC disease through a mechanism that involved inhibition of apoptosis. Additionally, we show that over-expression of wild type HSPB1 or a phosphomimetic mutant in NPC mice slowed the progression of motor impairment and diminished cerebellar Purkinje cell loss. We confirmed the modulatory effect of Hspb1 on Purkinje cell degeneration in vivo, as knockdown by Hspb1 shRNA significantly enhanced neuron loss. These results suggest that strategies to promote HSPB1 activity may slow the rate of cerebellar degeneration in NPC disease and highlight the use of bioinformatics tools to uncover pathways leading to neuronal protection in neurodegenerative disorders.

  5. Corruption of parameter behavior and regionalization by model and forcing data errors: A Bayesian example using the SNOW17 model

    Science.gov (United States)

    He, Minxue; Hogue, Terri S.; Franz, Kristie J.; Margulis, Steven A.; Vrugt, Jasper A.

    2011-07-01

    The current study evaluates the impacts of various sources of uncertainty involved in hydrologic modeling on parameter behavior and regionalization utilizing different Bayesian likelihood functions and the Differential Evolution Adaptive Metropolis (DREAM) algorithm. The developed likelihood functions differ in their underlying assumptions and treatment of error sources. We apply the developed method to a snow accumulation and ablation model (National Weather Service SNOW17) and generate parameter ensembles to predict snow water equivalent (SWE). Observational data include precipitation and air temperature forcing along with SWE measurements from 24 sites with diverse hydroclimatic characteristics. A multiple linear regression model is used to construct regionalization relationships between model parameters and site characteristics. Results indicate that model structural uncertainty has the largest influence on SNOW17 parameter behavior. Precipitation uncertainty is the second largest source of uncertainty, showing greater impact at wetter sites. Measurement uncertainty in SWE tends to have little impact on the final model parameters and resulting SWE predictions. Considering all sources of uncertainty, parameters related to air temperature and snowfall fraction exhibit the strongest correlations to site characteristics. Parameters related to the length of the melting period also show high correlation to site characteristics. Finally, model structural uncertainty and precipitation uncertainty dramatically alter parameter regionalization relationships in comparison to cases where only uncertainty in model parameters or output measurements is considered. Our results demonstrate that accurate treatment of forcing, parameter, model structural, and calibration data errors is critical for deriving robust regionalization relationships.

  6. A multivariate random-parameters Tobit model for analyzing highway crash rates by injury severity.

    Science.gov (United States)

    Zeng, Qiang; Wen, Huiying; Huang, Helai; Pei, Xin; Wong, S C

    2017-02-01

    In this study, a multivariate random-parameters Tobit model is proposed for the analysis of crash rates by injury severity. In the model, both correlation across injury severity and unobserved heterogeneity across road-segment observations are accommodated. The proposed model is compared with a multivariate (fixed-parameters) Tobit model in the Bayesian context, by using a crash dataset collected from the Traffic Information System of Hong Kong. The dataset contains crash, road geometric and traffic information on 224 directional road segments for a five-year period (2002-2006). The multivariate random-parameters Tobit model provides a much better fit than its fixed-parameters counterpart, according to the deviance information criteria and Bayesian R(2), while it reveals a higher correlation between crash rates at different severity levels. The parameter estimates show that a few risk factors (bus stop, lane changing opportunity and lane width) have heterogeneous effects on crash-injury-severity rates. For the other factors, the variances of their random parameters are insignificant at the 95% credibility level, then the random parameters are set to be fixed across observations. Nevertheless, most of these fixed coefficients are estimated with higher precisions (i.e., smaller variances) in the random-parameters model. Thus, the random-parameters Tobit model, which provides a more comprehensive understanding of the factors' effects on crash rates by injury severity, is superior to the multivariate Tobit model and should be considered a good alternative for traffic safety analysis.

  7. Measuring the basic parameters of neutron stars using model atmospheres

    Energy Technology Data Exchange (ETDEWEB)

    Suleimanov, V.F. [Universitaet Tuebingen, Institut fuer Astronomie und Astrophysik, Kepler Center for Astro and Particle Physics, Tuebingen (Germany); Kazan Federal University, Kazan (Russian Federation); Poutanen, J. [University of Turku, Tuorla Observatory, Department of Physics and Astronomy, Piikkioe (Finland); KTH Royal Institute of Technology and Stockholm University, Nordita, Stockholm (Sweden); Klochkov, D.; Werner, K. [Universitaet Tuebingen, Institut fuer Astronomie und Astrophysik, Kepler Center for Astro and Particle Physics, Tuebingen (Germany)

    2016-02-15

    Model spectra of neutron star atmospheres are nowadays widely used to fit the observed thermal X-ray spectra of neutron stars. This fitting is the key element in the method of the neutron star radius determination. Here, we present the basic assumptions used for the neutron star atmosphere modeling as well as the main qualitative features of the stellar atmospheres leading to the deviations of the emergent model spectrum from blackbody. We describe the properties of two of our model atmosphere grids: i) pure carbon atmospheres for relatively cool neutron stars (1-4MK) and ii) hot atmospheres with Compton scattering taken into account. The results obtained by applying these grids to model the X-ray spectra of the central compact object in supernova remnant HESS 1731-347, and two X-ray bursting neutron stars in low-mass X-ray binaries, 4U 1724-307 and 4U 1608-52, are presented. Possible systematic uncertainties associated with the obtained neutron star radii are discussed. (orig.)

  8. Measuring the basic parameters of neutron stars using model atmospheres

    CERN Document Server

    Suleimanov, V F; Klochkov, D; Werner, K

    2015-01-01

    Model spectra of neutron star atmospheres are nowadays widely used to fit the observed thermal X-ray spectra of neutron stars. This fitting is the key element in the method of the neutron star radius determination. Here, we present the basic assumptions used for the neutron star atmosphere modeling as well as the main qualitative features of the stellar atmospheres leading to the deviations of the emergent model spectrum from blackbody. We describe the properties of two of our model atmosphere grids: (i) pure carbon atmospheres for relatively cool neutron stars (1-4 MK) and (ii) hot atmospheres with Compton scattering taken into account. The results obtained by applying these grids to model the X-ray spectra of the central compact object in supernova remnant HESS 1731-347, and two X-ray bursting neutron stars in low-mass X-ray binaries, 4U 1724-307 and 4U 1608-52, are presented. Possible systematic uncertainties associated with the obtained neutron star radii are discussed.

  9. Multi-objective global sensitivity analysis of the WRF model parameters

    Science.gov (United States)

    Quan, Jiping; Di, Zhenhua; Duan, Qingyun; Gong, Wei; Wang, Chen

    2015-04-01

    Tuning model parameters to match model simulations with observations can be an effective way to enhance the performance of numerical weather prediction (NWP) models such as Weather Research and Forecasting (WRF) model. However, this is a very complicated process as a typical NWP model involves many model parameters and many output variables. One must take a multi-objective approach to ensure all of the major simulated model outputs are satisfactory. This talk presents the results of an investigation of multi-objective parameter sensitivity analysis of the WRF model to different model outputs, including conventional surface meteorological variables such as precipitation, surface temperature, humidity and wind speed, as well as atmospheric variables such as total precipitable water, cloud cover, boundary layer height and outgoing long radiation at the top of the atmosphere. The goal of this study is to identify the most important parameters that affect the predictive skill of short-range meteorological forecasts by the WRF model. The study was performed over the Greater Beijing Region of China. A total of 23 adjustable parameters from seven different physical parameterization schemes were considered. Using a multi-objective global sensitivity analysis method, we examined the WRF model parameter sensitivities to the 5-day simulations of the aforementioned model outputs. The results show that parameter sensitivities vary with different model outputs. But three to four of the parameters are shown to be sensitive to all model outputs considered. The sensitivity results from this research can be the basis for future model parameter optimization of the WRF model.
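
    A compact stand-in for such a screening study is sketched below: parameters are sampled with a Latin hypercube design, a toy multi-output surrogate plays the role of the WRF runs, and rank correlations give a crude per-output sensitivity ranking. The surrogate, the parameter count and the design size are illustrative assumptions only.

      import numpy as np
      from scipy.stats import qmc, spearmanr

      def toy_model(p):
          # Hypothetical surrogate: three output variables driven by four scheme parameters.
          p1, p2, p3, p4 = p
          precip = 2.0 * p1 + 0.5 * p2 ** 2 + 0.1 * p4
          t2m    = 0.3 * p1 + 1.5 * p3
          olr    = 0.8 * p2 + 0.8 * p3 + 0.2 * p1 * p3
          return np.array([precip, t2m, olr])

      # Space-filling design over the normalised parameter ranges, one row per model run.
      X = qmc.LatinHypercube(d=4, seed=1).random(256)
      Y = np.array([toy_model(x) for x in X])

      # Rank-correlation screening: a parameter that scores highly for every output
      # is a candidate for multi-objective tuning.
      for j, name in enumerate(["precip", "t2m", "olr"]):
          rho = [abs(spearmanr(X[:, i], Y[:, j])[0]) for i in range(4)]
          print(name, np.round(rho, 2))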

  10. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models.

    Directory of Open Access Journals (Sweden)

    Jesse Whittington

    Full Text Available Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative

  11. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models.

    Science.gov (United States)

    Whittington, Jesse; Sawaya, Michael A

    2015-01-01

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth

  12. Parameter estimation method for improper fractional models and its application to molecular biological systems.

    Science.gov (United States)

    Tian, Li-Ping; Liu, Lizhi; Wu, Fang-Xiang

    2010-01-01

    Derived from biochemical principles, molecular biological systems can be described by a group of differential equations. Generally, these differential equations contain fractional functions plus polynomials (which we call an improper fractional model) as reaction rates. As a result, molecular biological systems are nonlinear in both parameters and states. It is well known that estimating parameters that enter a model nonlinearly is challenging. However, in fractional functions both the denominator and numerator are linear in the parameters, and polynomials are likewise linear in the parameters. Based on this observation, we develop an iterative linear least squares method for estimating parameters in biological systems modeled by improper fractional functions. The basic idea is to transform the optimization of a nonlinear least squares objective function into the iterative solution of a sequence of linear least squares problems. The developed method is applied to the estimation of parameters in a metabolism system. The simulation results show the superior performance of the proposed method for estimating parameters in such molecular biological systems.
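    A minimal sketch of the principle, for a single fractional rate y = (a0 + a1*x) / (1 + b1*x): multiplying through by the denominator makes the problem linear in (a0, a1, b1), and re-weighting by the current denominator estimate at each pass corrects the distorted error term. This is only an illustration of the iterative linear least squares idea on synthetic data, not the authors' exact algorithm.

```python
import numpy as np

def fit_fractional(x, y, n_iter=20):
    """Fit y ~ (a0 + a1*x) / (1 + b1*x) by iterative linear least squares.
    Multiplying through by the denominator gives y = a0 + a1*x - b1*(x*y),
    which is linear in (a0, a1, b1); weighting by the current denominator
    estimate at each iteration corrects the distortion of the error term."""
    b1 = 0.0
    for _ in range(n_iter):
        w = 1.0 / (1.0 + b1 * x)                      # current denominator weights
        A = np.column_stack([np.ones_like(x), x, -x * y]) * w[:, None]
        theta, *_ = np.linalg.lstsq(A, y * w, rcond=None)
        a0, a1, b1 = theta
    return a0, a1, b1

# Synthetic data generated from known parameters to check the recovery.
rng = np.random.default_rng(0)
x = np.linspace(0.1, 5.0, 50)
y = (2.0 + 0.5 * x) / (1.0 + 0.8 * x) + rng.normal(0, 0.01, x.size)
print(np.round(fit_fractional(x, y), 3))   # roughly (2.0, 0.5, 0.8)
```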

  13. IDENTIFYING THE PARAMETERS OF THE MATHEMATICAL EXPENDITURE SYSTEM MODEL

    Directory of Open Access Journals (Sweden)

    ANA-PETRINA PĂUN

    2013-10-01

    Full Text Available This chapter describes an optimal regulation model for the public expenditures system in Romania. The aim of this work is to design an optimal control system for public expenditures in Romania. It contains an offline identification of the total public expenditures system in Romania over a timespan of 15 years. The total public expenditures system is a MISO type one (Multiple Input – Single Output) and is identified using the least squares method applied to an OE (Output Error) type model.
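    The abstract reports least-squares identification of a MISO Output Error model. A full OE fit requires iterative prediction-error minimisation, but the flavour of least-squares identification of a MISO structure can be shown with a simpler first-order ARX sketch on synthetic data (the inputs, coefficients, and function name below are all hypothetical, not taken from the paper):

```python
import numpy as np

def fit_miso_arx(y, U):
    """Least-squares identification of a first-order MISO ARX model
    y[k] = a*y[k-1] + b_1*u_1[k-1] + ... + b_m*u_m[k-1] + e[k].
    y is the output series, U an (N, m) array whose columns are the m
    input series. Returns the estimate of [a, b_1, ..., b_m]."""
    phi = np.column_stack([y[:-1], U[:-1, :]])        # regressor matrix
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    return theta

# Synthetic check with two inputs and known coefficients.
rng = np.random.default_rng(1)
U = rng.normal(size=(200, 2))
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.7 * y[k - 1] + 1.5 * U[k - 1, 0] - 0.4 * U[k - 1, 1]
print(np.round(fit_miso_arx(y, U), 3))   # roughly [0.7, 1.5, -0.4]
```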

  14. Effects of model schematisation, geometry and parameter values on urban flood modelling.

    Science.gov (United States)

    Vojinovic, Z; Seyoum, S D; Mwalwaka, J M; Price, R K

    2011-01-01

    One-dimensional (1D) hydrodynamic models have been used as a standard industry practice for urban flood modelling work for many years. More recently, however, model formulations have included a 1D representation of the main channels and a 2D representation of the floodplains. Since the physical process of flow exchange with the floodplains can be represented in different ways, the predictive capability of different modelling approaches can also vary. The present paper explores the effects of some of the issues that concern urban flood modelling work. Impacts from applying different model schematisations, geometry and parameter values were investigated. The study has mainly focussed on exploring how different Digital Terrain Model (DTM) resolutions, the presence of features such as roads and building structures on the DTM, and different friction coefficients affect the simulation results. Practical implications of these issues are analysed and illustrated in a case study from St Maarten, N.A. The results from this study aim to provide users of numerical models with information that can be used in the analyses of flooding processes in urban areas.
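    The friction coefficient in such 1D/2D flood models is typically a Manning roughness value. The paper does not state its exact friction formulation, but Manning's equation illustrates how strongly this single parameter alters computed velocities; the roughness values and flow depth below are illustrative, not from the St Maarten case study.

```python
import numpy as np

def manning_velocity(depth, slope, n):
    """Manning's equation for flow velocity, v = (1/n) * R^(2/3) * S^(1/2),
    with the hydraulic radius R approximated by the flow depth (wide,
    shallow overland flow). Units: metres and seconds; n is Manning's
    roughness coefficient."""
    return (1.0 / n) * depth ** (2.0 / 3.0) * np.sqrt(slope)

# The same 10 cm deep flow on a 0.5% slope under two roughness assumptions:
# a smooth paved surface versus a rougher vegetated area (illustrative n).
for label, n in [("paved surface", 0.015), ("vegetated area", 0.10)]:
    print(label, round(manning_velocity(0.10, 0.005, n), 3), "m/s")
```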

  15. Water quality modelling for ephemeral rivers: Model development and parameter assessment

    Science.gov (United States)

    Mannina, Giorgio; Viviani, Gaspare

    2010-11-01

    River water quality models can be valuable tools for the assessment and management of receiving water body quality. However, such water quality models require accurate model calibration in order to specify model parameters. Reliable model calibration requires an extensive array of water quality data that are generally scarce and costly to collect, both economically and in terms of human resources. In the case of small rivers, such data are scarce because these rivers are generally considered too insignificant, from a practical and economic viewpoint, to justify the investment of such considerable time and resources. As a consequence, the literature contains very few studies on water quality modelling for small rivers, and such studies as have been published are fairly limited in scope. In this paper, a simplified river water quality model is presented. The model is an extension of the Streeter-Phelps model and takes into account the physico-chemical and biological processes most relevant to modelling the quality of receiving water bodies (i.e., degradation of dissolved carbonaceous substances, ammonium oxidation, algal uptake and denitrification, dissolved oxygen balance, including depletion by degradation processes and supply by physical reaeration and photosynthetic production). The model has been applied to an Italian case study, the Oreto river (IT), which has been the object of an Italian research project aimed at assessing the river's water quality. For this reason, several monitoring campaigns have been previously carried out in order to collect water quantity and quality data on this river system. In particular, twelve river cross sections were monitored, and both flow and water quality data were collected for each cross section. The results of the calibrated model show satisfactory agreement with the measured data, and they reveal important differences between the parameters used to model small rivers as compared to
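    For orientation, the classic Streeter-Phelps model that the paper extends couples first-order BOD decay with oxygen-deficit dynamics, dL/dt = -kd*L and dD/dt = kd*L - ka*D, and admits a closed-form solution for the deficit. The sketch below evaluates that base solution with illustrative rate constants; the paper's extended model adds further processes (nitrification, algal uptake, denitrification) that are not shown here.

```python
import numpy as np

def streeter_phelps_deficit(t, L0, D0, kd, ka):
    """Classic Streeter-Phelps dissolved-oxygen deficit downstream of a
    carbonaceous load: dL/dt = -kd*L, dD/dt = kd*L - ka*D, with the
    well-known closed-form solution (valid for kd != ka).
    Rates kd, ka in 1/day, travel time t in days; values illustrative."""
    return ((kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t))
            + D0 * np.exp(-ka * t))

t = np.linspace(0, 10, 11)                       # travel time in days
D = streeter_phelps_deficit(t, L0=10.0, D0=1.0, kd=0.3, ka=0.6)
print(np.round(D, 2))  # the deficit rises to a sag point and then recovers
```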

  16. Varying parameter models to accommodate dynamic promotion effects

    NARCIS (Netherlands)

    Foekens, E.W.; Leeflang, P.S.H.; Wittink, D.R.

    1999-01-01

    The purpose of this paper is to examine the dynamic effects of sales promotions. We create dynamic brand sales models (for weekly store-level scanner data) by relating store intercepts and a brand's own price elasticity to a measure of the cumulated previous price discounts - amount and time - for t

  17. Dynamics of 'abc' and 'qd' constant parameters induction generator model

    DEFF Research Database (Denmark)

    Fajardo-R, L.A.; Medina, A.; Iov, F.

    2009-01-01

    In this paper, the effects of parameter sensitivity on the dynamics of the induction generator in the presence of local perturbations are investigated. The study is conducted on a 3x2 MW wind park using the abc, qd0 and reduced-order qd induction generator models, respectively, with fluxes as state v...

  18. Parameter estimation and uncertainty assessment in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena

    Rational and effective water resources management presupposes insight into and understanding of the hydrological processes, as well as accurate assessments of the water quantities available in both surface water and groundwater reservoirs. For this purpose, hydrological models are an indispensable tool. In the last 10-20 years there has been extensive research into hydrological processes and, in particular, into the implementation of this knowledge in numerical modelling systems. This has led to models of increasing complexity. At the same time, a range of different techniques for estimating model parameters and for assessing the uncertainty of model predictions ... to this have been the long computation times and extensive data requirements that characterise this type of model and that pose a major problem for recursive application of the models. In addition, the complex models are usually not freely available in the same way as the simple rainfall...

  19. An approach to measure parameter sensitivity in watershed hydrological modelling

    Science.gov (United States)

    Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the...

  20. Continuum model for masonry: Parameter estimation and validation

    NARCIS (Netherlands)

    Lourenço, P.B.; Rots, J.G.; Blaauwendraad, J.

    1998-01-01

    A novel yield criterion that includes different strengths along each material axis is presented. The criterion includes two different fracture energies in tension and two different fracture energies in compression. The ability of the model to represent the inelastic behavior of orthotropic materials