Relevant parameters in models of cell division control
Grilli, Jacopo; Osella, Matteo; Kennard, Andrew S.; Lagomarsino, Marco Cosentino
2017-03-01
A recent burst of dynamic single-cell data makes it possible to characterize the stochastic dynamics of cell division control in bacteria. Different models were used to propose specific mechanisms, but the links between them are poorly explored. The lack of comparative studies makes it difficult to appreciate how well any particular mechanism is supported by the data. Here, we describe a simple and generic framework in which two common formalisms can be used interchangeably: (i) a continuous-time division process described by a hazard function and (ii) a discrete-time equation describing cell size across generations (where the unit of time is a cell cycle). In our framework, this second process is a discrete-time Langevin equation with simple physical analogues. By perturbative expansion around the mean initial size (or interdivision time), we show how this framework describes a wide range of division control mechanisms, including combinations of time and size control, as well as the constant added size mechanism recently found to capture several aspects of the cell division behavior of different bacteria. As we show by analytical estimates and numerical simulations, the available data are described precisely by the first-order approximation of this expansion, i.e., by a "linear response" regime for the correction of size fluctuations. Hence, a single dimensionless parameter defines the strength and action of the division control against cell-to-cell variability (quantified by a single "noise" parameter). However, the same strength of linear response may emerge from several mechanisms, which are distinguished only by higher-order terms in the perturbative expansion. Our analytical estimate of the sample size needed to distinguish between second-order effects shows that this value is close to but larger than the values of the current datasets. These results provide a unified framework for future studies and clarify the relevant parameters at play in the control of
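The discrete-time Langevin picture above reduces, at first order, to a linear autoregressive map for size at birth. As an illustrative sketch (not the authors' code; the control parameter `a`, mean size and noise level are hypothetical placeholders), one can iterate the map and recover the predicted stationary variance:

```python
import random

def simulate_birth_sizes(a, mean_size=1.0, noise=0.05, n=20000, seed=1):
    """Iterate the linearized generation map for cell size at birth,
        s_{k+1} = a * s_k + (1 - a) * mean_size + eta_k,
    where eta_k is Gaussian noise.  a = 1 is an uncontrolled 'timer',
    a = 1/2 the 'adder' (constant added size), a = 0 a strict 'sizer'."""
    rng = random.Random(seed)
    s, sizes = mean_size, []
    for _ in range(n):
        s = a * s + (1 - a) * mean_size + rng.gauss(0.0, noise)
        sizes.append(s)
    return sizes

# For |a| < 1 the stationary variance is noise**2 / (1 - a**2):
# the single control parameter sets how size fluctuations are damped.
```

For the adder value a = 1/2, the simulated variance of birth sizes approaches 0.05**2 / (1 - 0.25), matching the linear-response prediction stated above.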
Relevant parameters in models of cell division control
Grilli, Jacopo; Kennard, Andrew S; Lagomarsino, Marco Cosentino
2016-01-01
A recent burst of dynamic single-cell growth-division data makes it possible to characterize the stochastic dynamics of cell division control in bacteria. Different modeling frameworks were used to infer specific mechanisms from such data, but the links between frameworks are poorly explored, with relevant consequences for how well any particular mechanism can be supported by the data. Here, we describe a simple and generic framework in which two common formalisms can be used interchangeably: (i) a continuous-time division process described by a hazard function and (ii) a discrete-time equation describing cell size across generations (where the unit of time is a cell cycle). In our framework, this second process is a discrete-time Langevin equation with a simple physical analogue. By perturbative expansion around the mean initial size (or inter-division time), we show explicitly how this framework describes a wide range of division control mechanisms, including combinations of time and size control, as well a...
Boscá, A., E-mail: alberto.bosca@upm.es [Instituto de Sistemas Optoelectrónicos y Microtecnología, Universidad Politécnica de Madrid, Madrid 28040 (Spain); Dpto. de Ingeniería Electrónica, E.T.S.I. de Telecomunicación, Universidad Politécnica de Madrid, Madrid 28040 (Spain); Pedrós, J. [Instituto de Sistemas Optoelectrónicos y Microtecnología, Universidad Politécnica de Madrid, Madrid 28040 (Spain); Campus de Excelencia Internacional, Campus Moncloa UCM-UPM, Madrid 28040 (Spain); Martínez, J. [Instituto de Sistemas Optoelectrónicos y Microtecnología, Universidad Politécnica de Madrid, Madrid 28040 (Spain); Dpto. de Ciencia de Materiales, E.T.S.I de Caminos, Canales y Puertos, Universidad Politécnica de Madrid, Madrid 28040 (Spain); Calle, F. [Instituto de Sistemas Optoelectrónicos y Microtecnología, Universidad Politécnica de Madrid, Madrid 28040 (Spain); Dpto. de Ingeniería Electrónica, E.T.S.I. de Telecomunicación, Universidad Politécnica de Madrid, Madrid 28040 (Spain); Campus de Excelencia Internacional, Campus Moncloa UCM-UPM, Madrid 28040 (Spain)
2015-01-28
Due to its intrinsic high mobility, graphene has proved to be a suitable material for high-speed electronics, where the graphene field-effect transistor (GFET) has shown excellent properties. In this work, we present a method for extracting relevant electrical parameters from GFET devices using a simple electrical characterization and a model fitting. With experimental data from the device output characteristics, the method makes it possible to calculate parameters such as the mobility, the contact resistance, and the fixed charge. Differentiated electron and hole mobilities and a direct connection with intrinsic material properties are some of the key aspects of this method. Moreover, the method's output values can be correlated with several issues during key fabrication steps, such as the graphene growth and transfer, the lithographic steps, or the metalization processes, providing a flexible tool for quality control in GFET fabrication, as well as valuable feedback for improving the material-growth process.
Paja, Wiesław; Wrzesien, Mariusz; Niemiec, Rafał; Rudnicki, Witold R.
2016-03-01
Climate models are extremely complex pieces of software. They reflect the best knowledge on the physical components of the climate; nevertheless, they contain several parameters which are too weakly constrained by observations and can potentially lead to a simulation crashing. Recently, a study by Lucas et al. (2013) showed that machine learning methods can be used for predicting which combinations of parameters can lead to the simulation crashing, and hence which processes described by these parameters need refined analyses. In the current study we reanalyse the data set used in this research using a different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for prediction of the crash are indeed strongly relevant, three others are relevant but redundant, and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and on the relative importance of variables; hence only a cross-validated approach can deliver a robust prediction of performance and relevance of variables.
Claiborne, H.C.; Croff, A.G.; Griess, J.C.; Smith, F.J.
1987-09-01
This document provides specifications for models/methodologies that could be employed in determining postclosure repository environmental parameters relevant to the performance of high-level waste packages for the Basalt Waste Isolation Project (BWIP) at Richland, Washington, the tuff at Yucca Mountain at the Nevada Test Site, and the bedded salt in Deaf Smith County, Texas. Guidance is provided on the identity of the relevant repository environmental parameters, the models/methodologies employed to determine the parameters, and the input data base for the models/methodologies. Supporting studies included are an analysis of potential waste package failure modes leading to identification of the relevant repository environmental parameters, an evaluation of the credible range of the repository environmental parameters, and a summary of the review of existing models/methodologies currently employed in determining repository environmental parameters relevant to waste package performance. 327 refs., 26 figs., 19 tabs.
Zhu, Xiaowei; Iungo, G. Valerio; Leonardi, Stefano; Anderson, William
2017-02-01
For a horizontally homogeneous, neutrally stratified atmospheric boundary layer (ABL), aerodynamic roughness length, z_0, is the effective elevation at which the streamwise component of mean velocity is zero. A priori prediction of z_0 based on topographic attributes remains an open line of inquiry in planetary boundary-layer research. Urban topographies - the topic of this study - exhibit spatial heterogeneities associated with variability of building height, width, and proximity with adjacent buildings; such variability renders a priori, prognostic z_0 models appealing. Here, large-eddy simulation (LES) has been used in an extensive parametric study to characterize the ABL response (and z_0) to a range of synthetic, urban-like topographies wherein statistical moments of the topography have been systematically varied. Using LES results, we determined the hierarchical influence of topographic moments relevant to setting z_0. We demonstrate that standard deviation and skewness are important, while kurtosis is negligible. This finding is reconciled with a model recently proposed by Flack and Schultz (J Fluids Eng 132:041203-1-041203-10, 2010), who demonstrate that z_0 can be modelled with standard deviation and skewness, and two empirical coefficients (one for each moment). We find that the empirical coefficient related to skewness is not constant, but exhibits a dependence on standard deviation over certain ranges. For idealized, quasi-uniform cubic topographies and for complex, fully random urban-like topographies, we demonstrate strong performance of the generalized Flack and Schultz model against contemporary roughness correlations.
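A Flack and Schultz-type correlation, as discussed above, predicts z_0 from the standard deviation and skewness of the surface heights with two empirical coefficients. A minimal sketch (the default coefficient values are placeholders, not the fitted ones from the paper):

```python
def roughness_length(heights, alpha=1.0, beta=1.0):
    """Flack & Schultz-type roughness model: z0 = alpha * sigma * (1 + Sk)**beta,
    where sigma is the standard deviation and Sk the skewness of the surface
    height distribution; alpha and beta are empirical coefficients."""
    n = len(heights)
    mean = sum(heights) / n
    sigma = (sum((h - mean) ** 2 for h in heights) / n) ** 0.5
    skew = sum((h - mean) ** 3 for h in heights) / (n * sigma ** 3)
    return alpha * sigma * (1.0 + skew) ** beta
```

The LES finding that the skewness coefficient is not constant would correspond to letting beta (or alpha) depend on sigma over certain ranges, rather than keeping both fixed.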
Verhave, P.S.; Jongsma, M.J.; Berg, R.M. van den; Vanwersch, R.A.P.; Smit, A.B.; Philippens, I.H.C.H.M.
2012-01-01
The present study evaluates neuroprotection in a marmoset MPTP (1-methyl-1,2,3,6-tetrahydropyridine) model representing early Parkinson's disease (PD). The anti-glutamatergic compound riluzole is used as a model compound for neuroprotection. The compound is one of the few protective compounds used i
Lei Hu
2015-01-01
Rotational speed and load usually change when rotating machinery works. Both this kind of changing operational condition and a machine fault can make the mechanical vibration characteristics change. Therefore, an effective health monitoring method for rotating machinery must be able to adjust during changes of operational conditions. This paper presents an adaptive threshold model for the health monitoring of bearings under changing operational conditions. Relevance vector machines (RVMs) are used for regression of the relationships between the adaptive parameters of the threshold model and the statistical characteristics of vibration features. The adaptive threshold model is constructed based on these relationships. The health status of bearings can be indicated by detecting whether vibration features exceed the adaptive threshold. This method is validated on bearings running at changing speeds. The monitoring results show that this method is effective as long as the rotational speed is higher than a relatively small value.
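As a sketch of the adaptive-threshold idea (with ordinary sample statistics standing in for the RVM regression, which in the paper maps operating conditions to threshold parameters; the margin `k` is a hypothetical choice):

```python
def adaptive_threshold(history, k=3.0):
    """Threshold on a vibration feature, adapted to the recent operating
    condition: mu + k * sigma over the feature's recent history."""
    n = len(history)
    mu = sum(history) / n
    sigma = (sum((x - mu) ** 2 for x in history) / n) ** 0.5
    return mu + k * sigma

def is_anomalous(feature_value, history, k=3.0):
    """Flag a possible bearing fault when the feature exceeds the threshold."""
    return feature_value > adaptive_threshold(history, k)
```

Because the threshold is recomputed from the recent history, it tracks changes in speed and load instead of using one fixed alarm level.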
Verma, Dinkar, E-mail: dinkar@iitk.ac.in [Nuclear Engineering and Technology Program, Indian Institute of Technology Kanpur, Kanpur 208 016 (India); Kalra, Manjeet Singh, E-mail: drmanjeet.singh@dituniversity.edu.in [DIT University, Dehradun 248 009 (India); Wahi, Pankaj, E-mail: wahi@iitk.ac.in [Department of Mechanical Engineering, Indian Institute of Technology Kanpur, Kanpur 208 016 (India)
2017-04-15
Highlights: • A simplified model with nonlinear void reactivity feedback is studied. • Method of multiple scales for nonlinear analysis and oscillation characteristics. • Second order void reactivity dominates in determining system dynamics. • Opposing signs of linear and quadratic void reactivity enhance global safety. - Abstract: In the present work, the effect of nonlinear void reactivity on the dynamics of a simplified lumped-parameter model for a boiling water reactor (BWR) is investigated. A mathematical model of five differential equations comprising neutronics and thermal-hydraulics, encompassing the nonlinearities associated with both the reactivity feedbacks and the heat transfer process, has been used. To this end, we have considered parameters relevant to the RBMK, for which the void reactivity is known to be nonlinear. A nonlinear analysis of the model exploiting the method of multiple time scales (MMTS) predicts the occurrence of two types of Hopf bifurcation, namely subcritical and supercritical, leading to the evolution of limit cycles for a range of parameters. Numerical simulations have been performed to verify the analytical results obtained by MMTS. The study shows that the nonlinear reactivity has a significant influence on the system dynamics. A parametric study with varying nominal reactor power and operating conditions in the coolant channel has also been performed, which shows the effect of changes in the concerned parameters on the boundary between the regions of sub- and supercritical Hopf bifurcations in the space constituted by the two reactivity coefficients, viz. the void and Doppler coefficients. In particular, we find that introduction of a negative quadratic term in the void reactivity feedback significantly increases the supercritical region and dominates in determining the system dynamics.
Ibsen, Lars Bo; Liingaard, M.
2006-12-15
A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. The following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)
Doummar, Joanna; Sauter, Martin; Geyer, Tobias
2012-03-01
In a complex environment such as a karst system, it is difficult to assess the relative contribution of the different components of the system to the hydrological system response, i.e. spring discharge. Not only is the saturated zone highly heterogeneous due to the presence of highly permeable conduits, but so are the recharge processes. The latter are composed of rapid recharge components through shafts and solution channels and diffuse matrix infiltration, generating a highly complex, spatially and temporally variable input signal. The presented study reveals the importance of the compartments vegetation, soils, saturated zone and unsaturated zone. Therefore, the entire water cycle in the catchment area of the Gallusquelle spring (Southwest Germany) is modelled over a period of 10 years using the integrated hydrological modelling system Mike She by DHI (2007). Sensitivity analyses show that a few individual parameters, varied within physically plausible ranges, play an important role in reshaping the recessions and peaks of the recharge functions and consequently the spring discharge. Vegetation parameters, especially the leaf area index (LAI) and the root depth, as well as empirical parameters in the relationship of Kristensen and Jensen, highly influence evapotranspiration, transpiration-to-evaporation ratios and recharge, respectively. In the unsaturated zone, the type of soil (mainly the hydraulic conductivity at saturation in the water retention and hydraulic conductivity curves) has an effect on the infiltration/evapotranspiration and recharge functions. Additionally, in the unsaturated karst, the saturated moisture content is considered a highly indicative parameter, as it significantly affects the peaks and recessions of the recharge curve. At the level of the saturated zone, the hydraulic conductivities of the matrix and of the highly conductive zone representing the conduit are dominant parameters influencing the spring response. Other intermediate significant...
Ibsen, Lars Bo; Liingaard, Morten
A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. The lumped-parameter model development has been reported by (Wolf 1991b; Wolf 1991a; Wolf and Paronesso 1991; Wolf and Paronesso 19...
Dynamics of soil parameters relevant for humanitarian demining
Obhodas, Jasmina [Institute Ruder Boskovic, Department of Experimental Physics, Bijenicka c. 54, P.O. Box 180, 10000 Zagreb (Croatia); Vdovic, Neda [Institute Ruder Boskovic, Department of Experimental Physics, Bijenicka c. 54, P.O. Box 180, 10000 Zagreb (Croatia); Valkovic, Vlado [Institute Ruder Boskovic, Department of Experimental Physics, Bijenicka c. 54, P.O. Box 180, 10000 Zagreb (Croatia)]. E-mail: valkovic@irb.hr
2005-12-15
In this paper we analyzed the characteristics of 6 different soils from the test field at the Ruder Boskovic Institute. Many soil properties relevant for the performance of humanitarian demining tools strongly depend on water content. This is an effort to better understand the soil moisture variability and to find soil parameters that can predict the water content regarding the weather conditions. Such knowledge will allow optimization of demining operations. To gather the main parameters, like field capacity, rate and delay of water infiltration, and soil water retention, which are all related to soil texture, daily time-series of soil moisture from August to November 2001 were analyzed.
Goal relevance as a quantitative model of human task relevance.
Tanner, James; Itti, Laurent
2017-03-01
The concept of relevance is used ubiquitously in everyday life. However, a general quantitative definition of relevance has been lacking, especially as pertains to quantifying the relevance of sensory observations to one's goals. We propose a theoretical definition for the information value of data observations with respect to a goal, which we call "goal relevance." We consider the probability distribution of an agent's subjective beliefs over how a goal can be achieved. When new data are observed, its goal relevance is measured as the Kullback-Leibler divergence between belief distributions before and after the observation. Theoretical predictions about the relevance of different obstacles in simulated environments agreed with the majority response of 38 human participants in 83.5% of trials, beating multiple machine-learning models. Our new definition of goal relevance is general, quantitative, explicit, and allows one to put a number onto the previously elusive notion of relevance of observations to a goal.
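The goal-relevance measure described above is straightforward to compute once the belief distributions are discretized. A minimal sketch (the direction of the divergence, posterior against prior, is our reading of the abstract):

```python
import math

def goal_relevance(prior, posterior):
    """Goal relevance of an observation: the Kullback-Leibler divergence
    D(posterior || prior) between the agent's belief distributions over
    ways of achieving the goal, after vs. before the observation."""
    return sum(q * math.log(q / p)
               for p, q in zip(prior, posterior) if q > 0.0)
```

An observation that leaves beliefs unchanged has zero goal relevance; the more it reshapes the belief distribution, the larger the score.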
Zhang, Jing; Wang, Chenchen; Ji, Li; Liu, Weiping
2016-05-16
According to the electrophilic theory in toxicology, many chemical carcinogens in the environment and/or their active metabolites are electrophiles that exert their effects by forming covalent bonds with nucleophilic DNA centers. The theory of hard and soft acids and bases (HSAB), which states that a toxic electrophile reacts preferentially with a biological macromolecule that has a similar hardness or softness, clarifies the underlying chemistry involved in this critical event. Epoxides are hard electrophiles that are produced endogenously by the enzymatic oxidation of parent chemicals (e.g., alkenes and PAHs). Epoxide ring opening proceeds through an SN2-type mechanism with hard nucleophile DNA sites as the major facilitators of toxic effects. Thus, the quantitative prediction of chemical reactivity would enable a predictive assessment of the molecular potential to exert electrophile-mediated toxicity. In this study, we calculated the activation energies for reactions between epoxides and the guanine N7 site for a diverse set of epoxides, including aliphatic epoxides, substituted styrene oxides, and PAH epoxides, using a state-of-the-art density functional theory (DFT) method. It is worth noting that these activation energies for diverse epoxides can be further predicted by quantum chemically calculated nucleophilic indices from HSAB theory, which is a less computationally demanding method than the exacting procedure for locating the transition state. More importantly, the good qualitative/quantitative correlations between the chemical reactivity of epoxides and their bioactivity suggest that the developed model based on HSAB theory may aid in the predictive hazard evaluation of epoxides, enabling the early identification of mutagenicity/carcinogenicity-relevant SN2 reactivity.
Measuring Biological Parameters in Rivers: Relevance of the Spatial Scale
Romani, A. M.
2009-07-01
The analysis of biological parameters in river ecosystems has traditionally been used as an indicator of water quality, with the advantage over chemical or physical analyses that it integrates punctual (localized, short-term) as well as long-term effects. However, analyses of biological parameters (such as biomass and metabolism) performed at different spatial scales (from the microbial communities to the whole river) inform about different key processes. At the finer scale, microbial interactions and the structure of the microbial community (biofilm microbial biomass, three-dimensional structure, and relevance of the polysaccharide matrix) can be detected. At the reach scale, the different stream bed substrata (sediment, rocks, and particulate organic matter accumulations) are shown to play differential and specific roles in the processing of organic and inorganic materials in the flowing water. (Author)
Dynamics of soil parameters relevant for humanitarian demining
Obhođaš, Jasmina; Vdović, Neda; Valković, Vlado
2005-12-01
In this paper we analyzed the characteristics of 6 different soils from the test field at the Ruđer Bošković Institute. Many soil properties relevant for the performance of humanitarian demining tools strongly depend on water content. This is an effort to better understand the soil moisture variability and to find soil parameters that can predict the water content regarding the weather conditions. Such knowledge will allow optimization of demining operations. To gather the main parameters, like field capacity, rate and delay of water infiltration, and soil water retention, which are all related to soil texture, daily time-series of soil moisture from August to November 2001 were analyzed.
Response model parameter linking
Barrett, Michelle Derbenwick
2015-01-01
With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of equating observed scores on different test forms. This thesis argues, however, that the use of item response models does not require
Distributed Parameter Modelling Applications
2011-01-01
Here the issue of distributed parameter models is addressed. Spatial variations as well as time are considered important. Several applications for both steady state and dynamic applications are given. These relate to the processing of oil shale, the granulation of industrial fertilizers and the d... sands processing. The fertilizer granulation model considers the dynamics of MAP-DAP (mono and diammonium phosphates) production within an industrial granulator, that involves complex crystallisation, chemical reaction and particle growth, captured through population balances. A final example considers...
Parameter Symmetry of the Interacting Boson Model
Shirokov, A M; Smirnov, Yu F; Shirokov, Andrey M.; Smirnov, Yu. F.
1998-01-01
We discuss the symmetry of the parameter space of the interacting boson model (IBM). It is shown that for any set of the IBM Hamiltonian parameters (with the only exception of the U(5) dynamical symmetry limit) one can always find another set that generates the equivalent spectrum. We discuss the origin of the symmetry and its relevance for physical applications.
Modelling Complex Relevance Spaces with Copulas
C. Eickhoff (Carsten); A.P. de Vries (Arjen)
2014-01-01
Modern relevance models consider a wide range of criteria in order to identify those documents that are expected to satisfy the user's information need. With growing dimensionality of the underlying relevance spaces the need for sophisticated score combination and estimation schemes ...
Passage relevance models for genomics search
Frieder Ophir
2009-03-01
We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.
Hydrodynamic parameters of mesh fillers relevant to miniature regenerative cryocoolers
Landrum, E. C.; Conrad, T. J.; Ghiaasiaan, S. M.; Kirkconnell, Carl S.
2010-06-01
Directional hydrodynamic parameters of two fine-mesh porous materials that are suitable for miniature regenerative cryocoolers were studied under steady and oscillating flows of helium. These materials included stacked discs of #635 stainless steel (wire diameter of 20.3 μm) and #325 phosphor bronze (wire diameter of 35.6 μm) wire mesh screens, which are among the commercially available fillers for use in small-scale regenerators and heat exchangers, respectively. Experiments were performed in test sections in which pressure variations across these fillers, in the axial and lateral (radial) directions, were measured under steady and oscillatory flows. The directional permeability and Forchheimer's inertial coefficient were then obtained by using a Computational Fluid Dynamics (CFD)-assisted method. The oscillatory flow experiments covered a frequency range of 50-200 Hz. The results confirmed the importance of anisotropy in the mesh screen fillers, and indicated differences between the directional hydrodynamic resistance parameters for steady and oscillating flow regimes.
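The two extracted parameters enter the standard Darcy-Forchheimer relation for flow through a porous filler; a sketch with illustrative values (not the measured ones from the study):

```python
def forchheimer_pressure_gradient(u, mu, rho, K, c_f):
    """Darcy-Forchheimer pressure gradient in a porous medium:
        -dP/dx = (mu / K) * u + (rho * c_f / sqrt(K)) * u**2,
    where K is the directional permeability and c_f Forchheimer's
    inertial coefficient -- the two parameters extracted in the study."""
    return (mu / K) * u + (rho * c_f / K ** 0.5) * u ** 2
```

Anisotropy means K and c_f take different values in the axial and lateral directions, which is exactly what the directional measurements quantify.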
Electroconvulsive Therapy In Neuropsychiatry : Relevance Of Seizure Parameters
Gangadhar BN
2000-01-01
Electroconvulsive therapy (ECT) is used to induce therapeutic seizures in various clinical conditions. It is specifically useful in depression, catatonia, patients with high suicidal risk, and those intolerant to drugs. Its beneficial effects surpass its side effects. Memory impairment is benign and transient. Its mechanism of action is unknown, though numerous neurotransmitters and neuroreceptors have been implicated. The standards of ECT practice are well established but still evolving in some areas, particularly in unilateral ECT. Assessment of the seizure threshold by the formula method may deliver a higher stimulus dose compared with the titration method. A cerebral seizure during the ECT procedure is necessary. Motor (cuff method) and EEG seizure monitoring are mandatory. Recent studies have shown some EEG parameters (amplitude, fractal dimension, symmetry, and postictal suppression) to be associated with therapeutic outcome. Besides seizure monitoring, measuring other physiological parameters such as heart rate (HR) and blood pressure (BP) may provide useful indicators of therapeutic response. The use of ECT in neurological conditions, as well as its application in psychiatric illnesses associated with neurological disorders, is also reviewed briefly.
Catalyst Deactivation: Control Relevance of Model Assumptions
Bernt Lie
2000-10-01
Two principles for describing catalyst deactivation are discussed, one based on the deactivation mechanism, the other based on the activity and catalyst age distribution. When the model is based upon activity decay, it is common to use a mean activity developed from the steady-state residence time distribution. We compare control-relevant properties of such an approach with those of a model based upon the deactivation mechanism. Using a continuous stirred tank reactor as an example, we show that the mechanistic approach and the population balance approach lead to identical models. However, common additional assumptions used for activity-based models lead to model properties that may deviate considerably from the correct one.
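The steady-state mean-activity construction discussed above can be made concrete. For first-order activity decay in a CSTR, averaging over the exponential residence-time (catalyst age) distribution has the closed form 1 / (1 + k_d * tau); a sketch that checks this by numerical integration (parameter values in the test are illustrative):

```python
import math

def mean_activity_cstr(k_d, tau, n=50000, t_max_factor=40.0):
    """Mean catalyst activity in a CSTR at steady state: first-order decay
    a(t) = exp(-k_d * t) averaged over the exponential age distribution
    E(t) = exp(-t / tau) / tau, integrated by the midpoint rule."""
    dt = t_max_factor * tau / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        total += math.exp(-k_d * t) * math.exp(-t / tau) / tau * dt
    return total
```

The numerical average reproduces 1 / (1 + k_d * tau), the mean activity that the residence-time-distribution approach assigns to the reactor.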
Photovoltaic module parameters acquisition model
Cibira, Gabriel; Koščová, Marcela
2014-09-01
This paper presents basic procedures for photovoltaic (PV) module parameter acquisition using MATLAB and Simulink modelling. First, a theoretical MATLAB/Simulink model is set up to calculate the I-V and P-V characteristics of a PV module based on an equivalent electrical circuit. Then, a limited I-V data string is obtained from the examined PV module using standard measurement equipment at standard irradiation and temperature conditions and stored in a MATLAB data matrix as a reference model. Next, the theoretical model is optimized to match the reference model and to learn its basic parameter relations over a sparse data matrix. Finally, PV module parameters can be acquired at different realistic irradiation and temperature conditions as well as series resistances. Besides calculating output power characteristics and efficiency for a PV module or system, the proposed model is validated by computing its statistical deviation from the reference model.
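Equivalent-circuit fits of this kind typically rest on the single-diode equation, which is implicit in the current. A sketch (the parameter values in the test are illustrative, not fitted module data; fixed-point iteration converges well below the open-circuit voltage):

```python
import math

def pv_current(v, i_ph, i_0, r_s, n_vt, iters=200):
    """Solve the implicit single-diode model for module current:
        i = i_ph - i_0 * (exp((v + i * r_s) / n_vt) - 1),
    with photocurrent i_ph, diode saturation current i_0, series
    resistance r_s and thermal-voltage factor n_vt, by fixed-point
    iteration starting from the short-circuit estimate i = i_ph."""
    i = i_ph
    for _ in range(iters):
        i = i_ph - i_0 * (math.exp((v + i * r_s) / n_vt) - 1.0)
    return i
```

Sweeping v and recording i gives the I-V curve (and V*I the P-V curve) that the fitting procedure compares against the measured data string.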
Relevant phylogenetic invariants of evolutionary models
Casanellas, Marta
2009-01-01
Recently there have been several attempts to provide a whole set of generators of the ideal of the algebraic variety associated to a phylogenetic tree evolving under an algebraic model. These algebraic varieties have been proven to be useful in phylogenetics. In this paper we prove that, for phylogenetic reconstruction purposes, it is enough to consider generators coming from the edges of the tree, the so-called edge invariants. This is the algebraic analogue of Buneman's Splits Equivalence Theorem. The interest of this result relies on its potential applications in phylogenetics for widely used evolutionary models such as the Jukes-Cantor, Kimura 2- and 3-parameter, and General Markov models.
Mode choice model parameters estimation
Strnad, Irena
2010-01-01
The present work focuses on parameter estimation for two mode choice models, the multinomial logit and the EVA 2 model, where four different modes and five different trip purposes are taken into account. A mode choice model describes the behavioural aspect of mode choice making and enables its application in a traffic model. The mode choice model includes the trip factors affecting the choice of each mode and their relative importance to the choice made. When trip factor values are known, it...
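The multinomial logit part of the abstract has a compact closed form: the choice probability of each mode follows from its systematic utility. A minimal sketch (mode names and utility values are hypothetical):

```python
import math

def mode_choice_probabilities(utilities):
    """Multinomial logit: P(mode i) = exp(V_i) / sum_j exp(V_j), where V_i
    is the systematic utility of mode i (a weighted sum of trip factors
    such as travel time and cost).  Max-subtraction avoids overflow."""
    m = max(utilities.values())
    e = {mode: math.exp(v - m) for mode, v in utilities.items()}
    z = sum(e.values())
    return {mode: x / z for mode, x in e.items()}
```

Estimating the model then amounts to choosing the utility weights so that these probabilities best reproduce the observed mode shares.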
Automatic Relevance Determination for multi-way models
Mørup, Morten; Hansen, Lars Kai
2009-01-01
Estimating the adequate number of components is an important yet difficult problem in multi-way modelling. We demonstrate how a Bayesian framework for model selection based on Automatic Relevance Determination (ARD) can be adapted to the Tucker and CP models. By assigning priors for the model parameters and learning the hyperparameters of these priors the method is able to turn off excess components and simplify the core structure at a computational cost of fitting the conventional Tucker/CP model. To investigate the impact of the choice of priors we based the ARD on both Laplace and Gaussian priors ... of components of data within the Tucker and CP structure. For the Tucker and CP model the approach performs better than heuristics such as the Bayesian Information Criterion, Akaike's Information Criterion, DIFFIT and the numerical convex hull (NumConvHull) while operating only at the cost of estimating...
Relevance of roughness parameters of surface finish in precision hard turning.
Jouini, Nabil; Revel, Philippe; Bigerelle, Maxence
2014-01-01
Precision hard turning is a process to improve the surface integrity of functional surfaces. Machining experiments are carried out on hardened AISI 52100 bearing steel under dry conditions using c-BN cutting tools. A full factorial experimental design is used to characterize the effect of the cutting parameters. As surface topography is characterized by numerous roughness parameters, their relative relevance is investigated by statistical indices of performance computed by combining analysis of variance, discriminant analysis and the bootstrap method. The analysis shows that the profile length ratio (Lr) and the roughness average (Ra) are the relevant pair of roughness parameters: together they best discriminate the effect of the cutting parameters and enable the classification of surfaces that cannot be distinguished by a single parameter. A regular surface with a low profile length ratio (Lr = 100.23%) is clearly distinguished from an irregular surface with a higher profile length ratio (Lr = 100.42%), whereas their roughness average Ra values are nearly identical.
Order Parameters of the Dilute A Models
Warnaar, S O; Seaton, K A; Nienhuis, B
1993-01-01
The free energy and local height probabilities of the dilute A models with broken $\mathbb{Z}_2$ symmetry are calculated analytically using inversion and corner transfer matrix methods. These models possess four critical branches. The first two branches provide new realisations of the unitary minimal series and the other two branches give a direct product of this series with an Ising model. We identify the integrable perturbations which move the dilute A models away from the critical limit. Generalised order parameters are defined and their critical exponents extracted. The associated conformal weights are found to occur on the diagonal of the relevant Kac table. In an appropriate regime the dilute A$_3$ model lies in the universality class of the Ising model in a magnetic field. In this case we obtain the magnetic exponent $\delta=15$ directly, without the use of scaling relations.
User's Relevance of PIR System Based on Cloud Models
KANG Hai-yan; FAN Xiao-zhong
2006-01-01
A new method to fuzzily evaluate a user's relevance on the basis of cloud models is proposed. All factors of a personalized information retrieval (PIR) system are taken into account in this method, so the method can efficiently judge multi-valued relevance, such as quite relevant, comparatively relevant, commonly relevant, basically relevant and completely non-relevant. It realizes a transformation between qualitative concepts and quantities and improves the accuracy of relevance judgements in a PIR system. Experimental data showed that the method is practical and valid: evaluation results are more accurate and better approximate the facts.
Bellmann, Susann; Carlander, David; Fasano, Alessio; Momcilovic, Dragan; Scimeca, Joseph A; Waldman, W James; Gombau, Lourdes; Tsytsikova, Lyubov; Canady, Richard; Pereira, Dora I A; Lefebvre, David E
2015-01-01
Many natural chemicals in food are in the nanometer size range, and the selective uptake of nutrients with nanoscale dimensions by the gastrointestinal (GI) tract is a normal physiological process. Novel engineered nanomaterials (NMs) can bring various benefits to food, e.g., enhancing nutrition. Assessing potential risks requires an understanding of the stability of these entities in the GI lumen, and of whether or not they can be absorbed and thus become systemically available. Data are emerging on the mammalian in vivo absorption of engineered NMs composed of chemicals with a range of properties, including metal, mineral, biochemical macromolecule, and lipid-based entities. In vitro and in silico fluid incubation data have also provided some evidence of changes in particle stability, aggregation, and surface properties following interaction with luminal factors present in the GI tract. The variables include physical forces, osmotic concentration, pH, digestive enzymes, other food and endogenous biochemicals, and commensal microbes. Further research is required to fill the remaining data gaps on the effects of these parameters on NM integrity, physicochemical properties, and GI absorption. Knowledge of the most influential luminal parameters will be essential when developing models of the GI tract to quantify the percent absorption of food-relevant engineered NMs for risk assessment. © 2015 The Authors. WIREs Nanomedicine and Nanobiotechnology published by Wiley Periodicals, Inc.
Roe, Byron
2013-01-01
The effect of correlations between model parameters and nuisance parameters is discussed, in the context of fitting model parameters to data. Modifications to the usual $\chi^2$ method are required. Fake data studies, as used at present, will not be optimum. Problems will occur for applications of the Maltoni-Schwetz \cite{ms} theorem. Neutrino oscillations are used as examples, but the problems discussed here are general ones, which are often not addressed.
Numerical modeling of partial discharges parameters
Kartalović Nenad M.
2016-01-01
The testing of partial discharges, and their use for diagnosing the insulation condition of high-voltage generators, transformers, cables and other high-voltage equipment, is developing rapidly. This is a result of developments in electronics as well as of growing knowledge about the processes underlying partial discharges. The aim of this paper is to contribute to a better understanding of the phenomenon of partial discharge by considering the relevant physical processes in insulation materials and insulation systems. Specific pre-breakdown processes, the development of discharges at the local level, and their impact on specific insulation materials are considered. This approach to the phenomenon of partial discharge makes it possible to take the relevant discharge parameters into account and to build a better numerical model of partial discharges.
Zutz, H; Hupe, O; Ambrosi, P; Klammer, J
2012-09-01
Active electronic dosemeters using counting techniques are used for radiation protection purposes in pulsed radiation fields in X-ray diagnostics or therapy. Their limited maximum measurable dose rate becomes a significant disadvantage in these radiation fields and leads to several negative effects. In this study, a set of relevant parameters for a dosemeter is described, which can be used to decide whether it is applicable in a given radiation field or not. The determination of these relevant parameters (maximum measurable dose rate in the radiation pulse, dead time of the dosemeter, indication per counting event and measurement cycle time) is specified. The results of first measurements determining these parameters for an electronic personal dosemeter of the type Thermo Fisher Scientific EPD Mk2 are shown.
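The saturation effect behind the "maximum measurable dose rate" parameter can be illustrated with the standard non-paralyzable dead-time model of a counting detector. The sketch below is generic, not the EPD Mk2 dosemeter's actual response, and the 1 µs dead time is a hypothetical value chosen for illustration.

```python
def measured_rate(true_rate, dead_time):
    """Non-paralyzable dead-time model: m = n / (1 + n * tau).

    The registered count rate m saturates at 1/tau no matter how large
    the true event rate n becomes, which is what caps the measurable
    dose rate of a counting dosemeter in an intense radiation pulse.
    """
    return true_rate / (1.0 + true_rate * dead_time)

tau = 1e-6               # hypothetical dead time: 1 microsecond
saturation = 1.0 / tau   # asymptotic maximum count rate (1e6 counts/s)
```

At low rates the measured rate tracks the true rate almost exactly; at very high pulse rates it approaches, but never reaches, the 1/tau ceiling.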
How to select the most relevant 3D roughness parameters of a surface.
Deltombe, R; Kubiak, K J; Bigerelle, M
2014-01-01
In order to conduct a comprehensive roughness analysis, around sixty 3D roughness parameters have been created to describe most aspects of surface morphology with regard to specific functions, properties or applications. In this paper, a multiscale surface topography decomposition method is proposed, with application to stainless steel (AISI 304) processed by rolling at different fabrication stages and by electrical discharge machining (EDM). Fifty-six 3D roughness parameters defined in ISO, EUR and ASME standards are calculated for the measured surfaces. The expert software "MesRug" is then employed to perform statistical analysis on the acquired data in order to find the most relevant parameters characterizing the effect of both processes (rolling and machining), and to determine the most appropriate scale of analysis. For the rolling process, the parameter Vmc (the core material volume, defined as the volume of material comprising the texture between heights corresponding to material ratio values of p = 10% and q = 80%), computed at the scale of 3 µm, is the most relevant parameter. For the EDM process, the best roughness parameter is SPD, which represents the number of peaks per unit area after segmentation of the surface into motifs, computed at the scale of 8 µm.
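A minimal sketch of the core-material-volume idea, assuming a simplified discrete version: sort the measured heights to form the bearing (Abbott-Firestone) curve, find the cutting heights at material ratios p = 10% and q = 80% from the abstract, and sum the clamped material between them. The exact ISO 25178 definition integrates the areal material ratio curve, so this is an illustration of the concept rather than a standards-compliant implementation.

```python
def material_height_at_ratio(heights, ratio):
    """Height whose material ratio equals `ratio` (0..1): the bearing
    (Abbott-Firestone) curve of the surface evaluated at `ratio`."""
    ordered = sorted(heights, reverse=True)          # highest peak first
    idx = min(int(ratio * len(ordered)), len(ordered) - 1)
    return ordered[idx]

def core_material_volume(heights, p=0.10, q=0.80):
    """Simplified Vmc: material volume (per unit area) enclosed between
    the cutting planes at material ratios p and q. `heights` are sampled
    surface heights; real areal parameters use a full 2-D height map."""
    cp = material_height_at_ratio(heights, p)
    cq = material_height_at_ratio(heights, q)
    vol = 0.0
    for z in heights:
        # material between the two planes at this sample location
        vol += max(0.0, min(z, cp) - cq)
    return vol / len(heights)

# Hypothetical sampled profile heights (arbitrary units):
profile = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
vmc = core_material_volume(profile)
```

A perfectly flat surface has zero core material volume, since both cutting planes coincide with the surface itself.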
Retrieval of Atmospheric and Oceanic Parameters and the Relevant Numerical Calculation
[Anonymous]
2006-01-01
It is well known that the retrieval of parameters is usually ill-posed and highly nonlinear, so parameter retrieval problems are very difficult. Although great success has been achieved in data assimilation in meteorology and oceanography, many important theoretical issues remain under research. This paper reviews recent research on parameter retrieval, especially that of the authors. First, some concepts and issues of parameter retrieval are introduced and the state of the art of parameter retrieval technology in meteorology and oceanography is reviewed briefly. Then atmospheric and oceanic parameters are retrieved using the variational data assimilation method combined with regularization techniques in four examples: retrieval of the vertical eddy diffusion coefficient; of the turbulent diffusivity of the atmospheric boundary layer; of wind from Doppler radar data; and of physical process parameters. Model parameter retrieval with global and local observations is also introduced.
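The combination of variational assimilation with regularization can be sketched in its simplest scalar form: minimize a least-squares misfit to observations plus a Tikhonov penalty pulling the ill-posed solution toward a background estimate. The forward operator, observations and background value below are hypothetical toy data, not from the reviewed retrievals.

```python
def regularized_retrieval(h, y, x_b, lam):
    """Minimise J(x) = sum_i (h_i*x - y_i)^2 + lam*(x - x_b)^2 for a
    single scalar parameter x. The Tikhonov term regularizes the
    ill-posed least-squares problem by pulling x toward the background
    x_b. Closed form: x = (sum h_i*y_i + lam*x_b) / (sum h_i^2 + lam)."""
    num = sum(hi * yi for hi, yi in zip(h, y)) + lam * x_b
    den = sum(hi * hi for hi in h) + lam
    return num / den

# Hypothetical noise-free observations of y = 2*h:
h = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]
x_unreg = regularized_retrieval(h, y, x_b=0.0, lam=0.0)    # plain least squares
x_reg = regularized_retrieval(h, y, x_b=1.0, lam=100.0)    # strongly regularized
```

With no regularization the fit recovers the true value exactly; with a strong penalty the estimate is drawn part of the way toward the background, trading fidelity for stability, which is the essence of regularized retrieval.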
Implementing Relevance Feedback in the Bayesian Network Retrieval Model.
de Campos, Luis M.; Fernandez-Luna, Juan M.; Huete, Juan F.
2003-01-01
Discussion of relevance feedback in information retrieval focuses on a proposal for the Bayesian Network Retrieval Model. The proposal is based on the propagation of partial evidences in the Bayesian network, representing new information obtained from the user's relevance judgments in order to compute the posterior relevance probabilities of the documents…
Effects of Glycerol and Creatine Hyperhydration on Doping-Relevant Blood Parameters
Karsten Koehler
2012-08-01
Glycerol (Gly) is prohibited as an ergogenic aid by the World Anti-Doping Agency (WADA) due to the potential for its plasma expansion properties to have masking effects. However, the scientific basis for the inclusion of Gly as a "masking agent" remains inconclusive. The purpose of this study was to determine the effects of a hyperhydrating supplement containing Gly on doping-relevant blood parameters. Nine trained males ingested a hyperhydrating mixture twice per day for 7 days containing 1.0 g·kg^{−1} body mass (BM) of Gly, 10.0 g of creatine and 75.0 g of glucose. Blood samples were collected and total hemoglobin (Hb) mass determined using the optimized carbon monoxide (CO) rebreathing method pre- and post-supplementation. BM and total body water (TBW) increased significantly following supplementation, by 1.1 ± 1.2 kg and 1.0 ± 1.2 L, respectively (BM, P < 0.01; TBW, P < 0.01). This hyperhydration did not significantly alter plasma volume or any of the doping-relevant blood parameters (e.g., hematocrit, Hb, reticulocytes and total Hb mass), even when Gly was clearly detectable in urine samples. In conclusion, this study shows that supplementation with a hyperhydrating solution containing Gly for 7 days does not significantly alter doping-relevant blood parameters.
Engel's biopsychosocial model is still relevant today.
Adler, Rolf H
2009-12-01
In 1977, Engel published the seminal paper, "The Need for a New Medical Model: A Challenge for Biomedicine" [Science 196 (1977) 129-136]. He featured a biopsychosocial (BPS) model based on systems theory and on the hierarchical organization of organisms. In this essay, the model is extended by the introduction of semiotics and constructivism. Semiotics provides the language with which to describe the relationships between an individual and its environment; constructivism explains how an organism perceives its environment. The impact of the BPS model on research, medical education, and application in the practice of medicine is discussed.
Mathematical Properties Relevant to Geomagnetic Field Modeling
Sabaka, Terence J.; Hulot, Gauthier; Olsen, Nils
2010-01-01
Geomagnetic field modeling consists in converting large numbers of magnetic observations into a linear combination of elementary mathematical functions that best describes those observations. The set of numerical coefficients defining this linear combination is then what one refers … be directly measured. In this chapter, the mathematical foundation of global (as opposed to regional) geomagnetic field modeling is reviewed, and the spatial modeling of the field in spherical coordinates is the focus. Time can be dealt with as an independent variable and is not explicitly considered … Properties of those spatial mathematical representations are also discussed, especially in view of providing a formal justification for the fact that geomagnetic field models can indeed be constructed from ground-based and satellite-borne observations, provided those reasonably approximate the ideal …
Mathematical Properties Relevant to Geomagnetic Field Modeling
Sabaka, Terence J.; Hulot, Gauthier; Olsen, Nils
2014-01-01
Geomagnetic field modeling consists in converting large numbers of magnetic observations into a linear combination of elementary mathematical functions that best describes those observations. The set of numerical coefficients defining this linear combination is then what one refers … be directly measured. In this chapter, the mathematical foundation of global (as opposed to regional) geomagnetic field modeling is reviewed, and the spatial modeling of the field in spherical coordinates is the focus. Time can be dealt with as an independent variable and is not explicitly considered … Properties of those spatial mathematical representations are also discussed, especially in view of providing a formal justification for the fact that geomagnetic field models can indeed be constructed from ground-based and satellite-borne observations, provided those reasonably approximate the ideal situation …
Spatio-temporal modeling of nonlinear distributed parameter systems
Li, Han-Xiong
2011-01-01
The purpose of this volume is to provide a brief review of previous work on model reduction and identification of distributed parameter systems (DPS), and to develop new spatio-temporal models and their relevant identification approaches. In this book, a systematic overview and classification of the modeling of DPS is presented first, which includes model reduction, parameter estimation and system identification. Next, a class of block-oriented nonlinear systems in traditional lumped parameter systems (LPS) is extended to DPS, which results in the spatio-temporal Wiener and Hammerstein systems…
Fractional Moment Bounds and Disorder Relevance for Pinning Models
Derrida, Bernard; Giacomin, Giambattista; Lacoin, Hubert; Toninelli, Fabio Lucio
2009-05-01
We study the critical point of directed pinning/wetting models with quenched disorder. The distribution K(·) of the location of the first contact of the (free) polymer with the defect line is assumed to be of the form K(n) = n^{-α-1} L(n), with α ≥ 0 and L(·) slowly varying. The model undergoes a (de)-localization phase transition: the free energy (per unit length) is zero in the delocalized phase and positive in the localized phase. For α < 1/2, disorder is known to be irrelevant; we show that if α > 1/2, then quenched and annealed critical points differ whenever disorder is present, and we give the scaling form of their difference for small disorder. In agreement with the so-called Harris criterion, disorder is therefore relevant in this case. In the marginal case α = 1/2, under the assumption that L(·) vanishes sufficiently fast at infinity, we prove that the difference between quenched and annealed critical points, which is smaller than any power of the disorder strength, is positive: disorder is marginally relevant. Again, the case considered in [12,17] is outside our analysis and remains open. The results are achieved by setting the parameters of the model so that the annealed system is localized, but close to criticality, and by first considering a quenched system of size that does not exceed the correlation length of the annealed model. In such a regime we can show that the expectation of the partition function raised to a suitably chosen power γ ∈ (0, 1) is small. We then exploit this information to prove that the expectation of the same fractional power of the partition function goes to zero with the size of the system, a fact that immediately entails that the quenched system is delocalized.
Are invertebrates relevant models in ageing research?
Erdogan, Cihan Suleyman; Hansen, Benni Winding; Vang, Ole
2016-01-01
Ageing is the organism's increased susceptibility to death, which is linked to accumulated damage in cells and tissues. Ageing is a complex process regulated by crosstalk of various pathways in the cells, and is highly regulated by the activity of the Target of Rapamycin (TOR) pathway. TOR … the molecular mechanisms underlying the ageing process faster than mammalian systems. Inhibition of TOR pathway activity, via either genetic manipulation or rapamycin, increases lifespan profoundly in most invertebrate model organisms. This contribution reviews the recent findings in invertebrates concerning the TOR pathway and the effects of TOR inhibition by rapamycin on lifespan. Despite some contradictory results, the majority point out that rapamycin induces longevity. This suggests that administration of rapamycin in invertebrates is a promising tool for pursuing the scientific puzzle of lifespan…
PARAMETER ESTIMATION OF ENGINEERING TURBULENCE MODEL
钱炜祺; 蔡金狮
2001-01-01
A parameter estimation algorithm is introduced and used to determine the parameters in the standard k-ε two-equation turbulence model (SKE). The estimation results show that although parameter estimation is an effective way to determine model parameters, it is difficult to obtain one set of parameters for the SKE that suits all kinds of separated flows, so a modification of the turbulence model structure should be considered. Therefore, a new nonlinear k-ε two-equation model (NNKE) is put forward in this paper, and the corresponding parameter estimation technique is applied to determine its parameters. Applying the NNKE to several engineering turbulent flows shows that the NNKE is more accurate and versatile than the SKE. The success of the NNKE thus implies that the parameter estimation technique has a bright prospect in engineering turbulence model research.
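The parameter-estimation idea can be sketched on a deliberately simple surrogate: fit the decay exponent of a toy power-law model of turbulent kinetic energy by minimizing the sum of squared errors over candidate values. The forward model, the exponent and the data are all hypothetical stand-ins; the paper fits the actual k-ε closure coefficients, not this toy.

```python
def k_model(t, n, k0=1.0):
    """Toy power-law decay of turbulent kinetic energy, k(t) = k0*(1+t)^-n.
    Stands in here for the forward model whose parameters are estimated."""
    return k0 * (1.0 + t) ** (-n)

def estimate_exponent(times, observed, candidates):
    """Pick the decay exponent minimising the sum of squared errors,
    the same least-squares criterion used in model-parameter estimation."""
    def sse(n):
        return sum((k_model(t, n) - k) ** 2 for t, k in zip(times, observed))
    return min(candidates, key=sse)

# Synthetic 'measurements' generated with a true exponent of 1.3:
times = [0.0, 1.0, 2.0, 4.0, 8.0]
data = [k_model(t, 1.3) for t in times]
n_hat = estimate_exponent(times, data, [round(0.1 * i, 1) for i in range(5, 26)])
```

With noise-free data the grid search recovers the generating exponent exactly; real estimation replaces the grid with a proper optimizer and the toy model with the turbulence closure.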
Pawanjeet S. Datta
2010-08-01
Topography is a crucial surface characteristic in soil erosion modeling. Soil erosion studies use a digital elevation model (DEM) to derive the topographical characteristics of a study area. Most of the time, a DEM is incorporated into erosion models as a given parameter and is not tested as extensively as the parameters related to soil, land use and climate. This study compares erosion-relevant topographical parameters (elevation, slope, aspect, LS factor) derived from three DEMs, at original and 20-m interpolated resolution, with field measurements for a 13 km² watershed located in the Indian Lesser Himalaya. The DEMs are: a TOPO DEM generated from digitized contour lines of a 1:50,000 topographical map; a Shuttle Radar Topography Mission (SRTM) DEM at 90-m resolution; and an Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) DEM at 15-m resolution. Significant differences across the DEMs were observed for all the parameters. The highest-resolution ASTER DEM was found to be the poorest of all the tested DEMs, as the topographical parameters derived from it differed significantly from those derived from the other DEMs and from field measurements. The TOPO DEM, which is theoretically more detailed, produced results similar to the coarser SRTM DEM but failed to produce an improved representation of the watershed topography. Comparison with field measurements and mixed regression modeling proved the SRTM DEM to be the most reliable among the tested DEMs for the studied watershed.
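Deriving slope from a DEM, the most basic of the parameters compared above, can be sketched with central finite differences on a raster grid. The 3×3 elevation patch and the 20 m cell size are hypothetical values for illustration; GIS packages typically use the slightly more elaborate 8-neighbour Horn method.

```python
import math

def slope_degrees(dem, row, col, cell=20.0):
    """Slope (degrees) at an interior DEM cell from central differences:
    dz/dx and dz/dy over a square grid with spacing `cell` metres."""
    dzdx = (dem[row][col + 1] - dem[row][col - 1]) / (2.0 * cell)
    dzdy = (dem[row + 1][col] - dem[row - 1][col]) / (2.0 * cell)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

# Hypothetical 3x3 elevation patch (metres) on a 20 m grid:
# a plane rising 10 m per cell towards the east.
dem = [[100.0, 110.0, 120.0],
       [100.0, 110.0, 120.0],
       [100.0, 110.0, 120.0]]
s = slope_degrees(dem, 1, 1)
```

For this eastward-tilted plane the gradient is 10 m per 20 m cell, i.e. a rise/run of 0.5, giving a slope of about 26.6 degrees; the result is sensitive to cell size, which is one reason slope estimates differ so strongly across DEM resolutions.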
Location Criteria Relevant for Sustainability of Social Housing Model
Petković-Grozdanović Nataša
2016-01-01
Social housing models, which began to develop during the last century, initially had as their only objective overcoming the housing problems of socially vulnerable categories. However, numerous studies have shown that these social categories, because of their low social status, are highly susceptible to various psychological and sociological problems. Moreover, the low level of quality common in social housing dwellings has further aggravated these problems by triggering troublesome behaviour among tenants and fostering social exclusion and segregation. Contemporary social housing models are therefore conceptualized so as to provide a positive psycho-sociological impact on their tenants. The planning approach in social housing should thus: support important functions in daily life routines; promote tolerance and cooperation; foster a sense of social order and belonging; support the socialization of tenants and their integration into the wider community; and improve social cohesion. The analysis of influential location parameters of the immediate and wider social housing environment strives to define those relevant to the quality of life of social housing tenants, which therefore influence the sustainability of the social housing model.
Exploring female mice interstrain differences relevant for models of depression
Daniela de Sá Calçada
2015-12-01
Depression is an extremely heterogeneous disorder, and diverse molecular mechanisms have been suggested to underlie its etiology. To understand the molecular mechanisms responsible for this complex disorder, researchers have been using animal models extensively, namely mice from various genetic backgrounds and harboring distinct genetic modifications. The use of numerous mouse models has contributed to enriching our knowledge of depression. However, accumulating data have also revealed that the intrinsic characteristics of each mouse strain may influence the experimental outcomes, which may explain some conflicting evidence reported in the literature. To further understand the impact of the genetic background, we performed a multimodal comparative study encompassing the most relevant parameters commonly addressed in depression in three of the most widely used mouse strains: Balb/c, C57BL/6 and CD-1. Female mice were selected for this study, taking into account the higher prevalence of depression in women and the fewer animal studies using this sex. Our results show that Balb/c mice have a more pronounced anxious-like behavior than CD-1 and C57BL/6 mice, whereas C57BL/6 animals present the strongest depressive-like trait. Furthermore, C57BL/6 mice display the highest rate of proliferating cells and the highest brain-derived neurotrophic factor expression levels in the hippocampus, while hippocampal dentate granular neurons of Balb/c mice show smaller dendritic lengths and fewer ramifications. Of note, the expression levels of inducible nitric oxide synthase (iNOS) predict 39.5% of the depressive-like behavior index, which suggests a key role of hippocampal iNOS in depression. Overall, this study reveals important interstrain differences in several behavioral dimensions and in molecular and cellular parameters that should be considered when preparing and analyzing experiments addressing depression using mouse models. It further contributes to the literature by…
Moolenaar, H.E.; Selten, F.M.
2004-01-01
Climate models contain numerous parameters for which the numeric values are uncertain. In the context of climate simulation and prediction, a relevant question is what range of climate outcomes is possible given the range of parameter uncertainties. Which parameter perturbation changes the climate i
Refinery effluent analysis methodologies for relevant parameters from EU-regulatory regimes
Westwood, D.; Ward, T. [Beta Technology Ltd, Heavens Walk, Doncaster, South Yorkshire DN4 5HZ (United Kingdom); Prescott, N.; Rippin, I. [Environment Agency, National Laboratory Service, PO Box 544 Rotherham S60 1BY (United Kingdom); Comber, M.; Den Haan, K.
2013-08-15
This report provides guidance to CONCAWE members on the analytical methods that might be used to monitor oil refinery effluents for those refinery-specific parameters covered by relevant European legislation, and a comparison of the methods that are used today, as reported in the last Effluent Survey. A method assessment programme is presented whereby the performance of the methods of analysis used to monitor oil refinery effluents can be compared and prioritised in order of their analytical performance capabilities. Methods for a specific parameter, which is clearly and unambiguously defined, are compared with each other and then prioritised in terms of their overall quality. The quality of these methods is based on an assessment of a combination of characteristic features, namely precision, bias or recovery, limit of detection (where appropriate), indicative costs, and ease of use. Ranking scores are assigned to various ranges of each feature and then added together to give an overall ranking value; the method exhibiting the lowest overall ranking value is deemed the most appropriate method for analysing that parameter. Within this report, several recommendations are made in terms of comparing results of analyses or their associated uses. Where data are to be compared for a particular parameter, all CONCAWE members involved in the comparison should agree on common objectives in advance. These include a common definition of: (1) the parameter being analysed and compared; (2) the limit of detection, and how this concentration value should be calculated; (3) the limit of quantification, how this concentration value should be calculated, and how it is to be applied for selective reporting purposes; and (4) the uncertainty of measurement and how it should be calculated. It is further recommended that the members involved agree on the range of values and ranking scores chosen to reflect the performance characteristic features used in the…
A Compositional Relevance Model for Adaptive Information Retrieval
Mathe, Nathalie; Chen, James; Lu, Henry, Jr. (Technical Monitor)
1994-01-01
There is a growing need for rapid and effective access to information in large electronic documentation systems. Access can be facilitated if information relevant in the current problem-solving context can be automatically supplied to the user. This includes information relevant to particular user profiles, tasks being performed, and problems being solved. However, most of this knowledge on contextual relevance is not found within the contents of documents, and current hypermedia tools do not provide any easy mechanism for users to add this knowledge to their documents. We propose a compositional relevance network to automatically acquire the context in which previous information was found relevant. The model records information on the relevance of references based on user feedback for specific queries and contexts. It also generalizes such information to derive relevant references for similar queries and contexts. This model lets users filter information by context of relevance, build personalized views of documents over time, and share their views with other users. It also applies to any type of multimedia information. Compared to other approaches, it is less costly and requires neither a priori statistical computation nor an extended training period. It is currently being implemented in the Computer Integrated Documentation system, which enables the integration of various technical documents in a hypertext framework.
Tiwari, Harinarayan; Sharma, Nayan
2017-05-01
This research paper focuses on the need for turbulence research, on instruments reliable enough to capture turbulence, on different turbulence parameters, and on advanced methodology that can decompose turbulence structures at different levels near hydraulic structures. Small-scale turbulence research has valid prospects in open channel flow. The relevance of the study is amplified because any hydraulic structure introduced in the channel disturbs the natural flow and creates a discontinuity. To recover from this discontinuity, the piano key weir (PKW) might be used with sloped keys. The constraints of empirical results in the vicinity of the PKW necessitate extensive laboratory experiments with fair and reliable instrumentation techniques. Using principal component analysis, the acoustic Doppler velocimeter was established to be best suited, within a range of limitations. Wavelet analysis is proposed to better decompose the underlying turbulence structure.
Tiwari, Harinarayan; Sharma, Nayan
2015-03-01
This research paper focuses on the need for turbulence research, on instruments reliable enough to capture turbulence, on different turbulence parameters, and on advanced methodology that can decompose turbulence structures at different levels near hydraulic structures. Small-scale turbulence research has valid prospects in open channel flow. The relevance of the study is amplified because any hydraulic structure introduced in the channel disturbs the natural flow and creates a discontinuity. To recover from this discontinuity, the piano key weir (PKW) might be used with sloped keys. The constraints of empirical results in the vicinity of the PKW necessitate extensive laboratory experiments with fair and reliable instrumentation techniques. Using principal component analysis, the acoustic Doppler velocimeter was established to be best suited, within a range of limitations. Wavelet analysis is proposed to better decompose the underlying turbulence structure.
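The wavelet decomposition proposed above can be illustrated with a single level of the Haar transform, the simplest wavelet: it splits a velocity record into coarse-scale averages and fine-scale fluctuation details. The velocity samples are hypothetical, and a real analysis would use a richer wavelet family over several levels.

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform: split a
    record into large-scale approximations and small-scale details."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2.0))   # coarse structure
        detail.append((a - b) / math.sqrt(2.0))   # fine-scale fluctuation
    return approx, detail

# Hypothetical streamwise velocity samples near the weir (m/s):
u = [1.0, 1.2, 0.9, 1.1, 1.3, 0.7, 1.0, 1.0]
approx, detail = haar_step(u)
```

Because the transform is orthonormal, the signal energy is exactly preserved across the two bands, which is what allows turbulent kinetic energy to be attributed scale by scale.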
Dependence of interface conductivity on relevant physical parameters in polarized Fermi mixtures
Ebrahimian, N., E-mail: n.ebrahimian@aut.ac.ir [Physics Department, Amirkabir University of Technology, Tehran 15914 (Iran, Islamic Republic of); Mehrafarin, M., E-mail: mehrafar@aut.ac.ir [Physics Department, Amirkabir University of Technology, Tehran 15914 (Iran, Islamic Republic of); Afzali, R., E-mail: afzali@kntu.ac.ir [Physics Department, KN Toosi University of Technology, Tehran 15418 (Iran, Islamic Republic of)
2012-10-15
We consider a mass-asymmetric polarized Fermi system in the presence of Hartree-Fock (HF) potentials. We concentrate on the BCS regime with various interaction strengths and numerically obtain the allowed values of the chemical and HF potentials, as well as the mass ratio. The functional dependence of the heat conductivity of the N-SF interface on relevant physical parameters, namely the temperature, the mass ratio, and the interaction strength, is obtained. In particular, we show that the interface conductivity starts to drop with decreasing temperature at the temperature, Tm, where the mean kinetic energy of the particles is just sufficient to overcome the SF gap. We obtain Tm as a function of the mass ratio and the interaction strength. The variation of the heat conductivity, at fixed temperature, with the HF potentials and the imbalance chemical potential is also obtained. Finally, because the range of relevant temperatures increases for larger values of the mass ratio, we consider the 6Li-40K mixture separately by taking the temperature dependence of the pair potential into account.
Dependence of interface conductivity on relevant physical parameters in polarized Fermi mixtures
Ebrahimian, N.; Mehrafarin, M.; Afzali, R.
2012-10-01
We consider a mass-asymmetric polarized Fermi system in the presence of Hartree-Fock (HF) potentials. We concentrate on the BCS regime with various interaction strengths and numerically obtain the allowed values of the chemical and HF potentials, as well as the mass ratio. The functional dependence of the heat conductivity of the N-SF interface on relevant physical parameters, namely the temperature, the mass ratio, and the interaction strength, is obtained. In particular, we show that the interface conductivity starts to drop with decreasing temperature at the temperature, Tm, where the mean kinetic energy of the particles is just sufficient to overcome the SF gap. We obtain Tm as a function of the mass ratio and the interaction strength. The variation of the heat conductivity, at fixed temperature, with the HF potentials and the imbalance chemical potential is also obtained. Finally, because the range of relevant temperatures increases for larger values of the mass ratio, we consider the 6Li-40K mixture separately by taking the temperature dependence of the pair potential into account.
Parameters Relevant to Bubble Detachment when Gas-injecting into Polymer Melt Flow Field
CHEN Zailiang; CAI Yebin; GUO Mingcheng; PENG Yucheng
2005-01-01
The bubble deformation processes observed when gas is injected into a polymer melt flow field were reported in an earlier paper. Those experiments showed that the deformation was strongly affected by the bubble volume, so different bubbles exhibited different deformation processes as they moved along the flow channel. They also showed that the bubble volume depended on the difference between the gas injection pressure and the melt pressure. In this paper, a wider range of experimental conditions was used to investigate the parameters relevant to the detachment of bubbles from the injection nozzle. The experimental results show that the pressure difference, the melt flow velocity, and the melt pressure are all critical to parameters such as the bubble detachment time, the maximum bubble diameter, and the bubble volume. The bubble morphology changed strongly when the flow field was changed abruptly, and these situations were more complicated.
Robust estimation of hydrological model parameters
A. Bárdossy
2008-11-01
The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but none of them yields a unique best parameter vector. The parameters of fitted hydrological models depend upon the input data, whose quality cannot be assured: there may be measurement errors in both input and state variables. In this study a methodology was developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. The model was calibrated with this modified data and the effect of measurement errors on parameters was analysed. It was found that measurement errors have a significant effect on the best-performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of a set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (Nash-Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of the parameter vectors, a set of robust parameter vectors can be found. The results show that the parameters chosen according to this criterion have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany, using the conceptual HBV model.
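The half-space depth at the core of this approach can be approximated numerically. The sketch below is a minimal Monte Carlo approximation, not the study's implementation; the Gaussian point cloud merely stands in for sampled parameter vectors. It illustrates why a central parameter vector receives a high depth while an outlying one gets a depth near zero:

```python
import numpy as np

def halfspace_depth(x, points, n_dirs=500, seed=0):
    """Approximate Tukey half-space depth of x within a point cloud.

    The depth is the minimum, over all directions u, of the fraction of
    points in the closed half-space {p : u.(p - x) >= 0}; here the minimum
    is taken over a finite sample of random unit directions.
    """
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_dirs, points.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    proj = (points - x) @ u.T           # shape (n_points, n_dirs)
    frac = (proj >= 0).mean(axis=0)     # fraction on one side, per direction
    return frac.min()

rng = np.random.default_rng(1)
cloud = rng.normal(size=(400, 2))       # stand-in for sampled parameter vectors
center = np.zeros(2)                    # deep (robust) parameter vector
edge = np.array([4.0, 4.0])             # shallow (outlying) parameter vector
print(halfspace_depth(center, cloud), halfspace_depth(edge, cloud))
```

A robust parameter set is then chosen among the deepest vectors of the well-performing sample.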
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Hadiyanto Hadiyanto; AJB van Boxtel
2012-01-01
Bread product quality is highly dependent on the baking process. A model for the development of product quality, obtained from quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first, heat and mass transfer related parameters, then the parameters related to product transformations and finally pro...
Parameter counting in models with global symmetries
Berger, Joshua [Institute for High Energy Phenomenology, Newman Laboratory of Elementary Particle Physics, Cornell University, Ithaca, NY 14853 (United States)], E-mail: jb454@cornell.edu; Grossman, Yuval [Institute for High Energy Phenomenology, Newman Laboratory of Elementary Particle Physics, Cornell University, Ithaca, NY 14853 (United States)], E-mail: yuvalg@lepp.cornell.edu
2009-05-18
We present rules for determining the number of physical parameters in models with exact flavor symmetries. In such models the total number of parameters (physical and unphysical) needed to described a matrix is less than in a model without the symmetries. Several toy examples are studied in order to demonstrate the rules. The use of global symmetries in studying the minimally supersymmetric standard model (MSSM) is examined.
On parameter estimation in deformable models
Fisker, Rune; Carstensen, Jens Michael
1998-01-01
Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian approach. The method is based on a modified version of the EM algorithm. Experimental results for a deformable template used for textile inspection are presented.
Cosmological models with constant deceleration parameter
Berman, M.S.; de Mello Gomide, F.
1988-02-01
Berman presented elsewhere a law of variation for Hubble's parameter that yields constant deceleration parameter models of the universe. By analyzing Einstein, Pryce-Hoyle and Brans-Dicke cosmologies, we derive here the necessary relations in each model, considering a perfect fluid.
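The constant-deceleration law can be checked directly: with q = -ä a / ȧ², a power-law scale factor a(t) ∝ t^(1/(1+q0)) gives q(t) = q0 at all times. A small numerical sketch (hypothetical value q0 = 0.5, finite differences):

```python
def deceleration(a, t, h=1e-5):
    """q(t) = -a''(t) a(t) / a'(t)^2 via central finite differences."""
    a1 = (a(t + h) - a(t - h)) / (2 * h)
    a2 = (a(t + h) - 2 * a(t) + a(t - h)) / h**2
    return -a2 * a(t) / a1**2

q0 = 0.5                                  # hypothetical constant deceleration
a = lambda t: t ** (1.0 / (1.0 + q0))     # Berman-type power-law scale factor
qs = [deceleration(a, t) for t in (1.0, 2.0, 5.0)]
print(qs)  # each value is close to q0 = 0.5
```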
Trait Characteristics of Diffusion Model Parameters
Anna-Lena Schubert
2016-07-01
Cognitive modeling of response time distributions has seen a huge rise in popularity in individual differences research. In particular, several studies have shown that individual differences in the drift rate parameter of the diffusion model, which reflects the speed of information uptake, are substantially related to individual differences in intelligence. However, if diffusion model parameters are to reflect trait-like properties of cognitive processes, they have to qualify as trait-like variables themselves, i.e., they have to be stable across time and consistent over different situations. To assess their trait characteristics, we conducted a latent state-trait analysis of diffusion model parameters estimated from three response time tasks that 114 participants completed at two laboratory sessions eight months apart. Drift rate, boundary separation, and non-decision time parameters showed great temporal stability over the eight-month period. However, the coefficients of consistency and reliability were only low to moderate, and were highest for the drift rate parameters. These results show that the variance of diffusion model parameters that is consistent across tasks can be regarded as reflecting temporally stable abilities. Moreover, they illustrate the need for using broader batteries of response time tasks in future studies on the relationship between diffusion model parameters and intelligence.
Parameter identification in the logistic STAR model
Ekner, Line Elvstrøm; Nejstgaard, Emil
We propose a new and simple parametrization of the so-called speed of transition parameter of the logistic smooth transition autoregressive (LSTAR) model. The new parametrization highlights that a consequence of the well-known identification problem of the speed of transition parameter is that...
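The identification problem can be seen directly from the logistic transition function G(s; gamma, c) = 1/(1 + exp(-gamma(s - c))): once gamma is large, further increases barely change G except in a shrinking neighbourhood of c, so the data carry little information about gamma. A small illustration with hypothetical values:

```python
import math

def G(s, gamma, c):
    """Logistic transition function of the LSTAR model."""
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

# Away from the location c, gamma = 50 and gamma = 500 are nearly
# indistinguishable: the source of the identification problem.
diff = max(abs(G(s, 50.0, 0.0) - G(s, 500.0, 0.0))
           for s in (-1.0, -0.1, 0.1, 1.0))
print(diff)  # < 0.01
```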
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei
2013-09-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
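As a toy illustration of estimating a PDE parameter without repeatedly solving the PDE, the sketch below (a hypothetical setup, not the article's parameter cascading or Bayesian method) differentiates gridded, noise-free data for the heat equation u_t = D u_xx numerically and then solves for D by least squares:

```python
import numpy as np

# Toy two-stage estimate of the diffusivity D in u_t = D u_xx:
# differentiate the data on a grid, then fit D by least squares.
D_true = 0.3
x = np.linspace(0.0, 1.0, 201)
t = np.linspace(0.0, 0.5, 201)
X, T = np.meshgrid(x, t, indexing='ij')
U = np.exp(-D_true * np.pi**2 * T) * np.sin(np.pi * X)  # exact solution

dx, dt = x[1] - x[0], t[1] - t[0]
U_t = np.gradient(U, dt, axis=1)
U_xx = np.gradient(np.gradient(U, dx, axis=0), dx, axis=0)

# Least-squares fit over interior points (boundary differences are one-sided)
a, b = U_xx[2:-2, 2:-2].ravel(), U_t[2:-2, 2:-2].ravel()
D_hat = (a @ b) / (a @ a)
print(D_hat)  # close to D_true = 0.3
```

With measurement noise the differentiation step must be regularized (e.g. by the basis function expansion used in the article), which is exactly where the cascading and Bayesian machinery comes in.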
Relevant parameter space and stability of spherical tokamaks with a plasma center column
Lampugnani, L. G.; Garcia-Martinez, P. L.; Farengo, R.
2017-02-01
A spherical tokamak (ST) with a plasma center column (PCC) can be formed inside a simply connected chamber via driven magnetic relaxation. From a practical perspective, the ST-PCC could overcome many difficulties associated with the material center column of the standard ST reactor design. Moreover, the ST-PCC concept can be regarded as an advanced helicity injected device that would enable novel experiments on the key physics of magnetic relaxation and reconnection, because the concept includes not only a PCC but also a coaxial helicity injector (CHI). This combination implies an improved level of flexibility in the helicity injection scheme required for the formation and sustainment phases. In this work, the parameter space determining the magnetic structure of the ST-PCC equilibria is studied under the assumption of fully relaxed plasmas. In particular, it is shown that the effect of the external bias field of the PCC and the CHI essentially depends on a single parameter that measures the relative amount of flux of these two elements. The effect of plasma elongation on the safety factor profile and on the stability to the tilt mode is also analyzed. In the first part of this work, the stability of the system is explained in terms of the minimum energy principle, and relevant stability maps are constructed. While this picture provides adequate insight into the underlying physics of the instability, it does not include the stabilizing effect of line-tying at the electrodes. In the second part, a dynamical stability analysis of the ST-PCC configurations, including the effect of line-tying, is performed by numerically solving the magnetohydrodynamic equations. A significant stability enhancement is observed when the PCC contains more than 70% of the total external bias flux and the elongation is not higher than two.
Application of lumped-parameter models
Ibsen, Lars Bo; Liingaard, M.
2006-12-15
This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded in a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil. Subsequently, the assembly of the dynamic stiffness matrix for the foundation is considered, and the solution for obtaining the steady-state response when using lumped-parameter models is given.
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Hadiyanto Hadiyanto
2012-05-01
Bread product quality is highly dependent on the baking process. A model for the development of product quality, obtained from quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first, heat and mass transfer related parameters, then the parameters related to product transformations, and finally product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well the behavior under dynamic convective operation and under combined convective and microwave operation. It is expected that the agreement between model and baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels.
Downward, L.; Booth, C.H.; Lukens, W.W.; Bridges, F.
2006-07-25
A general problem when fitting EXAFS data is determining whether particular parameters are statistically significant. The F-test is an excellent way of determining relevance in EXAFS because it relies only on the ratio of the fit residuals of two possible models, so the data errors approximately cancel. Although this test is widely used in crystallography (where it is often called a 'Hamilton test') and has been properly applied to EXAFS data in the past, it is rarely used in EXAFS analysis. We have implemented a variation of the F-test adapted for EXAFS data analysis in the RSXAP analysis package, and demonstrate its applicability with a few examples, including determining whether a particular scattering shell is warranted, and differentiating between two possible species or two possible structures in a given shell.
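In its generic nested-model form (the crystallographic Hamilton test is a variant based on residual ratios), the F-test needs only the two fit residuals, the parameter counts, and the number of independent points. A minimal sketch with hypothetical numbers:

```python
def f_test(res0, p0, res1, p1, n):
    """F-statistic for nested fits; model 1 has p1 > p0 parameters.

    res0 and res1 are sums of squared fit residuals and n is the number of
    independent data points. Because only the ratio of residuals enters,
    (approximately) constant data errors cancel, which is what makes the
    test attractive for EXAFS.
    """
    return ((res0 - res1) / (p1 - p0)) / (res1 / (n - p1))

# Hypothetical numbers: adding a scattering shell (2 extra parameters)
# drops the residual from 4.0 to 2.5 with n = 20 independent points.
F = f_test(4.0, 4, 2.5, 6, 20)
print(F)  # compare against an F(2, 14) critical value for significance
```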
Statefinder parameters in two dark energy models
Panotopoulos, Grigoris
2007-01-01
The statefinder parameters (r, s) in two dark energy models are studied. In the first, we discuss in four-dimensional General Relativity a two-fluid model in which dark energy and dark matter are allowed to interact with each other. In the second model, we consider the DGP brane model generalized by taking into account a possible energy exchange between the brane and the bulk. We determine the values of the statefinder parameters that correspond to the unique attractor of the system at hand. Furthermore, we produce plots showing s and r as functions of redshift, and the (s-r) plane for each model.
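As a concrete check of the statefinder definitions r = a'''/(a H^3) and s = (r - 1)/(3(q - 1/2)): for a flat LambdaCDM-like scale factor a(t) ∝ sinh^(2/3)(t), the well-known fixed point is (r, s) = (1, 0). A finite-difference sketch (time units are hypothetical; the fixed point is independent of them):

```python
import math

def derivs(f, t, h=1e-3):
    """First three derivatives of f at t by central finite differences."""
    f_2, f_1, f0, f1, f2 = (f(t + k * h) for k in (-2, -1, 0, 1, 2))
    d1 = (f1 - f_1) / (2 * h)
    d2 = (f1 - 2 * f0 + f_1) / h**2
    d3 = (f2 - 2 * f1 + 2 * f_1 - f_2) / (2 * h**3)
    return d1, d2, d3

a = lambda t: math.sinh(t) ** (2.0 / 3.0)  # flat LambdaCDM-like scale factor
results = []
for t in (0.5, 1.0, 2.0):
    a1, a2, a3 = derivs(a, t)
    r = a3 * a(t) ** 2 / a1 ** 3           # r = a''' / (a H^3), with H = a'/a
    q = -a2 * a(t) / a1 ** 2               # deceleration parameter
    s = (r - 1.0) / (3.0 * (q - 0.5))
    results.append((r, s))
print(results)  # r near 1 and s near 0 at every epoch
```

Departures of a measured (r, s) trajectory from this fixed point are what the statefinder diagnostic uses to discriminate dark energy models.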
Wind Farm Decentralized Dynamic Modeling With Parameters
Soltani, Mohsen; Shakeri, Sayyed Mojtaba; Grunnet, Jacob Deleuran;
2010-01-01
Development of dynamic wind flow models for wind farms is part of the research in the European FP7 project AEOLUS. The objective of this report is to provide decentralized dynamic wind flow models with parameters. The report presents a structure for decentralized flow models with inputs from...
Setting Parameters for Biological Models With ANIMO
Schivo, Stefano; Scholma, Jetse; Karperien, Hermanus Bernardus Johannes; Post, Janine Nicole; van de Pol, Jan Cornelis; Langerak, Romanus; André, Étienne; Frehse, Goran
2014-01-01
ANIMO (Analysis of Networks with Interactive MOdeling) is a software tool for modeling biological networks, such as signaling, metabolic or gene networks. An ANIMO model is essentially the sum of a network topology and a number of interaction parameters. The topology describes the interactions...
Delineating Parameter Unidentifiabilities in Complex Models
Raman, Dhruva V; Papachristodoulou, Antonis
2016-01-01
Scientists use mathematical modelling to understand and predict the properties of complex physical systems. In highly parameterised models there often exist relationships between parameters over which model predictions are identical, or nearly so. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, and the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast timescale subsystems, as well as the regimes in which such approximations are valid. We base our algorithm on a novel quantification of regional parametric sensitivity: multiscale sloppiness. Traditional...
Parameter Estimation, Model Reduction and Quantum Filtering
Chase, Bradley A
2009-01-01
This dissertation explores the topics of parameter estimation and model reduction in the context of quantum filtering. Chapters 2 and 3 provide a review of classical and quantum probability theory, stochastic calculus and filtering. Chapter 4 studies the problem of quantum parameter estimation and introduces the quantum particle filter as a practical computational method for parameter estimation via continuous measurement. Chapter 5 applies these techniques in magnetometry and studies the estimator's uncertainty scalings in a double-pass atomic magnetometer. Chapter 6 presents an efficient feedback controller for continuous-time quantum error correction. Chapter 7 presents an exact model of symmetric processes of collective qubit systems.
Relevant Criteria for Testing the Quality of Turbulence Models
Frandsen, Sten; Jørgensen, Hans E.; Sørensen, John Dalsgaard
2007-01-01
Seeking relevant criteria for testing the quality of turbulence models, the scale of turbulence and the gust factor have been estimated from data and compared with predictions from first-order models of these two quantities. It is found that the mean of the measured length scales is approx. 10% smaller than the IEC model, for wind turbine hub height levels. The mean is only marginally dependent on trends in time series. It is also found that the coefficient of variation of the measured length scales is about 50%. 3sec and 10sec pre-averaging of wind speed data are relevant for MW-size wind turbines when seeking wind characteristics that correspond to one blade and the entire rotor, respectively. For heights exceeding 50-60m the gust factor increases with wind speed. For heights larger than 60-80m, present assumptions on the value of the gust factor are significantly conservative, both for 3...
Parameter Estimation for Thurstone Choice Models
Vojnovic, Milan [London School of Economics (United Kingdom)]; Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]
2017-04-24
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing the likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of the comparison sets. In particular, we find that for unbiased comparison sets of a given cardinality (where, in expectation, each comparison set of that cardinality occurs the same number of times), for a broad class of Thurstone choice models the mean squared error decreases with the cardinality of the comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we find that there exist Thurstone choice models for which the mean squared error of the maximum likelihood estimator can decrease much faster with the cardinality of the comparison sets. We report an empirical evaluation of some claims and key parameters revealed by theory, using both synthetic and real-world input data from popular sport competitions and online labor platforms.
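For the Bradley-Terry special case, the maximum likelihood strengths can be computed with the classical minorization (Zermelo) iteration. A minimal sketch on a hypothetical win-count matrix, not the paper's estimator analysis:

```python
import numpy as np

# Minimal Bradley-Terry fit via the classical minorization (Zermelo)
# iteration; wins[i, j] is the number of times item i beat item j.
wins = np.array([[0., 7., 8.],
                 [3., 0., 6.],
                 [2., 4., 0.]])
n = wins + wins.T                  # comparisons per pair
W = wins.sum(axis=1)               # total wins per item
w = np.ones(3)                     # strength parameters
for _ in range(200):
    denom = np.array([sum(n[i, j] / (w[i] + w[j])
                          for j in range(3) if j != i) for i in range(3)])
    w = W / denom
    w /= w.sum()                   # fix the scale; only ratios matter
print(w)                           # item 0 strongest, item 2 weakest
```

With this balanced design (10 comparisons per pair), the fitted ordering follows the win totals 15 > 9 > 6.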
Graphene Based Waveguide Polarizers: In-Depth Physical Analysis and Relevant Parameters
de Oliveira, Rafael E P
2015-01-01
Optical polarizing devices exploiting graphene embedded in waveguides have been demonstrated in the literature recently and both the TE- and TM-pass behaviors were reported. The determination of the passing polarization is usually attributed to graphene's Fermi level (and, therefore, doping level), with, however, no direct confirmation of this assumption provided. Here we show, through numerical simulation, that rather than graphene's Fermi level, the passing polarization is determined by waveguide parameters, such as the superstrate refractive index and the waveguide's height. The results provide a consistent explanation for experimental results reported in the literature. In addition, we show that with an accurate graphene modeling, a waveguide cannot be switched between TE pass and TM pass via Fermi level tuning. Therefore, the usually overlooked contribution of the waveguide design is shown to be essential for the development of optimized TE- or TM-pass polarizers, which we show to be due to the control i...
QCD-inspired determination of NJL model parameters
Springer, Paul; Rechenberger, Stefan; Rennecke, Fabian
2016-01-01
The QCD phase diagram at finite temperature and density has attracted considerable interest over many decades now, not least because of its relevance for a better understanding of heavy-ion collision experiments. Models provide some insight into the QCD phase structure but usually rely on various parameters. Based on renormalization group arguments, we discuss how the parameters of QCD low-energy models can be determined from the fundamental theory of the strong interaction. We particularly focus on a determination of the temperature dependence of these parameters in this work and comment on the effect of a finite quark chemical potential. We present first results and argue that our findings can be used to improve the predictive power of future model calculations.
Delineating parameter unidentifiabilities in complex models
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
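The local (Fisher information) picture that the paper argues is insufficient can still illustrate sloppiness. For a sum of two exponentials with nearly equal rates (hypothetical values k1 = 1.0, k2 = 1.1), the eigenvalues of J^T J split by orders of magnitude, so one parameter combination is far better constrained than the other:

```python
import numpy as np

# Local sloppiness: the Gauss-Newton Fisher information J^T J for the model
# y(t) = exp(-k1 t) + exp(-k2 t) is ill conditioned when k1 is close to k2.
t = np.linspace(0.1, 5.0, 50)
k1, k2 = 1.0, 1.1
J = np.column_stack([-t * np.exp(-k1 * t),    # dy/dk1
                     -t * np.exp(-k2 * t)])   # dy/dk2
eig = np.linalg.eigvalsh(J.T @ J)             # ascending eigenvalues
print(eig, eig[-1] / eig[0])                  # large eigenvalue ratio
```

The 'multiscale' analysis in the paper extends this beyond the infinitesimal-uncertainty regime captured by J^T J.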
Systematic parameter inference in stochastic mesoscopic modeling
Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
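The "sparse coefficients from few samples" step can be sketched with orthogonal matching pursuit, used here as a simple stand-in for the paper's compressive sensing solver; the design matrix, sizes, and support indices are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_terms, k = 60, 80, 3          # fewer samples than unknown terms
A = rng.normal(size=(n_samples, n_terms)) / np.sqrt(n_samples)
coef = np.zeros(n_terms)
coef[[5, 17, 42]] = [2.0, -1.5, 1.0]       # truly sparse coefficient vector
y = A @ coef                               # noise-free observations

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then refit by least squares on the chosen support.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    sub = A[:, support]
    sol, *_ = np.linalg.lstsq(sub, y, rcond=None)
    r = y - sub @ sol
print(sorted(support), np.linalg.norm(r))
```

In the gPC setting, the columns of A would be polynomial basis terms evaluated at the sampled parameter sets rather than random Gaussians.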
Models and parameters for environmental radiological assessments
Miller, C W (ed.)
1984-01-01
This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)
Investigations on caesium-free alternatives for H- formation at ion source relevant parameters
Kurutz, U.; Fantz, U.
2015-04-01
Negative hydrogen ions are efficiently produced in ion sources by the application of caesium, which lowers the work function of a converter surface and thereby sustains the direct conversion of impinging hydrogen atoms and positive ions into negative ions. However, the complex caesium chemistry and dynamics lead to long-term drifts that affect the stability and reliability of negative ion sources. To overcome these drawbacks, caesium-free alternatives for efficient negative ion formation are investigated at the flexible laboratory setup HOMER (HOMogenous Electron cyclotron Resonance plasma). Using a meshed grid, the tandem principle is applied, allowing investigation of material-induced negative ion formation under plasma parameters relevant for ion source operation. The effect of different sample materials on the ratio of the negative ion density to the electron density, nH-/ne, is compared to that of a stainless steel reference sample and investigated by means of laser photodetachment in a pressure range from 0.3 to 3 Pa. For the stainless steel sample no surface-induced effect on the negative ion density is present, and the measured negative ion densities result from pure volume formation and destruction processes. In a first step, the dependence of nH-/ne on the sample distance was investigated for a caesiated stainless steel sample. At a distance of 0.5 cm at 0.3 Pa the density ratio is 3 times that of the reference sample, confirming the surface production of negative ions. In contrast, for the caesium-free materials tantalum and tungsten, the same dependence of nH-/ne on pressure and distance as for the stainless steel reference sample was obtained within the error margins: a density ratio of around 14.5% is measured at 4.5 cm sample distance and 0.3 Pa, decreasing linearly with decreasing distance to 7% at 1.5 cm. Thus, tantalum and tungsten do not
Potential of polarization lidar to provide profiles of CCN- and INP-relevant aerosol parameters
Mamouri, Rodanthi-Elisavet; Ansmann, Albert
2016-05-01
We investigate the potential of polarization lidar to provide vertical profiles of aerosol parameters from which cloud condensation nucleus (CCN) and ice nucleating particle (INP) number concentrations can be estimated. We show that height profiles of the particle number concentrations n50,dry (dry aerosol particles with radius > 50 nm, the reservoir of CCN in the case of marine and continental non-desert aerosols), n100,dry (particles with dry radius > 100 nm, the reservoir of desert dust CCN), and n250,dry (particles with dry radius > 250 nm, the reservoir of favorable INP), as well as profiles of the particle surface area concentration sdry (used in INP parameterizations), can be retrieved from lidar-derived aerosol extinction coefficients σ with relative uncertainties of a factor of 1.5-2 in the case of n50,dry and n100,dry and of about 25-50% in the case of n250,dry and sdry. Of key importance is the potential of polarization lidar to distinguish and separate the optical properties of desert aerosols from non-desert aerosols such as continental and marine particles. We investigate the relationship between σ, measured at ambient atmospheric conditions, and n50,dry for marine and continental aerosols, n100,dry for desert dust particles, and n250,dry and sdry for three aerosol types (desert, non-desert continental, marine) and for the main lidar wavelengths of 355, 532, and 1064 nm. Our study is based on multiyear Aerosol Robotic Network (AERONET) photometer observations of aerosol optical thickness and column-integrated particle size distribution at Leipzig, Germany, and Limassol, Cyprus, which cover all realistic aerosol mixtures. We further include AERONET data from field campaigns in Morocco, Cabo Verde, and Barbados, which provide pure dust and pure marine aerosol scenarios. By means of a simple CCN parameterization (with n50,dry or n100,dry as input) and available INP parameterization schemes (with n250,dry and sdry as input) we finally compute
Estimation of Model Parameters for Steerable Needles
Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.
2010-01-01
Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%. PMID:21643451
An Optimization Model of Tunnel Support Parameters
Su Lijuan
2015-05-01
An optimization model was developed to obtain ideal values for the primary support parameters of tunnels, which are specified only as wide ranges in high-speed railway design codes when the surrounding rocks are at levels III, IV, and V. First, several sets of experiments were designed and simulated with the FLAC3D software under an orthogonal experimental design. Six factors were considered: level of surrounding rock, buried depth of the tunnel, lateral pressure coefficient, anchor spacing, anchor length, and shotcrete thickness. Second, a regression equation was generated by a multiple linear regression analysis of the simulation results. Finally, the optimization model of support parameters was obtained by solving the regression equation with the least squares method. In practical projects, optimized values of the support parameters can be obtained by substituting known parameters into the proposed model. In this work, the proposed model was verified on the Liuyang River Tunnel Project. Results show that the optimization model significantly reduces related costs. The proposed model can also serve as a reliable reference for other high-speed railway tunnels.
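The workflow this abstract describes — simulate designs, fit a multiple linear regression, solve by least squares — can be sketched generically. All factor ranges, coefficients, and the response below are invented placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design for 30 simulated tunnel cases: an intercept plus the
# six factors named in the abstract (all ranges are invented).
X = np.column_stack([
    np.ones(30),
    rng.integers(3, 6, 30).astype(float),  # surrounding rock level (III-V)
    rng.uniform(50, 300, 30),              # buried depth (m)
    rng.uniform(0.5, 1.5, 30),             # lateral pressure coefficient
    rng.uniform(0.8, 1.5, 30),             # anchor spacing (m)
    rng.uniform(2.0, 4.0, 30),             # anchor length (m)
    rng.uniform(0.10, 0.30, 30),           # shotcrete thickness (m)
])
beta_true = np.array([5.0, -1.2, 0.01, 2.0, 3.0, -0.8, -6.0])
y = X @ beta_true + rng.normal(0, 0.1, 30)  # simulated response (cost index)

# Multiple linear regression fitted by least squares, as in the paper's
# second and third steps.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With the fitted coefficients in hand, "optimized" support parameters follow by substituting the known site parameters into the regression equation.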
Analysis of Modeling Parameters on Threaded Screws.
Vigil, Miquela S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brake, Matthew Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vangoethem, Douglas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-06-01
Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not settled on an optimal method for modeling these bolted joints appropriately. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper explores different approaches to modeling a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influence of these parameters and will aid in finding the optimal method for modeling bolted connections.
Bioprinting towards Physiologically Relevant Tissue Models for Pharmaceutics.
Peng, Weijie; Unutmaz, Derya; Ozbolat, Ibrahim T
2016-09-01
Improving the ability to predict the efficacy and toxicity of drug candidates earlier in the drug discovery process will speed up the introduction of new drugs into clinics. 3D in vitro systems have significantly advanced the drug screening process as 3D tissue models can closely mimic native tissues and, in some cases, the physiological response to drugs. Among various in vitro systems, bioprinting is a highly promising technology possessing several advantages such as tailored microarchitecture, high-throughput capability, coculture ability, and low risk of cross-contamination. In this opinion article, we discuss the currently available tissue models in pharmaceutics along with their limitations and highlight the possibilities of bioprinting physiologically relevant tissue models, which hold great potential in drug testing, high-throughput screening, and disease modeling.
The Lund Model at Nonzero Impact Parameter
Janik, Romuald A.; Peschanski, Robi
2003-01-01
We extend the formulation of the longitudinal 1+1 dimensional Lund model to nonzero impact parameter using the minimal area assumption. Complete formulae for the string breaking probability and the momenta of the produced mesons are derived using the string worldsheet Minkowskian helicoid geometry. For strings stretched into the transverse dimension, we find a probability distribution whose slope is linear in m_T, similar to that of statistical models but without any thermalization assumptions.
IMPROVEMENT OF FLUID PIPE LUMPED PARAMETER MODEL
Kong Xiaowu; Wei Jianhua; Qiu Minxiu; Wu Genmao
2004-01-01
The traditional lumped parameter model of a fluid pipe is introduced and its drawbacks are pointed out. Furthermore, two suggestions are put forward to remove these drawbacks. First, the structure of the equivalent circuit is modified; second, the evaluation of the equivalent fluid resistance is changed to take frequency-dependent friction into account. Both simulation and experiment prove that this model precisely characterizes the dynamic behavior of fluid in a pipe.
Consistent Stochastic Modelling of Meteocean Design Parameters
Sørensen, John Dalsgaard; Sterndorff, M. J.
2000-01-01
Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current velocity, and water level is presented. The stochastic model includes statistical uncertainty and dependency between the four stochastic variables. Further, a new stochastic model for annual maximum directional significant wave heights is presented. The model includes dependency between the maximum wave height from neighboring directional sectors. Numerical examples are presented where the models are calibrated using the Maximum Likelihood method to data from the central part of the North Sea. The calibration of the directional distributions is made such that the stochastic model for the omnidirectional…
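A toy version of calibrating an annual-maximum model: the sketch below fits a Gumbel distribution to synthetic annual maximum significant wave heights by the method of moments (a closed-form stand-in for the Maximum Likelihood calibration used in the paper) and derives a 100-year return level. All numbers are invented:

```python
import math
import random

random.seed(1)

# Synthetic annual-maximum significant wave heights (m) drawn from a Gumbel
# distribution with hypothetical location mu = 6.0 m and scale beta = 0.8 m.
mu_true, beta_true = 6.0, 0.8
hs_max = [mu_true - beta_true * math.log(-math.log(random.random()))
          for _ in range(2000)]

# Method-of-moments Gumbel fit (stand-in for ML calibration).
n = len(hs_max)
mean = sum(hs_max) / n
var = sum((h - mean) ** 2 for h in hs_max) / (n - 1)
beta_hat = math.sqrt(6.0 * var) / math.pi
mu_hat = mean - 0.5772 * beta_hat          # Euler-Mascheroni constant

# 100-year return level: the quantile with annual exceedance probability 1/100.
h100 = mu_hat - beta_hat * math.log(-math.log(1.0 - 1.0 / 100.0))
```

A full metocean model would add the dependency structure between wave height, wind, current, and water level; the univariate fit above is only the marginal building block.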
Testing Linear Models for Ability Parameters in Item Response Models
Glas, Cees A.W.; Hendrawan, Irene
2005-01-01
Methods for testing hypotheses concerning the regression parameters in linear models for the latent person parameters in item response models are presented. Three tests are outlined: a likelihood ratio test, a Lagrange multiplier test, and a Wald test. The tests are derived in a marginal maximum likelihood framework.
Páez, Rocío Isabel; Efthymiopoulos, Christos
2015-02-01
The possibility that giant extrasolar planets could have small Trojan co-orbital companions has been examined in the literature from both viewpoints of the origin and dynamical stability of such a configuration. Here we aim to investigate the dynamics of hypothetical small Trojan exoplanets in domains of secondary resonances embedded within the tadpole domain of motion. To this end, we consider the limit of a massless Trojan companion of a giant planet. Without other planets, this is a case of the elliptic restricted three body problem (ERTBP). The presence of additional planets (hereafter referred to as the restricted multi-planet problem, RMPP) induces new direct and indirect secular effects on the dynamics of the Trojan body. The paper contains a theoretical and a numerical part. In the theoretical part, we develop a Hamiltonian formalism in action-angle variables, which allows us to treat in a unified way resonant dynamics and secular effects on the Trojan body in both the ERTBP and the RMPP. In both cases, our formalism leads to a decomposition of the Hamiltonian in two parts, H = H_b + H_sec. The first part, H_b, called the basic model, describes resonant dynamics in the short-period (epicyclic) and synodic (libration) degrees of freedom, while H_sec contains only terms depending trigonometrically on slow (secular) angles. H_b is formally identical in the ERTBP and the RMPP, apart from a re-definition of some angular variables. An important physical consequence of this analysis is that the slow chaotic diffusion along resonances proceeds in both the ERTBP and the RMPP by a qualitatively similar dynamical mechanism. We found that this is best approximated by the paradigm of `modulational diffusion'. In the paper's numerical part, we then focus on the ERTBP in order to make a detailed numerical demonstration of the chaotic diffusion process along resonances. Using color stability maps, we first provide a survey of the resonant web for characteristic mass parameter values of the primary, in which the
Comparing the ecological relevance of four wave exposure models
Sundblad, G.; Bekkby, T.; Isæus, M.; Nikolopoulos, A.; Norderhaug, K. M.; Rinde, E.
2014-03-01
Wave exposure is one of the main structuring forces in the marine environment. Methods that enable large scale quantification of environmental variables have become increasingly important for predicting marine communities in the context of spatial planning and coastal zone management. Existing methods range from cartographic solutions to numerical hydrodynamic simulations, and differ in the scale and spatial coverage of their outputs. Using a biological exposure index we compared the performance of four wave exposure models ranging from simple to more advanced techniques. All models were found to be related to the biological exposure index and their performance, measured as bootstrapped R2 distributions, overlapped. Qualitatively, there were differences in the spatial patterns indicating higher complexity with more advanced techniques. In order to create complex spatial patterns wave exposure models should include diffraction, especially in coastal areas rich in islands. The inclusion of wind strength and frequency, in addition to wind direction and bathymetry, further tended to increase the amount of explained variation. The large potential of high-resolution numerical models to explain the observed patterns of species distribution in complex coastal areas provide exciting opportunities for future research. Easy access to relevant wave exposure models will aid large scale habitat classification systems and the continuously growing field of marine species distribution modelling, ultimately serving marine spatial management and planning.
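The comparison metric named above — "bootstrapped R² distributions" rather than single point estimates — is what makes overlap between competing models visible. A minimal sketch on invented exposure/biology pairs:

```python
import random

random.seed(0)
# Synthetic stand-ins: a wave-exposure model output and a biological
# exposure index at 60 sites (values invented for illustration).
exposure = [random.uniform(0, 10) for _ in range(60)]
bio_index = [0.7 * x + random.gauss(0, 1.0) for x in exposure]

def r_squared(x, y):
    """Squared Pearson correlation of paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Bootstrap resampling of sites gives a distribution of R^2 values.
boot_r2 = []
for _ in range(500):
    idx = [random.randrange(60) for _ in range(60)]
    boot_r2.append(r_squared([exposure[i] for i in idx],
                             [bio_index[i] for i in idx]))
boot_r2.sort()
ci90 = (boot_r2[25], boot_r2[474])   # empirical 90% interval
```

Two wave-exposure models would each get such a distribution; if the intervals overlap, as reported in the paper, neither model clearly outperforms the other.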
Modelling spin Hamiltonian parameters of molecular nanomagnets.
Gupta, Tulika; Rajaraman, Gopalan
2016-07-12
Molecular nanomagnets encompass a wide range of coordination complexes possessing several potential applications. A formidable challenge in realizing these potential applications lies in controlling the magnetic properties of these clusters. Microscopic spin Hamiltonian (SH) parameters describe the magnetic properties of these clusters, and viable ways to control these SH parameters are highly desirable. Computational tools play a proactive role in this area, where SH parameters such as the isotropic exchange interaction (J), anisotropic exchange interaction (Jx, Jy, Jz), double exchange interaction (B), zero-field splitting parameters (D, E) and g-tensors can be computed reliably using X-ray structures. In this feature article, we have attempted to provide a holistic view of the modelling of these SH parameters of molecular magnets. The determination of J covers various classes of molecules, from di- and polynuclear Mn complexes to the {3d-Gd}, {Gd-Gd} and {Gd-2p} classes of complexes. The estimation of anisotropic exchange coupling includes the exchange between an isotropic metal ion and an orbitally degenerate 3d/4d/5d metal ion. The double-exchange section contains some illustrative examples of mixed-valence systems, and the section on the estimation of zfs parameters covers some mononuclear transition metal complexes possessing very large axial zfs parameters. The section on the computation of g-anisotropy exclusively covers studies on mononuclear Dy(III) and Er(III) single-ion magnets. The examples depicted in this article clearly illustrate that computational tools not only aid in interpreting and rationalizing the observed magnetic properties but possess the potential to predict new-generation MNMs.
Are animal models relevant to key aspects of human parturition?
Mitchell, Bryan F; Taggart, Michael J
2009-09-01
Preterm birth remains the most serious complication of pregnancy and is associated with increased rates of infant death or permanent neurodevelopmental disability. Our understanding of the regulation of parturition remains inadequate. The scientific literature, largely derived from rodent animal models, suggests two major mechanisms regulating the timing of parturition: the withdrawal of the steroid hormone progesterone and a proinflammatory response by the immune system. However, available evidence strongly suggests that parturition in the human has significantly different regulators and mediators from those in most of the animal models. Our objectives are to critically review the data and concepts that have arisen from use of animal models for parturition and to rationalize the use of a new model. Many animal models have contributed to advances in our understanding of the regulation of parturition. However, we suggest that those animals dependent on progesterone withdrawal to initiate parturition clearly have a limitation to their translation to the human. In such models, a linear sequence of events (e.g., luteolysis, progesterone withdrawal, uterine activation, parturition) gives rise to the concept of a "trigger" mechanism. Conversely, we propose that human parturition may arise from the concomitant maturation of several systems in parallel. We have termed this novel concept "modular accumulation of physiological systems" (MAPS). We also emphasize the urgency to determine the precise role of the immune system in the process of parturition in situations other than intrauterine infection. Finally, we accentuate the need to develop a nonprimate animal model whose physiology is more relevant to human parturition. We suggest that the guinea pig displays several key physiological characteristics of gestation that more closely resemble human pregnancy than do currently favored animal models. We conclude that the application of novel concepts and new models are
Systematic parameter inference in stochastic mesoscopic modeling
Lei, Huan; Li, Zhen; Karniadakis, George
2016-01-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy-conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms, given the prior knowledge that the coefficients are sparse. The proposed method shows accuracy comparable to the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space.
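The compressive-sensing step — recovering a few dominant gPC coefficients from relatively few simulation samples — can be illustrated with a greedy sparse solver (orthogonal matching pursuit), used here as a simple stand-in rather than the paper's actual method. The matrix and sparsity pattern are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 100, 300                     # few simulation samples, many gPC terms
A = rng.normal(size=(m, p)) / np.sqrt(m)    # sampled basis matrix (toy)
coef_true = np.zeros(p)
coef_true[[5, 40, 120]] = [1.0, -0.7, 0.5]  # a few dominant (sparse) terms
y = A @ coef_true                           # noiseless target-property samples

# Orthogonal matching pursuit: pick the column most correlated with the
# residual, then refit by least squares on the selected support.
support, r = [], y.copy()
for _ in range(3):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ sol

coef_hat = np.zeros(p)
coef_hat[support] = sol
```

With 100 samples and 300 candidate terms the system is underdetermined, yet the sparse coefficient vector is recovered exactly — the point of using sparsity-promoting recovery instead of standard collocation.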
Auxiliary Parameter MCMC for Exponential Random Graph Models
Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro
2016-11-01
Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the development of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
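For background, the core of any such sampler is the Metropolis-Hastings accept/reject step. The sketch below is a generic random-walk Metropolis chain targeting a standard normal density — a stand-in for the intractable ERGM distribution, not the auxiliary-parameter algorithm itself:

```python
import math
import random

random.seed(0)

def log_target(x):
    # Log-density of the target; a standard normal here, whereas an ERGM
    # sampler would evaluate (unnormalized) graph probabilities.
    return -0.5 * x * x

samples, x = [], 0.0
for _ in range(20000):
    prop = x + random.gauss(0.0, 1.0)        # random-walk proposal
    # Accept with probability min(1, pi(prop) / pi(x))
    if math.log(random.random()) < log_target(prop) - log_target(x):
        x = prop
    samples.append(x)

burned = samples[2000:]                      # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
```

The auxiliary-parameter idea in the paper modifies which states the chain may visit so that fewer, cheaper moves suffice; the accept/reject skeleton stays the same.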
Modelling tourists arrival using time varying parameter
Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.
2017-06-01
The importance of tourism and its related sectors for economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work demonstrates the time varying parameter (TVP) technique by modelling the arrival of Korean tourists in Bali. The number of Korean tourists who visited Bali from January 2010 to December 2015 was used as the dependent variable (KOR). The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
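A TVP regression treats the coefficients as a latent state with random-walk dynamics and updates them with a Kalman filter. The sketch below uses two invented regressors and fixed noise settings (and holds the true coefficients constant so convergence is easy to check); it is not the study's data or tuning:

```python
import numpy as np

rng = np.random.default_rng(42)
T, k = 300, 2
beta_true = np.array([2.0, -1.0])   # held fixed here for a clear check
Q = 1e-4 * np.eye(k)                # assumed random-walk (state) noise
R = 0.25                            # assumed observation noise variance

beta = np.zeros(k)                  # filtered coefficient estimate
P = np.eye(k)                       # its covariance
for t in range(T):
    x = rng.normal(size=k)          # regressors (stand-ins for, e.g., WON, INFKR)
    y = x @ beta_true + rng.normal(0.0, np.sqrt(R))
    P = P + Q                       # predict: beta_t = beta_{t-1} + w_t
    S = x @ P @ x + R               # innovation variance
    K = P @ x / S                   # Kalman gain
    beta = beta + K * (y - x @ beta)  # update with y_t = x_t' beta_t + v_t
    P = P - np.outer(K, x) @ P
```

With nonzero Q the same recursion lets the coefficients drift over time, which is what "time varying parameter" buys over ordinary regression.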
Inclusion of Relevance Information in the Term Discrimination Model.
Biru, Tesfaye; And Others
1989-01-01
Discusses the effect of including relevance data on the calculation of term discrimination values in bibliographic databases. Algorithms that calculate the ability of index terms to discriminate between relevant and non-relevant documents are described and tested. The results are discussed in terms of the relationship between term frequency and…
Parameter estimation, model reduction and quantum filtering
Chase, Bradley A.
This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The last is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and therefore must be filtered in order to extract information about the target system. This is analogous to the classical filtering problem in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter when given continuous measurements of the probe system to which it couples. For cases when the probe takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. The technique involves double-passing a probe laser field through the atomic system, giving
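The classical analogue of the quantum particle filtering described above — sequential Bayesian estimation of an unknown constant parameter from noisy observations, with the posterior represented by weighted particles — can be sketched as follows. All numbers are invented:

```python
import math
import random

random.seed(7)
b_true, sigma = 0.6, 0.5       # hypothetical parameter and observation noise
N = 2000
particles = [random.uniform(-2.0, 2.0) for _ in range(N)]  # prior samples

for _ in range(100):
    y = b_true + random.gauss(0.0, sigma)   # noisy observation of the parameter
    # Weight each particle by its likelihood under the new observation.
    w = [math.exp(-(y - p) ** 2 / (2.0 * sigma ** 2)) for p in particles]
    tot = sum(w)
    w = [wi / tot for wi in w]
    # Multinomial resampling plus a small "roughening" jitter to avoid
    # particle impoverishment for this static parameter.
    particles = [p + random.gauss(0.0, 0.01)
                 for p in random.choices(particles, weights=w, k=N)]

b_hat = sum(particles) / N     # posterior-mean estimate of the parameter
```

In the quantum setting the likelihood weights come from the filtering equations conditioned on the continuous measurement record, but the resample-and-reweight skeleton is the same.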
Parameter optimization in S-system models
Vasconcelos Ana
2008-04-01
Background: The inverse problem of identifying the topology of biological networks from their time series responses is a cornerstone challenge in systems biology. We tackle this challenge here through the parameterization of S-system models. It was previously shown that parameter identification can be performed as an optimization based on the decoupling of the differential S-system equations, which results in a set of algebraic equations. Results: A novel parameterization solution is proposed for the identification of S-system models from time series when no information about the network topology is known. The method is based on eigenvector optimization of a matrix formed from multiple regression equations of the linearized decoupled S-system. Furthermore, the algorithm is extended to the optimization of network topologies with constraints on metabolites and fluxes. These constraints rejoin the system in cases where it had been fragmented by decoupling. We demonstrate with synthetic time series why the algorithm can be expected to converge in most cases. Conclusion: A procedure was developed that facilitates automated reverse-engineering tasks for biological networks using S-systems. The proposed method of eigenvector optimization constitutes an advancement over S-system parameter identification from time series using a recent method called Alternating Regression. The proposed method overcomes convergence issues encountered in alternating regression by identifying nonlinear constraints that restrict the search space to computationally feasible solutions. Because the parameter identification is still performed for each metabolite separately, the modularity and linear-time characteristics of the alternating regression method are preserved. Simulation studies illustrate how the proposed algorithm identifies the correct network topology out of a collection of models that all fit the dynamical time series essentially equally well.
Dähne, Sven; Meinecke, Frank C; Haufe, Stefan; Höhne, Johannes; Tangermann, Michael; Müller, Klaus-Robert; Nikulin, Vadim V
2014-02-01
relevant parameters.
Baeza Sanz, Domingo; García del Jalón, Diego
2005-08-01
The biotic composition, structure, and function of aquatic, wetland, and riparian ecosystems depend largely on the hydrological regime (Poff, N.L., Ward, J.V., 1990. Implications of streamflow variability and predictability for lotic community structure: a regional analysis of streamflow patterns. Can. J. Fisheries Aquat. Sci. 46, 1805-1818; Richter, B.D., Baumgartner, J.V., Wiginton, R., Braun, D.P., 1997. How much water does a river need? Freshwater Biol. 37, 231-249). Available flow data for many rivers in the world can be used to validate these ecological theories. There is a demand for studies that use hydrological indices to establish criteria, which serve to group together regime types at a local level. Once this has been done, these hydrologically similar groups can be used to identify communities of living organisms that are linked to specific aspects of the river's behaviour. An approach to characterise flow regimes in the river network of the Tagus basin in Spain is presented. The river Tagus (río Tajo) is one of the seven major rivers of the Iberian peninsula. All hydrological data were acquired from the measurements made in the Tagus basin, at 25 gauging stations. Twelve variables were derived for each gauged site to describe variability and predictability of average streamflow conditions, and to describe the frequency, timing and intensity of high flow and low flow extremes. A hierarchical clustering routine was used to identify similar groups of rivers as defined in terms of similar characteristics of their streamflow regime. The variables were also examined with simple correlations to determine if multicollinearity occurred, in order to reject redundant parameters or to identify similar behaviour trends between pairs of parameters. Some parameters have shown a tendency to increase or decrease along the east-west axis, suggesting that some of the studied characteristics may have a geographical cause. Cluster analysis, with the values of the 12
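The grouping step — hierarchical clustering of gauging stations by their hydrological indices — can be sketched with a naive single-linkage agglomeration. The station indices below are invented two-dimensional stand-ins for the paper's twelve variables:

```python
import math

# Hypothetical standardized flow-regime indices (e.g. variability, timing)
# for eight gauging stations forming two clearly distinct regime groups.
stations = [(0.20, 1.00), (0.30, 1.10), (0.25, 0.90), (0.35, 1.05),
            (1.80, 3.00), (1.90, 3.20), (2.00, 2.90), (1.70, 3.10)]

# Naive single-linkage agglomerative clustering down to two clusters.
clusters = [[i] for i in range(len(stations))]
while len(clusters) > 2:
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            d = min(math.dist(stations[a], stations[b])
                    for a in clusters[i] for b in clusters[j])
            if best is None or d < best[0]:
                best = (d, i, j)
    _, i, j = best
    clusters[i] += clusters[j]      # merge the closest pair of clusters
    del clusters[j]

groups = sorted(sorted(c) for c in clusters)
```

In practice a library routine (e.g. scipy's hierarchical clustering) would be used on the full 25-station, 12-variable matrix, after the correlation screening the abstract describes.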
Modeling of Parameters of Subcritical Assembly SAD
Petrochenkov, S; Puzynin, I
2005-01-01
The accepted conceptual design of the experimental Subcritical Assembly in Dubna (SAD) is based on a MOX core with a nominal unit capacity of 25 kW (thermal). This corresponds to a multiplication coefficient $k_{\rm eff} = 0.95$ and an accelerator beam power of 1 kW. A subcritical assembly driven by the existing 660 MeV proton accelerator at the Joint Institute for Nuclear Research has been modelled in order to choose optimal parameters for future experiments. The Monte Carlo method was used to simulate neutron spectra, energy deposition, and dose calculations. Some of the calculation results are presented in the paper.
de Kam, Pieter-Jan; El Galta, Rachid; Kruithof, Annelieke C; Fennema, Hein; van Lierop, Marie-José; Mihara, Katsuhiro; Burggraaf, Jacobus; Moerland, Matthijs; Peeters, Pierre; Troyer, Matthew D
2013-12-01
This study evaluated the interaction potential between sugammadex and aspirin on platelet aggregation. This was a randomized, double-blind, placebo-controlled, four-period crossover study in 26 healthy adult males. Treatments were i.v. placebo, i.v. sugammadex 4 mg/kg, and i.v. placebo/sugammadex with once-daily oral aspirin 75 mg. The primary objective was to assess the interaction between sugammadex and aspirin on platelet aggregation using collagen-induced whole-blood aggregometry. Effects on activated partial thromboplastin time (APTT) and cutaneous bleeding time were also evaluated. Platelet aggregation and APTT were evaluated by geometric mean ratios, using area-under-effect curves 3-30 minutes after sugammadex/placebo dosing. The bleeding time ratio was evaluated at 5 minutes post-dosing. Non-inferiority margins were pre-specified via literature review. Type I error was controlled using a hierarchical strategy. The ratio for platelet aggregation for aspirin with sugammadex vs. aspirin alone was 1.01, with a lower limit of the two-sided 90% CI of 0.91 (above the non-inferiority margin of 0.75). The ratio for statistical interaction between sugammadex and aspirin on APTT was 1.01, with an upper 90% CI of 1.04 (below the non-inferiority margin of 1.50), and for sugammadex vs. placebo alone was 1.06, with an upper 90% CI of 1.07 (below the non-inferiority margin of 1.50). The ratio for bleeding time for aspirin with sugammadex vs. aspirin plus placebo was 1.20, with an upper 90% CI of 1.45 (below the non-inferiority margin of 1.50). Sugammadex was generally well tolerated. There was no clinically relevant reduction in platelet aggregation with the addition of sugammadex 4 mg/kg to aspirin. Pre-determined non-inferiority margins were not exceeded for bleeding time and APTT.
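The headline statistics — geometric mean ratios with 90% CIs compared against a non-inferiority margin — are computed on the log scale. The sketch below uses simulated per-subject log AUEC ratios, not trial data; the 0.01/0.15 settings and the 0.75 margin check are illustrative:

```python
import math
import random

random.seed(3)

# Simulated per-subject log AUEC ratios (aspirin + sugammadex vs. aspirin
# + placebo) for n = 26 subjects; distribution settings are invented.
n = 26
log_ratio = [random.gauss(0.01, 0.15) for _ in range(n)]

mean = sum(log_ratio) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in log_ratio) / (n - 1))
se = sd / math.sqrt(n)
t90 = 1.708                        # two-sided 90% t critical value, df = 25

gmr = math.exp(mean)               # geometric mean ratio
lo, hi = math.exp(mean - t90 * se), math.exp(mean + t90 * se)
non_inferior = lo > 0.75           # pre-specified margin for platelet aggregation
```

Exponentiating the interval back from the log scale is what makes a margin like 0.75 (a 25% reduction) directly comparable to the reported lower CI limit of 0.91.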
Baker Syed; Poskar C; Junker Björn
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of sucrose accumulation in sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. Wh...
Moose models with vanishing $S$ parameter
Casalbuoni, R; Dominici, Daniele
2004-01-01
In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the $S$ parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on $K$ SU(2) gauge groups, $K+1$ chiral fields and electroweak groups $SU(2)_L$ and $U(1)_Y$ at the ends of the chain of the moose. $S$ vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical non-local field connecting the two ends of the moose. The model then acquires an additional custodial symmetry which protects this result. We also examine the possibility of a strong suppression of $S$ through an exponential behavior of the link couplings, as suggested by the Randall-Sundrum metric.
Model parameters for simulation of physiological lipids
McGlinchey, Nicholas
2016-01-01
Coarse grain simulation of proteins in their physiological membrane environment can offer insight across timescales, but requires a comprehensive force field. Parameters are explored for multicomponent bilayers composed of unsaturated lipids DOPC and DOPE, mixed‐chain saturation POPC and POPE, and anionic lipids found in bacteria: POPG and cardiolipin. A nonbond representation obtained from multiscale force matching is adapted for these lipids and combined with an improved bonding description of cholesterol. Equilibrating the area per lipid yields robust bilayer simulations and properties for common lipid mixtures with the exception of pure DOPE, which has a known tendency to form nonlamellar phase. The models maintain consistency with an existing lipid–protein interaction model, making the force field of general utility for studying membrane proteins in physiologically representative bilayers. PMID:26864972
Potential of polarization lidar to provide profiles of CCN- and INP-relevant aerosol parameters
R. E. Mamouri
2015-12-01
We investigate the potential of polarization lidar to provide vertical profiles of aerosol parameters from which cloud condensation nucleus (CCN) and ice-nucleating particle (INP) number concentrations can be estimated. We show that height profiles of number concentrations of aerosol particles with radius > 50 nm (APC50, the reservoir of favorable CCN) and with radius > 250 nm (APC250, the reservoir of favorable INP), as well as profiles of the aerosol particle surface area concentration (ASC) used in INP parameterizations, can be retrieved from lidar-derived aerosol extinction coefficients (AEC) with relative uncertainties of a factor of around 2 (APC50) and of about 25–50 % (APC250, ASC). Of key importance is the potential of polarization lidar to identify mineral dust particles and to distinguish and separate the aerosol properties of basic aerosol types such as mineral dust and continental pollution (haze, smoke). We investigate the relationship between AEC and APC50, APC250, and ASC for the main lidar wavelengths of 355, 532 and 1064 nm and the main aerosol types (dust, pollution, marine). Our study is based on multiyear Aerosol Robotic Network (AERONET) photometer observations of aerosol optical thickness and column-integrated particle size distribution at Leipzig, Germany, and Limassol, Cyprus, which cover all realistic aerosol mixtures of continental pollution, mineral dust, and marine aerosol. We further include AERONET data from field campaigns in Morocco, Cabo Verde, and Barbados, which provide pure dust and pure marine aerosol scenarios. By means of a simple relationship between APC50 and the CCN-reservoir particles (APCCCN), and published INP parameterization schemes (with APC250 and ASC as input), we finally compute APCCCN and INP concentration profiles. We apply the full methodology to a lidar observation of a heavy dust outbreak crossing Cyprus with dust up to 8 km height and to a case during which anthropogenic pollution dominated.
On the Influence of Material Parameters in a Complex Material Model for Powder Compaction
Staf, Hjalmar; Lindskog, Per; Andersson, Daniel C.; Larsson, Per-Lennart
2016-10-01
Parameters in a complex material model for powder compaction, based on a continuum mechanics approach, are evaluated using real insert geometries. The parameter sensitivity with respect to density and stress after compaction, pertinent to a wide range of geometries, is studied in order to investigate completeness and limitations of the material model. Finite element simulations with varied material parameters are used to build surrogate models for the sensitivity study. The conclusion from this analysis is that a simplification of the material model is relevant, especially for simple insert geometries. Parameters linked to anisotropy and the plastic strain evolution angle have a small impact on the final result.
Quijano, Laura; Chaparro, Marcos A. E.; Marié, Débora C.; Gaspar, Leticia; Navas, Ana
2014-09-01
The main sources of magnetic minerals in soils unaffected by anthropogenic pollution are iron oxides and hydroxides derived from parent materials through soil formation processes. Soil magnetic minerals can be used as indicators of environmental factors including soil forming processes, degree of pedogenesis, weathering processes and biological activities. In this study measurements of magnetic susceptibility are used to detect the presence and the concentration of soil magnetic minerals in topsoil and bulk samples in a small cultivated field, which forms a hydrological unit that can be considered to be representative of the rainfed agroecosystems of Mediterranean mountain environments. Additional magnetic studies such as isothermal remanent magnetization (IRM), anhysteretic remanent magnetization (ARM) and thermomagnetic measurements are used to identify and characterize the magnetic mineralogy of soil minerals. The objectives were to analyse the spatial variability of the magnetic parameters to assess whether topographic factors, soil redistribution processes, and soil properties such as soil texture, organic matter and carbonate contents analysed in this study, are related to the spatial distribution pattern of magnetic properties. The medians of mass specific magnetic susceptibility at low frequency (χlf) were 36.0 and 31.1 × 10^-8 m^3 kg^-1 in bulk and topsoil samples respectively. High correlation coefficients were found between the χlf in topsoil and bulk core samples (r = 0.951, p < 0.01). In addition, volumetric magnetic susceptibility was measured in situ in the field (κis) and values varied from 13.3 to 64.0 × 10^-5 SI. High correlation coefficients were found between χlf in topsoil measured in the laboratory and volumetric magnetic susceptibility field measurements (r = 0.894, p < 0.01). The results obtained from magnetic studies such as IRM, ARM and thermomagnetic measurements show the presence of magnetite, which is the predominant magnetic carrier
Advancing the Physics Basis of Quiescent H-mode through Exploration of ITER Relevant Parameters
Solomon, W. M. [PPPL; Burrell, K. H. [General Atomics; Fenstermacher, M. E. [LLNL; Garofalo, A. M. [General Atomics; Grierson, B. A. [PPPL; Loarte, A. [ITER; McKee, G. R. [U of Wisc, Madison; Nazikian, R. [PPPL; Snyder, B. P. [General Atomics
2014-09-01
Recent experiments on DIII-D have overcome a long-standing limitation in accessing quiescent H-mode (QH-mode), a high confinement state of the plasma that does not exhibit the explosive instabilities associated with edge localized modes (ELMs). In the past, QH-mode was associated with low density operation, but has now been extended to high normalized densities compatible with operation envisioned for ITER. Through the use of strong shaping, QH-mode plasmas have been maintained at high densities, both absolute (n_e ≈ 7 × 10^19 m^-3) and normalized Greenwald fraction (n_e/n_G > 0.7). In these plasmas, the pedestal can evolve to very high pressures and current as the density is increased. Calculations of the pedestal height and width from the EPED model are quantitatively consistent with the experimentally observed evolution with density. The comparison of the dependence of the maximum density threshold for QH-mode on plasma shape helps validate the underlying theoretical peeling-ballooning models describing ELM stability. High density QH-mode operation with strong shaping has allowed stable access to a previously predicted regime of very high pedestal dubbed "Super H-mode". In general, QH-mode is found to achieve ELM-stable operation while maintaining adequate impurity exhaust, due to the enhanced impurity transport from an edge harmonic oscillation, thought to be a saturated kink-peeling mode driven by rotation shear. In addition, the impurity confinement time is not affected by rotation, even though the energy confinement time and measured E × B shear are observed to increase at low toroidal rotation. Together with demonstrations of high beta, high confinement and low q95 for many energy confinement times, these results suggest QH-mode as a potentially attractive operating scenario for ITER's Q=10 mission.
Uncertainty Quantification for Optical Model Parameters
Lovell, A E; Sarich, J; Wild, S M
2016-01-01
Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quantified. The purpose of this work is to investigate the uncertainties in nuclear reactions that result from fitting a given model to elastic-scattering data, as well as to study how these uncertainties propagate to the inelastic and transfer channels. We use statistical methods to determine a best fit and create corresponding 95% confidence bands. A simple model of the process is fit to elastic-scattering data and used to predict either inelastic or transfer cross sections. In this initial work, we assume that our model is correct, and the only uncertainties come from the variation of the fit parameters. We study a number of reactions involving neutron and deuteron p...
Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis
Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.
2005-12-01
The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.
Gonzalez, Sergio C; Soto-Centeno, J Angel; Reed, David L
2011-09-19
Predicting the geographic distribution of widespread species through modeling is problematic for several reasons including high rates of omission errors. One potential source of error for modeling widespread species is that subspecies and/or races of species are frequently pooled for analyses, which may mask biologically relevant spatial variation within the distribution of a single widespread species. We contrast a presence-only maximum entropy model for the widely distributed oldfield mouse (Peromyscus polionotus) that includes all available presence locations for this species, with two composite maximum entropy models. The composite models either subdivided the total species distribution into four geographic quadrants or by fifteen subspecies to capture spatially relevant variation in P. polionotus distributions. Despite high Area Under the ROC Curve (AUC) values for all models, the composite species distribution model of P. polionotus generated from individual subspecies models represented the known distribution of the species much better than did the models produced by partitioning data into geographic quadrants or modeling the whole species as a single unit. Because the AUC values failed to describe the differences in the predictability of the three modeling strategies, we suggest using omission curves in addition to AUC values to assess model performance. Dividing the data of a widespread species into biologically relevant partitions greatly increased the performance of our distribution model; therefore, this approach may prove to be quite practical and informative for a wide range of modeling applications.
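The abstract's point that AUC alone can mask real differences in model performance can be made concrete with a small sketch. Both metrics below (rank-based AUC and the omission rate) are standard, but the suitability scores are invented for illustration:

```python
import numpy as np

def auc(pres_scores, bg_scores):
    """Rank-based AUC: probability a presence point outscores a background point."""
    pres = np.asarray(pres_scores, float)
    bg = np.asarray(bg_scores, float)
    wins = (pres[:, None] > bg[None, :]).sum()
    ties = (pres[:, None] == bg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (pres.size * bg.size)

def omission_rate(pres_scores, threshold):
    """Fraction of known presences the model fails to predict at a threshold."""
    return float((np.asarray(pres_scores, float) < threshold).mean())

# Hypothetical suitability scores from a fitted distribution model
presence = [0.9, 0.8, 0.75, 0.4, 0.85]
background = [0.2, 0.5, 0.3, 0.6, 0.1]

print(round(auc(presence, background), 3))   # -> 0.92
print(omission_rate(presence, 0.5))          # -> 0.2
```

Two models can share a high AUC while differing sharply in omission rate, which is why the authors recommend reporting both.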
Parameter Optimisation for the Behaviour of Elastic Models over Time
Mosegaard, Jesper
2004-01-01
Optimisation of parameters for elastic models is essential for comparison or finding equivalent behaviour of elastic models when parameters cannot simply be transferred or converted. This is the case with a large range of commonly used elastic models. In this paper we present a general method that will optimise parameters based on the behaviour of the elastic models over time.
Model Identification of Linear Parameter Varying Aircraft Systems
Fujimore, Atsushi; Ljung, Lennart
2007-01-01
This article presents a parameter estimation of continuous-time polytopic models for a linear parameter varying (LPV) system. The prediction error method of linear time invariant (LTI) models is modified for polytopic models. The modified prediction error method is applied to an LPV aircraft system whose varying parameter is the flight velocity and model parameters are the stability and control derivatives (SCDs). In an identification simulation, the polytopic model is more suitable for expre...
[Calculation of parameters in forest evapotranspiration model].
Wang, Anzhi; Pei, Tiefan
2003-12-01
Forest evapotranspiration is an important component not only in the water balance, but also in the energy balance. Simulating forest evapotranspiration accurately is in great demand for the development of forest hydrology and forest meteorology, and is also a theoretical basis for the management and utilization of water resources and forest ecosystems. Taking the broadleaved Korean pine forest on Changbai Mountain as an example, this paper constructed a mechanistic model for estimating forest evapotranspiration, based on aerodynamic principles and the energy balance equation. Using the data measured by the Routine Meteorological Measurement System and Open-Path Eddy Covariance Measurement System mounted on the tower in the broadleaved Korean pine forest, the parameters displacement height d, stability function for momentum φm, and stability function for heat φh were ascertained. The displacement height of the study site was equal to 17.8 m, close to the mean canopy height, and the functions of φm and φh varying with the gradient Richardson number Ri were constructed.
Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O’Brien, Katherine R.
2017-01-01
When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
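The requirement that parameters be "biologically meaningful" can be sketched by fitting a thermal-response curve whose parameters are Pmax and Topt themselves. The Gaussian form and the data below are hypothetical stand-ins, not one of the twelve published models the paper evaluates:

```python
import numpy as np
from scipy.optimize import curve_fit

def photosynthesis(T, Pmax, Topt, w):
    # Gaussian thermal response: every parameter is directly interpretable
    return Pmax * np.exp(-((T - Topt) / w) ** 2)

# Hypothetical photosynthesis measurements vs temperature (deg C)
T = np.array([15, 20, 25, 28, 30, 33, 36, 40], float)
P = np.array([0.8, 2.8, 5.9, 7.2, 7.4, 6.8, 5.2, 2.8])

popt, pcov = curve_fit(photosynthesis, T, P, p0=[7, 30, 10])
Pmax, Topt, w = popt
perr = np.sqrt(np.diag(pcov))   # standard errors indicate parameter stability
print(f"Pmax={Pmax:.2f}, Topt={Topt:.1f} C")
```

Because Topt and Pmax appear explicitly in the equation, they transfer directly to vulnerability assessments without post-hoc extraction.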
Relevance Theory as model for analysing visual and multimodal communication
C. Forceville
2014-01-01
Elaborating on my earlier work (Forceville 1996: chapter 5, 2005, 2009; see also Yus 2008), I will here sketch how discussions of visual and multimodal discourse can be embedded in a more general theory of communication and cognition: Sperber and Wilson’s Relevance Theory/RT (Sperber and Wilson 1986
G. Iordanou
2011-10-01
Full Text Available This work describes the developed of a lumped parameter model and demonstrates its practical application. The lumped parameter mathematical model is a useful instrument to be used for rapid determination of design dimensions and operational performance of solar collectors at the designing stage. Such model which incorporates data from relevant Computational Fluid Dynamics design and experimental investigations can provide an acceptable accuracy in predictions and can be used as an effective design tool. A computer algorithm validates the lumped parameter model via a window environment program.
Transfer function modeling of damping mechanisms in distributed parameter models
Slater, J. C.; Inman, D. J.
1994-01-01
This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.
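The complex-stiffness idea can be illustrated on a single-degree-of-freedom analogue. The numerical values are assumed for illustration, and this is a hysteretic-damping sketch, not the Golla-Hughes transfer-function formulation itself:

```python
import numpy as np

# Single-DOF analogue of a hysteretically damped structure:
# complex stiffness k*(1 + i*eta) replaces a viscous damping term
m, k, eta = 1.0, 100.0, 0.05        # mass, stiffness, loss factor (assumed)
w = np.linspace(1, 20, 1000)        # driving frequency (rad/s)

H = 1.0 / (k * (1 + 1j * eta) - m * w**2)   # steady-state receptance FRF
amp = np.abs(H)

w_res = w[np.argmax(amp)]
print(f"resonance near {w_res:.2f} rad/s, peak amplitude {amp.max():.3f}")
```

At resonance (w ≈ sqrt(k/m) = 10 rad/s) the response is limited to roughly 1/(k·eta), showing how the loss factor bounds the steady-state amplitude.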
On the modeling of internal parameters in hyperelastic biological materials
Giantesio, Giulia
2016-01-01
This paper concerns the behavior of hyperelastic energies depending on an internal parameter. First, the situation in which the internal parameter is a function of the gradient of the deformation is presented. Second, two models where the parameter describes the activation of skeletal muscle tissue are analyzed. In those models, the activation parameter depends on the strain and it is important to consider the derivative of the parameter with respect to the strain in order to capture the proper behavior of the stress.
Determining extreme parameter correlation in ground water models
Hill, Mary Cole; Østerby, Ole
2003-01-01
In ground water flow system models with hydraulic-head observations but without significant imposed or observed flows, extreme parameter correlation generally exists. As a result, hydraulic conductivity and recharge parameters cannot be uniquely estimated. In complicated problems, such correlation … correlation coefficients, but it required sensitivities that were one to two significant digits less accurate than those that required using parameter correlation coefficients; and (3) both the SVD and parameter correlation coefficients identified extremely correlated parameters better when the parameters…
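The extreme correlation described here can be reproduced in a toy model in which simulated heads depend only on the ratio of recharge to hydraulic conductivity. Both diagnostics sketched below (SVD of the Jacobian, and correlation coefficients derived from it) are standard, though the model itself is invented:

```python
import numpy as np

# Toy head model: h_i = (R / K) * a_i  -- heads depend only on the ratio R/K,
# so hydraulic conductivity K and recharge R are not separately identifiable.
a = np.array([1.0, 2.0, 3.0, 4.0])
K, R = 10.0, 0.5

# Sensitivity (Jacobian) of simulated heads with respect to (K, R)
J = np.column_stack([-R * a / K**2, a / K])

# SVD diagnosis: a vanishing singular value flags a non-identifiable direction
s = np.linalg.svd(J, compute_uv=False)
print("singular values:", s)               # second value is ~0

# Correlation coefficient from (J^T J)^-1, with a tiny regularization
C = np.linalg.inv(J.T @ J + 1e-12 * np.eye(2))
corr = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
print(f"K-R correlation: {corr:.4f}")      # ~ +1 -> extreme correlation
```

Increasing K and R together leaves every simulated head unchanged, which is exactly why the correlation coefficient approaches ±1 and the smallest singular value collapses.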
Sadiqi, Said; Verlaan, Jorrit Jan; Lehr, A. M.; Dvorak, Marcel F.; Kandziora, Frank; Rajasekaran, S.; Schnake, Klaus J.; Vaccaro, Alexander R.; Oner, F. C.
2016-01-01
STUDY DESIGN: International web-based survey. OBJECTIVE: To identify clinical and radiological parameters that spine surgeons consider most relevant when evaluating clinical and functional outcomes of subaxial cervical spine trauma patients. SUMMARY OF BACKGROUND DATA: While an outcome instrument
Model comparisons and genetic and environmental parameter ...
arc
South African Journal of Animal Science 2005, 35 (1) ... Genetic and environmental parameters were estimated for pre- and post-weaning average daily gain ..... and BWT (and medium maternal genetic correlations) indicates that these traits ...
NEW DOCTORAL DEGREE Parameter estimation problem in the Weibull model
Marković, Darija
2009-01-01
In this dissertation we consider the problem of the existence of best parameters in the Weibull model, one of the most widely used statistical models in reliability theory and lifetime data analysis. Particular attention is given to the 3-parameter Weibull model. We have listed some of the many applications of this model. We have described some of the classical methods for estimating parameters of the Weibull model, two graphical methods (Weibull probability plot and hazard plot), and two analyt...
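A minimal sketch of classical 3-parameter Weibull estimation, using SciPy's maximum-likelihood fitter on synthetic lifetimes; the true parameter values below are arbitrary:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
# Synthetic lifetimes from a 3-parameter Weibull (shape 1.5, location 10, scale 50)
data = weibull_min.rvs(1.5, loc=10, scale=50, size=2000, random_state=rng)

# Maximum-likelihood fit of all three parameters (shape, location, scale)
shape, loc, scale = weibull_min.fit(data)
print(f"shape={shape:.2f} loc={loc:.1f} scale={scale:.1f}")
```

The 3-parameter case is exactly where existence of best parameters becomes delicate: the location MLE can sit on the boundary of the data, which is the issue the dissertation studies.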
Extracting the relevant delays in time series modelling
Goutte, Cyril
1997-01-01
In this contribution, we suggest a convenient way to use generalisation error to extract the relevant delays from a time-varying process, i.e. the delays that lead to the best prediction performance. We design a generalisation-based algorithm that takes its inspiration from traditional variable selection, and more precisely stepwise forward selection. The method is compared to other forward selection schemes, as well as to a nonparametric test aimed at estimating the embedding dimension of time series. The final application extends these results to the efficient estimation of FIR filters on some…
Parameter optimization model in electrical discharge machining process
[Anonymous]
2008-01-01
The electrical discharge machining (EDM) process is, at present, still an experience-driven process in which the selected parameters are often far from the optimum, while selecting optimization parameters is costly and time consuming. In this paper, an artificial neural network (ANN) and a genetic algorithm (GA) are used together to establish a parameter optimization model. An ANN model using the Levenberg-Marquardt algorithm is set up to represent the relationship between material removal rate (MRR) and the input parameters, and the GA is used to optimize the parameters, so that optimization results are obtained. The model is shown to be effective, and MRR is improved using the optimized machining parameters.
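A rough sketch of the surrogate-plus-GA idea: the analytic `mrr` function below stands in for the trained ANN surrogate, and the simple genetic algorithm is generic, not the one used in the paper. Parameter names and bounds are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surrogate for material removal rate (stand-in for the ANN):
# MRR rises with discharge current I and peaks at an intermediate pulse-on time T.
def mrr(I, T):
    return I * np.exp(-((T - 50.0) / 30.0) ** 2)

lo, hi = np.array([1.0, 10.0]), np.array([20.0, 200.0])   # parameter bounds

pop = rng.uniform(lo, hi, size=(40, 2))
for _ in range(60):
    fit = mrr(pop[:, 0], pop[:, 1])
    # Tournament selection
    i, j = rng.integers(0, 40, (2, 40))
    parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
    # Blend crossover + Gaussian mutation, clipped to bounds
    alpha = rng.random((40, 1))
    pop = alpha * parents + (1 - alpha) * parents[::-1]
    pop += rng.normal(0, 0.5, pop.shape)
    pop = np.clip(pop, lo, hi)

best = pop[np.argmax(mrr(pop[:, 0], pop[:, 1]))]
print(f"best I={best[0]:.1f}, T={best[1]:.1f}")
```

The GA only ever queries the surrogate, so once the ANN is trained no further machining experiments are needed during optimization.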
Sensitivity of a Shallow-Water Model to Parameters
Kazantsev, Eugene
2011-01-01
An adjoint-based technique is applied to a shallow water model in order to estimate the influence of the model's parameters on the solution. Among the parameters considered are the bottom topography, initial conditions, boundary conditions on rigid boundaries, viscosity coefficients, the Coriolis parameter, and the amplitude of the wind stress tension. Their influence is analyzed from three points of view: 1. flexibility of the model with respect to a parameter, which is related to the lowest value of the cost function that can be obtained in the data assimilation experiment that controls this parameter; 2. the possibility of improving the model by controlling the parameter, i.e. whether the solution with the optimal parameter remains close to observations after the end of control; 3. sensitivity of the model solution to the parameter in the classical sense. That implies the analysis of the sensitivity estimates and their comparison with each other and with the local Lyapunov exponents that characterize the sensitivity of the mode...
Estimation of shape model parameters for 3D surfaces
Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen;
2008-01-01
Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D s...
Compositional modelling of distributed-parameter systems
Maschke, Bernhard; Schaft, van der Arjan; Lamnabhi-Lagarrigue, F.; Loría, A.; Panteley, E.
2005-01-01
The Hamiltonian formulation of distributed-parameter systems has been a challenging research area for quite some time. (A nice introduction, especially with respect to systems stemming from fluid dynamics, can be found in [26], where a historical account is also provided.) The identification of the
Parameter Estimation and Experimental Design in Groundwater Modeling
SUN Ne-zheng
2004-01-01
This paper reviews the latest developments on parameter estimation and experimental design in the field of groundwater modeling. Special considerations are given when the structure of the identified parameter is complex and unknown. A new methodology for constructing useful groundwater models is described, which is based on the quantitative relationships among the complexity of model structure, the identifiability of parameter, the sufficiency of data, and the reliability of model application.
Relevance of animal models to human tardive dyskinesia
Blanchet Pierre J
2012-03-01
Tardive dyskinesia remains an elusive and significant clinical entity that can possibly be understood via experimentation with animal models. We conducted a literature review on tardive dyskinesia modeling. Subchronic antipsychotic drug exposure is a standard approach to model tardive dyskinesia in rodents. Vacuous chewing movements constitute the most common pattern of expression of purposeless oral movements and represent an impermanent response, with individual and strain susceptibility differences. Transgenic mice are also used to address the contribution of adaptive and maladaptive signals induced during antipsychotic drug exposure. An emphasis on non-human primate modeling is proposed, and past experimental observations reviewed in various monkey species. Rodent and primate models are complementary, but the non-human primate model appears more convincingly similar to the human condition and better suited to address therapeutic issues against tardive dyskinesia.
Theoretical Relevance of Neuropsychological Data for Connectionist Modelling
Mauricio Iza
2011-05-01
The symbolic information-processing paradigm in cognitive psychology has met a growing challenge from neural network models over the past two decades. While neuropsychological evidence has been of great utility to theories concerned with information processing, the real question is whether the less rigid connectionist models provide valid, or enough, information concerning complex cognitive structures. In this work, we discuss the theoretical implications that neuropsychological data poses for modelling cognitive systems.
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
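The contrast drawn here between a point estimate and a posterior distribution can be sketched for a simple exponential rate model (not an actual decompression-sickness model); the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.exponential(scale=2.0, size=50)   # synthetic observations, true rate 0.5

# Maximum likelihood: a single point estimate of the rate
mle = 1.0 / data.mean()

# Bayesian: full posterior of the rate on a grid, with a flat prior
lam = np.linspace(0.01, 2.0, 2000)
dlam = lam[1] - lam[0]
loglik = data.size * np.log(lam) - lam * data.sum()
post = np.exp(loglik - loglik.max())
post /= post.sum() * dlam                    # normalize to a density

cdf = np.cumsum(post) * dlam
lo_ci = lam[np.searchsorted(cdf, 0.025)]
hi_ci = lam[np.searchsorted(cdf, 0.975)]
print(f"MLE={mle:.3f}, 95% credible interval=({lo_ci:.3f}, {hi_ci:.3f})")
```

The credible interval directly answers "what is the probability the rate lies in this range", which is the kind of statement the abstract argues the Bayesian approach makes more natural than repeated-trial reasoning about an estimator.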
A review of models relevant to road safety.
Hughes, B P; Newstead, S; Anund, A; Shu, C C; Falkmer, T
2015-01-01
It is estimated that more than 1.2 million people die worldwide as a result of road traffic crashes and some 50 million are injured per annum. At present some Western countries' road safety strategies and countermeasures claim to have developed into 'Safe Systems' models to address the effects of road related crashes. Well-constructed models encourage effective strategies to improve road safety. This review aimed to identify and summarise concise descriptions, or 'models' of safety. The review covers information from a wide variety of fields and contexts including transport, occupational safety, food industry, education, construction and health. The information from 2620 candidate references were selected and summarised in 121 examples of different types of model and contents. The language of safety models and systems was found to be inconsistent. Each model provided additional information regarding style, purpose, complexity and diversity. In total, seven types of models were identified. The categorisation of models was done on a high level with a variation of details in each group and without a complete, simple and rational description. The models identified in this review are likely to be adaptable to road safety and some of them have previously been used. None of systems theory, safety management systems, the risk management approach, or safety culture was commonly or thoroughly applied to road safety. It is concluded that these approaches have the potential to reduce road trauma.
Relevance of a Managerial Decision-Model to Educational Administration.
Lundin, Edward.; Welty, Gordon
The rational model of classical economic theory assumes that the decision maker has complete information on alternatives and consequences, and that he chooses the alternative that maximizes expected utility. This model does not allow for constraints placed on the decision maker resulting from lack of information, organizational pressures,…
Argaud Jean-Philippe
2015-01-01
The goal of this study is to determine the amount of information required to obtain a relevant parameter optimisation by data assimilation for physical models in neutronic diffusion calculations, and to determine which information best reaches the optimum accuracy at the cheapest cost. To evaluate the quality of the optimisation, we study the covariance matrix that represents the accuracy of the optimised parameter. This matrix is a classical output of the data assimilation procedure, and it is the main information about the accuracy and sensitivity of the optimal parameter determination. From these studies, we present some results collected from the neutronic simulation of nuclear power plants. On the basis of the configuration studies, it has been shown that with data assimilation we can determine a global strategy to optimise the quality of the result with respect to the amount of information provided. The consequence of this is a cost reduction in terms of measurement and/or computing time with respect to the basic approach.
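For a linear-Gaussian assimilation problem the posterior covariance used here as the quality measure has a closed form, which makes the information-versus-cost trade-off easy to sketch; the observation operators below are invented for illustration:

```python
import numpy as np

# Linear-Gaussian data assimilation: the posterior (analysis) covariance
#   A = (B^-1 + H^T R^-1 H)^-1
# quantifies how much a set of measurements improves the parameter estimate.
B = np.diag([1.0, 1.0])                  # prior (background) covariance, 2 parameters

def posterior_cov(H, R):
    return np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)

H1 = np.array([[1.0, 0.0]])              # one observation, sensitive to parameter 1
H2 = np.array([[1.0, 0.0], [1.0, 1.0]])  # adds a second, mixed observation
R1, R2 = np.eye(1) * 0.1, np.eye(2) * 0.1

A1, A2 = posterior_cov(H1, R1), posterior_cov(H2, R2)
print(np.diag(A1), np.diag(A2))          # variances shrink as information is added
```

Comparing the diagonals of A for candidate measurement sets is exactly the kind of criterion that lets one trade accuracy against measurement cost before any experiment is run.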
Extraction of the relevant delays for temporal modeling
Goutte, Cyril
2000-01-01
When modeling temporal processes, just like in pattern recognition, selecting the optimal number of inputs is of central concern. We take advantage of specific features of temporal modeling to propose a novel method for extracting the inputs that attempts to yield the best predictive performance. The method relies on the use of estimators of the generalization error to assess the predictive performance of the model. This technique is first applied to time series processing, where we perform a number of experiments on synthetic data, as well as a real-life dataset, and compare the results…
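The delay-extraction idea can be sketched as greedy forward selection of delays scored by held-out prediction error. The AR process below is synthetic, with delays 1 and 3 made relevant by construction; this is an illustration of the general approach, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic process in which only delays 1 and 3 carry information
x = np.zeros(600)
for t in range(3, 600):
    x[t] = 0.6 * x[t - 1] - 0.4 * x[t - 3] + rng.normal(0, 0.1)

def val_error(delays):
    """One-step linear prediction error on a held-out second half."""
    d = max(delays)
    X = np.column_stack([x[d - k:-k] for k in delays])
    y = x[d:]
    n = len(y) // 2
    w, *_ = np.linalg.lstsq(X[:n], y[:n], rcond=None)
    return np.mean((y[n:] - X[n:] @ w) ** 2)

# Greedy forward selection over candidate delays 1..6
chosen, candidates = [], list(range(1, 7))
for _ in range(2):
    best = min(candidates, key=lambda k: val_error(chosen + [k]))
    chosen.append(best)
    candidates.remove(best)
print("selected delays:", sorted(chosen))   # recovers [1, 3]
```

Scoring candidate delays by held-out error rather than in-sample fit is what keeps the selection honest about predictive performance.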
Parlitz, Ulrich; Luther, Stefan
2015-01-01
Features of the Jacobian matrix of the delay coordinates map are exploited for quantifying the robustness and reliability of state and parameter estimations for a given dynamical model using an observed time series. Relevant concepts of this approach are introduced and illustrated for discrete and continuous time systems employing a filtered Hénon map and a Rössler system.
M.D. de Pooter (Michiel); F. Ravazzolo (Francesco); D.J.C. van Dijk (Dick)
2007-01-01
We forecast the term structure of U.S. Treasury zero-coupon bond yields by analyzing a range of models that have been used in the literature. We assess the relevance of parameter uncertainty by examining the added value of using Bayesian inference compared to frequentist estimation
Parameter and Uncertainty Estimation in Groundwater Modelling
Jensen, Jacob Birk
The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions, and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology…
Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean
Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.
2011-12-01
Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM) varying parameters that affect climate sensitivity, vertical ocean mixing, and effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use Markov Chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling
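As a toy illustration of the calibration workflow this abstract describes (emulator plus MCMC), the sketch below runs a Metropolis sampler over a single "climate sensitivity" parameter against one synthetic observation. The linear emulator, the observation, and all numeric values are invented stand-ins, not values from the UVic ESCM study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a Gaussian-process emulator: a cheap closed-form map from
# the uncertain parameter (a "climate sensitivity" S) to the observable.
def emulator(S):
    return 0.6 * S

obs, sigma = 1.8, 0.2          # synthetic observation and its error

def log_likelihood(S):
    return -0.5 * ((emulator(S) - obs) / sigma) ** 2

# Metropolis MCMC over S with a uniform prior on [0, 10]
def metropolis(n_steps, step=0.5):
    S, chain = 3.0, []
    for _ in range(n_steps):
        prop = S + rng.normal(0.0, step)
        # proposals outside the prior support are always rejected
        if 0.0 <= prop <= 10.0 and \
           np.log(rng.uniform()) < log_likelihood(prop) - log_likelihood(S):
            S = prop
        chain.append(S)
    return np.array(chain)

chain = metropolis(5000)
print(round(chain[1000:].mean(), 1))   # posterior mean near obs / 0.6 = 3.0
```

In the real study the emulator interpolates expensive model runs over several parameters at once; the one-parameter version only shows the shape of the loop.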
Parameter redundancy in discrete state‐space and integrated models
Cole, Diana J.; McCrea, Rachel S.
2016-01-01
Discrete state‐space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state‐space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state‐space models using discrete analogues of methods for continuous state‐space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. PMID:27362826
An automatic and effective parameter optimization method for model tuning
T. Zhang
2015-11-01
…simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
Ternary interaction parameters in calphad solution models
Eleno, Luiz T.F., E-mail: luizeleno@usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Instituto de Fisica; Schön, Claudio G., E-mail: schoen@usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Computational Materials Science Laboratory. Department of Metallurgical and Materials Engineering
2014-07-01
For random, diluted, multicomponent solutions, the excess chemical potentials can be expanded in power series of the composition, with coefficients that are pressure- and temperature-dependent. For a binary system, this approach is equivalent to using polynomial truncated expansions, such as the Redlich-Kister series for describing integral thermodynamic quantities. For ternary systems, an equivalent expansion of the excess chemical potentials clearly justifies the inclusion of ternary interaction parameters, which arise naturally in the form of correction terms in higher-order power expansions. To demonstrate this, we carry out truncated polynomial expansions of the excess chemical potential up to the sixth power of the composition variables. (author)
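The truncated polynomial expansions this abstract refers to are easy to state concretely for a binary system via the Redlich-Kister series it mentions; the coefficients below are arbitrary example values, not fitted thermodynamic data.

```python
# Redlich-Kister series for the excess Gibbs energy of a binary A-B
# solution: G_ex = xA * xB * sum_k L_k * (xA - xB)**k
def redlich_kister(x_a, L):
    x_b = 1.0 - x_a
    return x_a * x_b * sum(Lk * (x_a - x_b) ** k for k, Lk in enumerate(L))

# One-term truncation recovers the regular-solution model L0 * xA * xB
assert redlich_kister(0.5, [4000.0]) == 1000.0

# With only even-order terms the expansion is symmetric about x = 0.5
L = [4000.0, 0.0, -1500.0]
assert abs(redlich_kister(0.3, L) - redlich_kister(0.7, L)) < 1e-9
print(round(redlich_kister(0.3, L), 1))
```

Odd-order terms break this symmetry, which is how the series accommodates asymmetric solution behavior; the ternary interaction parameters of the paper arise as analogous correction terms at higher order.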
Macroscale hydrologic modeling of ecologically relevant flow metrics
Seth J. Wenger; Charles H. Luce; Alan F. Hamlet; Daniel J. Isaak; Helen M. Neville
2010-01-01
Stream hydrology strongly affects the structure of aquatic communities. Changes to air temperature and precipitation driven by increased greenhouse gas concentrations are shifting the timing and volume of streamflows, potentially affecting these communities. The variable infiltration capacity (VIC) macroscale hydrologic model has been employed at regional scales to describe…
Parameter estimation and error analysis in environmental modeling and computation
Kalmaz, E. E.
1986-01-01
A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.
Jonathan R Karr
2015-05-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Parameter estimation of hydrologic models using data assimilation
Kaheil, Y. H.
2005-12-01
The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper presents a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined. The SVM model has three parameters. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.
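A loose, hypothetical sketch of the bound-narrowing idea described above (keeping a well-performing parameter *region* rather than a single best set) on a one-parameter toy rainfall-runoff model; the model, the retained quantile, and the iteration count are assumptions for illustration, not the LoBaRE algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic calibration data for a one-parameter toy runoff model y = k * rain
rain = rng.uniform(0.0, 10.0, 50)
flow_obs = 0.65 * rain + rng.normal(0.0, 0.05, 50)

def rmse(k):
    return np.sqrt(np.mean((k * rain - flow_obs) ** 2))

# Iteratively narrow the sampling bounds around the best-performing
# *region* of parameter space (not a single best set)
lo, hi = 0.0, 2.0
for _ in range(6):
    samples = rng.uniform(lo, hi, 200)
    errors = np.array([rmse(k) for k in samples])
    good = samples[errors <= np.quantile(errors, 0.2)]   # keep top 20 %
    lo, hi = good.min(), good.max()

print(round((lo + hi) / 2, 2))   # interval centre converges near 0.65
```

Each pass spends its 200 model evaluations inside an ever-smaller region, which is the source of the fast convergence the abstract claims for the localized approach.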
GIS-Based Hydrogeological-Parameter Modeling
(no author listed)
2000-01-01
A regression model is proposed to relate the variation of water well depth with topographic properties (area and slope), the variation of hydraulic conductivity and vertical decay factor. The implementation of this model in a GIS environment (ARC/INFO) based on known water data and DEM is used to estimate the variation of hydraulic conductivity and decay factor of different lithology units in a watershed context.
Safety-relevant mode confusions: modelling and reducing them
Bredereke, Jan [Universitaet Bremen, FB 3, P.O. Box 330 440, D-28334 Bremen (Germany)]. E-mail: brederek@tzi.de; Lankenau, Axel [Universitaet Bremen, FB 3, P.O. Box 330 440, D-28334 Bremen (Germany)
2005-06-01
Mode confusions are a significant safety concern in safety-critical systems, for example in aircraft. A mode confusion occurs when the observed behaviour of a technical system is out of sync with the user's mental model of its behaviour. But the notion is described only informally in the literature. We present a rigorous way of modelling the user and the machine in a shared-control system. This enables us to propose precise definitions of 'mode' and 'mode confusion' for safety-critical systems. We then validate these definitions against the informal notions in the literature. A new classification of mode confusions by cause leads to a number of design recommendations for shared-control systems. These help in avoiding mode confusion problems. Our approach supports the automated detection of remaining mode confusion problems. We apply our approach practically to a wheelchair robot.
Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model
Åberg, Andreas; Widd, Anders; Abildskov, Jens;
2016-01-01
A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests, or p...
An ecologically relevant guinea pig model of fetal behavior.
Bellinger, S A; Lucas, D; Kleven, G A
2015-04-15
The laboratory guinea pig, Cavia porcellus, shares with humans many similarities during pregnancy and prenatal development, including precocial offspring and social dependence. These similarities suggest the guinea pig as a promising model of fetal behavioral development as well. Using innovative methods of behavioral acclimation, fetal offspring of female IAF hairless guinea pigs time mated to NIH multicolored Hartley males were observed longitudinally without restraint using noninvasive ultrasound at weekly intervals across the 10 week gestation. To ensure that the ultrasound procedure did not cause significant stress, salivary cortisol was collected both before and after each observation. Measures of fetal spontaneous movement and behavioral state were quantified from video recordings from week 3 through the last week before birth. Results from prenatal quantification of Interlimb Movement Synchrony and state organization reveal guinea pig fetal development to be strikingly similar to that previously reported for other rodents and preterm human infants. Salivary cortisol readings taken before and after sonography did not differ at any observation time point. These results suggest this model holds translational promise for studying the prenatal mechanisms of neurobehavioral development, including those that may result from adverse events. Because the guinea pig is a highly social mammal with a wide range of socially oriented vocalizations, this model may also have utility for studying the prenatal origins and trajectories of developmental disabilities with social-emotional components, such as autism.
Mirror symmetry for two parameter models, 2
Candelas, Philip; Font, Anamaria; Katz, Sheldon; Morrison, David R.
1994-01-01
We describe in detail the space of the two Kähler parameters of the Calabi-Yau manifold P_4^{(1,1,1,6,9)}[18] by exploiting mirror symmetry. The large complex structure limit of the mirror, which corresponds to the classical large radius limit, is found by studying the monodromy of the periods about the discriminant locus, the boundary of the moduli space corresponding to singular Calabi-Yau manifolds. A symplectic basis of periods is found and the action of the Sp(6,Z) generators of the modular group is determined. From the mirror map we compute the instanton expansion of the Yukawa couplings and the generalized N=2 index, arriving at the numbers of instantons of genus zero and genus one of each degree. We also investigate an SL(2,Z) symmetry that acts on a boundary of the moduli space.
Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…
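For context, the two-parameter logistic model named in this abstract has a simple closed form, and ability parameters can be drawn by MCMC. The sketch below uses a plain Metropolis step for one examinee's ability (a stand-in for the full Gibbs scheme, which would also update item parameters); the items and responses are made up.

```python
import math
import random

random.seed(3)

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that a person
    of ability theta answers an item of discrimination a and
    difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical items (a, b) and one examinee's responses
items = [(1.0, -1.0), (1.2, -0.5), (0.8, 0.0), (1.5, 0.5), (1.0, 1.0)]
responses = [1, 1, 1, 0, 0]

def log_post(theta):
    lp = -0.5 * theta * theta                  # standard normal prior
    for (a, b), y in zip(items, responses):
        p = p_correct(theta, a, b)
        lp += math.log(p if y else 1.0 - p)
    return lp

# Metropolis draws from the ability posterior (one coordinate of what a
# full Gibbs sampler would update in turn)
theta, draws = 0.0, []
for _ in range(4000):
    prop = theta + random.gauss(0.0, 0.8)
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    draws.append(theta)

mean_theta = sum(draws[500:]) / len(draws[500:])
```

With correct answers on the easy items and errors on the hard ones, the posterior mean ability lands between the middle difficulties, as the model structure dictates.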
On linear models and parameter identifiability in experimental biological systems.
Lamberton, Timothy O; Condon, Nicholas D; Stow, Jennifer L; Hamilton, Nicholas A
2014-10-07
A key problem in the biological sciences is to be able to reliably estimate model parameters from experimental data. This is the well-known problem of parameter identifiability. Here, methods are developed for biologists and other modelers to design optimal experiments to ensure parameter identifiability at a structural level. The main results of the paper are a general methodology for extracting parameters of linear models from an experimentally measured scalar function, the transfer function, and a framework for the identifiability analysis of complex model structures using linked models. Linked models are composed by letting the output of one model become the input to another model which is then experimentally measured. The linked model framework is shown to be applicable to designing experiments to identify the measured sub-model and recover the input from the unmeasured sub-model, even in cases where the unmeasured sub-model is not identifiable. Applications for a set of common model features are demonstrated, and the results combined in an example application to a real-world experimental system. These applications emphasize the insight into answering "where to measure" and "which experimental scheme" questions provided by both the parameter extraction methodology and the linked model framework. The aim is to demonstrate the tools' usefulness in guiding experimental design to maximize the parameter information obtained, based on the model structure.
Spatial variability of the parameters of a semi-distributed hydrological model
de Lavenne, Alban; Thirel, Guillaume; Andréassian, Vazken; Perrin, Charles; Ramos, Maria-Helena
2016-05-01
Ideally, semi-distributed hydrologic models should provide better streamflow simulations than lumped models, along with spatially relevant water resources management solutions. However, the spatial distribution of model parameters raises issues related to the calibration strategy and to the identifiability of the parameters. To analyse these issues, we propose to base the evaluation of a semi-distributed model not only on its performance at streamflow gauging stations, but also on the spatial and temporal pattern of the optimised values of its parameters. We implemented calibration over 21 rolling periods and 64 catchments, and we analysed how well each parameter is identified in time and space. Performance and parameter identifiability are analysed in comparison with the calibration of the lumped version of the same model. We show that the semi-distributed model has more difficulty identifying stable optimal parameter sets. The main difficulty lies in the identification of the parameters responsible for the closure of the water balance (i.e., for the particular model investigated, the intercatchment groundwater flow parameter).
CHAMP: Changepoint Detection Using Approximate Model Parameters
2014-06-01
Changepoint positions are modelled as a Markov chain in which the transition probabilities are defined by the time since the last changepoint: p(τ_{i+1} = t | τ_i = s) = g(t − s). The algorithms are experimentally verified using artificially generated data, and the results are compared to those of Fearnhead and Liu. Related work includes Hidden Markov Models (HMMs). The method takes as input a truncation length α and a maximum number of particles M, and outputs the Viterbi path of changepoint times and models.
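The gap-distribution formulation p(τ_{i+1} = t | τ_i = s) = g(t − s) is straightforward to simulate. The sketch below assumes a geometric g, which is a choice made here purely for illustration; the segment-length distribution used in the actual method may differ.

```python
import random

random.seed(4)

# Simulate changepoint positions as a Markov chain whose transition
# probability depends only on the gap since the last changepoint:
#   p(tau_{i+1} = t | tau_i = s) = g(t - s)
# Here g is geometric with parameter p -- an assumption for the sketch.
def sample_changepoints(horizon, p=0.05):
    taus, t = [], 0
    while True:
        gap = 1
        while random.random() >= p:   # geometric: P(gap=k) = (1-p)**(k-1) * p
            gap += 1
        t += gap
        if t >= horizon:
            return taus
        taus.append(t)

cps = sample_changepoints(horizon=1000)
gaps = [b - a for a, b in zip([0] + cps, cps)]
print(len(cps), round(sum(gaps) / len(gaps), 1))   # mean gap near 1/p = 20
```

Because the chain depends only on the elapsed gap, inference can condition on segment lengths directly, which is what makes the particle-based Viterbi search over changepoint times tractable.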
Relevant Aspects in Modeling of Micro-injection Molding
Nguyen-Chung, Tham; Jüttner, Gábor; Pham, Tung; Mennig, Günter
2008-07-01
Growing demands in the manufacturing of micro and precision components in plastics require new concepts for molding machines and micro molds on the one hand. On the other hand, a deeper understanding of the filling and solidification process in a micro mold is indispensable. In this work, the filling process of a micro spiral was analyzed by modeling the compressible flow using pressure dependent viscosity and adjusted heat transfer coefficients. At the same time, experimental filling studies were carried out on an accurately controlled micro-injection molding machine. Based on the relationship between the injection pressure and the filling degree, essential factors for the quality of the simulation can be identified. It can be shown that the flow behavior of the melt in a micro cavity of high aspect ratio is extremely dependent on the melt compressibility in the injection cylinder which needs to be considered in the simulation in order to predict an accurate flow rate. Moreover, the heat transfer coefficients between the melt and the mold wall vary significantly when changing cavity thickness and processing conditions. It is believed that a pressure dependent model for the heat transfer coefficient would be able to improve the quality of the process simulation.
WINKLER'S SINGLE-PARAMETER SUBGRADE MODEL FROM ...
[3, 9]. However, mainly due to the simplicity of Winkler's model in practical applications and … this case, the coefficient B takes the dimension of a … In plane-strain problems, the assumption of … loaded circular region; s is the radial coordinate.
Generalized Potts-Models and their Relevance for Gauge Theories
Andreas Wipf
2007-01-01
We study the Polyakov loop dynamics originating from finite-temperature Yang-Mills theory. The effective actions contain center-symmetric terms involving powers of the Polyakov loop, each with its own coupling. For a subclass with two couplings we perform a detailed analysis of the statistical mechanics involved. To this end we employ a modified mean field approximation and Monte Carlo simulations based on a novel cluster algorithm. We find excellent agreement of both approaches. The phase diagram exhibits both first and second order transitions between symmetric, ferromagnetic and antiferromagnetic phases with phase boundaries merging at three tricritical points. The critical exponents ν and γ at the continuous transition between symmetric and antiferromagnetic phases are the same as for the 3-state spin Potts model.
Improved Methodology for Parameter Inference in Nonlinear, Hydrologic Regression Models
Bates, Bryson C.
1992-01-01
A new method is developed for the construction of reliable marginal confidence intervals and joint confidence regions for the parameters of nonlinear, hydrologic regression models. A parameter power transformation is combined with measures of the asymptotic bias and asymptotic skewness of maximum likelihood estimators to determine the transformation constants which cause the bias or skewness to vanish. These optimized constants are used to construct confidence intervals and regions for the transformed model parameters using linear regression theory. The resulting confidence intervals and regions can be easily mapped into the original parameter space to give close approximations to likelihood method confidence intervals and regions for the model parameters. Unlike many other approaches to parameter transformation, the procedure does not use a grid search to find the optimal transformation constants. An example involving the fitting of the Michaelis-Menten model to velocity-discharge data from an Australian gauging station is used to illustrate the usefulness of the methodology.
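To make the closing example concrete: the Michaelis-Menten velocity-discharge form can be fitted to synthetic data in a few lines. The crude grid search below is only a stand-in for the paper's transformed-parameter likelihood machinery, and all numeric values are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Michaelis-Menten velocity-discharge relation: v = V * d / (K + d)
V_true, K_true = 2.0, 5.0
d = np.linspace(0.5, 30.0, 40)
v = V_true * d / (K_true + d) + rng.normal(0.0, 0.02, d.size)

# Crude least-squares fit by grid search -- a stand-in for the paper's
# transformed-parameter approach; grid bounds are arbitrary choices
Vs = np.linspace(0.5, 4.0, 201)
Ks = np.linspace(0.5, 15.0, 201)
sse = np.array([[np.sum((Vi * d / (Ki + d) - v) ** 2) for Ki in Ks] for Vi in Vs])
i, j = np.unravel_index(np.argmin(sse), sse.shape)
print(round(Vs[i], 2), round(Ks[j], 2))   # estimates near (2.0, 5.0)
```

The paper's contribution is what happens after such a fit: choosing a power transformation of (V, K) that removes the bias and skewness of the estimators so that linear-theory confidence regions become accurate.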
A simulation of water pollution model parameter estimation
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
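A reduced sketch of the simulate-then-estimate loop described above, using a 1-D instantaneous-release diffusion profile in place of the paper's 2-D shear-diffusion model; the parameter values, noise level, and grid-search estimator are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(6)

# 1-D instantaneous-release diffusion profile (a simplification of the
# paper's 2-D shear-diffusion model):
#   c(x) = M / sqrt(4*pi*D*t) * exp(-x^2 / (4*D*t))
def profile(M, D, x, t):
    return M / np.sqrt(4.0 * np.pi * D * t) * np.exp(-x**2 / (4.0 * D * t))

M_true, D_true, t = 10.0, 2.0, 1.0
x = np.linspace(-10.0, 10.0, 81)

# Simulated remote-sensed data: model output plus Gaussian sensor noise
c_obs = profile(M_true, D_true, x, t) + rng.normal(0.0, 0.01, x.size)

# Batch least-squares estimate of (M, D) by grid minimisation
Ms = np.linspace(5.0, 15.0, 101)
Ds = np.linspace(0.5, 5.0, 101)
M_hat, D_hat = min(
    ((Mi, Di) for Mi in Ms for Di in Ds),
    key=lambda p: np.sum((profile(p[0], p[1], x, t) - c_obs) ** 2),
)
print(M_hat, round(D_hat, 2))   # estimates close to (10.0, 2.0)
```

Repeating the experiment while varying the spatial resolution or the number of sample points is how the accuracy trade-offs mentioned in the abstract (resolution, sensor array size, reading locations) can be explored.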
On retrial queueing model with fuzzy parameters
Ke, Jau-Chuan; Huang, Hsin-I.; Lin, Chuen-Horng
2007-01-01
This work constructs the membership functions of the system characteristics of a retrial queueing model with fuzzy customer arrival, retrial and service rates. The α-cut approach is used to transform a fuzzy retrial-queue into a family of conventional crisp retrial queues in this context. By means of the membership functions of the system characteristics, a set of parametric non-linear programs is developed to describe the family of crisp retrial queues. A numerical example is solved successfully to illustrate the validity of the proposed approach. Because the system characteristics are expressed and governed by the membership functions, more information is provided for use by management. By extending this model to the fuzzy environment, fuzzy retrial-queue is represented more accurately and analytic results are more useful for system designers and practitioners.
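The α-cut construction described above maps a fuzzy parameter to a family of crisp intervals. The sketch below applies it to the mean-queue-length formula of a plain M/M/1 queue instead of the paper's retrial queue, purely to keep the example short; the triangular fuzzy arrival rate is a made-up example.

```python
# Alpha-cut of a triangular fuzzy number (l, m, u): the crisp interval
# [l + alpha*(m - l), u - alpha*(u - m)], for alpha in [0, 1].
def alpha_cut(tri, alpha):
    l, m, u = tri
    return (l + alpha * (m - l), u - alpha * (u - m))

# Crisp system characteristic: mean number in an M/M/1 system (used here
# instead of the paper's retrial queue, for brevity)
def mean_in_system(lam, mu):
    rho = lam / mu
    return rho / (1.0 - rho)

fuzzy_lambda = (2.0, 3.0, 4.0)   # triangular fuzzy arrival rate
mu = 6.0                          # crisp service rate

# L is increasing in lambda, so each alpha-cut interval maps monotonically
for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(fuzzy_lambda, alpha)
    print(alpha, round(mean_in_system(lo, mu), 3), round(mean_in_system(hi, mu), 3))
```

Stacking the output intervals over all α levels reconstructs the membership function of the system characteristic, which is exactly the extra information for management that the abstract highlights.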
Solar parameters for modeling interplanetary background
Bzowski, M; Tokumaru, M; Fujiki, K; Quemerais, E; Lallement, R; Ferron, S; Bochsler, P; McComas, D J
2011-01-01
The goal of the Fully Online Datacenter of Ultraviolet Emissions (FONDUE) Working Team of the International Space Science Institute in Bern, Switzerland, was to establish a common calibration of various UV and EUV heliospheric observations, both spectroscopic and photometric. Realization of this goal required an up-to-date model of spatial distribution of neutral interstellar hydrogen in the heliosphere, and to that end, a credible model of the radiation pressure and ionization processes was needed. This chapter describes the solar factors shaping the distribution of neutral interstellar H in the heliosphere. Presented are the solar Lyman-alpha flux and the solar Lyman-alpha resonant radiation pressure force acting on neutral H atoms in the heliosphere, solar EUV radiation and the photoionization of heliospheric hydrogen, and their evolution in time and the still hypothetical variation with heliolatitude. Further, solar wind and its evolution with solar activity is presented in the context of the charge excha...
Linear Sigma Models With Strongly Coupled Phases -- One Parameter Models
Hori, Kentaro
2013-01-01
We systematically construct a class of two-dimensional (2,2) supersymmetric gauged linear sigma models with phases in which a continuous subgroup of the gauge group is totally unbroken. We study some of their properties by employing a recently developed technique. The focus of the present work is on models with one Kähler parameter. The models include those corresponding to Calabi-Yau threefolds, extending three examples found earlier by a few more, as well as Calabi-Yau manifolds of other dimensions and non-Calabi-Yau manifolds. The construction leads to predictions of equivalences of D-brane categories, systematically extending earlier examples. There is another type of surprise. Two distinct superconformal field theories corresponding to Calabi-Yau threefolds with different Hodge numbers, h^{2,1}=23 versus h^{2,1}=59, have exactly the same quantum Kähler moduli space. The strong-weak duality plays a crucial role in confirming this, and also is useful in the actual computation of the metric on t…
Dust from AGBs: relevant factors and modelling uncertainties
Ventura, P; Schneider, R; Di Criscienzo, M; Rossi, C; La Franca, F; Gallerani, S; Valiante, R
2014-01-01
The dust formation process in the winds of Asymptotic Giant Branch stars is discussed, based on full evolutionary models of stars with mass in the range 1 M⊙ ≤ M ≤ 8 M⊙ and metallicities 0.001 < Z < 0.008. Dust grains are assumed to form in an isotropically expanding wind, by growth of pre-existing seed nuclei. Convection, for what concerns the treatment of convective borders and the efficiency of the schematization adopted, turns out to be the physical ingredient used to calculate the evolutionary sequences with the highest impact on the results obtained. Low-mass stars with M ≤ 3 M⊙ produce carbon-type dust with also traces of silicon carbide. The mass of solid carbon formed, fairly independently of metallicity, ranges from a few 10⁻⁴ M⊙, for stars of initial mass 1–1.5 M⊙, to ~10⁻² M⊙ for M ~ 2–2.5 M⊙; the size of dust particles is in the range 0.1 μm ≤ a_C ≤ 0.2 μm. On the contrary, the production…
Parameter identification in tidal models with uncertain boundaries
Bagchi, Arunabha; ten Brummelhuis, Paul
1994-01-01
In this paper we consider a simultaneous state and parameter estimation procedure for tidal models with random inputs, which is formulated as a minimization problem. It is assumed that some model parameters are unknown and that the random noise inputs only act upon the open boundaries.
Exploring the interdependencies between parameters in a material model.
Silling, Stewart Andrew; Fermen-Coker, Muge
2014-01-01
A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.
An Alternative Three-Parameter Logistic Item Response Model.
Pashley, Peter J.
Birnbaum's three-parameter logistic function has become a common basis for item response theory modeling, especially within situations where significant guessing behavior is evident. This model is formed through a linear transformation of the two-parameter logistic function in order to facilitate a lower asymptote. This paper discusses an…
A compact cyclic plasticity model with parameter evolution
Krenk, Steen; Tidemann, L.
2017-01-01
It is demonstrated that this simple formulation enables very accurate representation of experimental results. An extension of the theory to account for model parameter evolution effects, e.g. in the form of a changing yield level, is included in the form of extended evolution equations for the model parameters…
Regionalization of SWAT Model Parameters for Use in Ungauged Watersheds
Indrajeet Chaubey
2010-11-01
There has been a steady shift towards modeling and model-based approaches as primary methods of assessing watershed response to hydrologic inputs and land management, and of quantifying watershed-wide best management practice (BMP) effectiveness. Watershed models often require some degree of calibration and validation to achieve adequate watershed, and therefore BMP, representation. This is, however, only possible for gauged watersheds. There are many watersheds for which very little or no monitoring data are available, raising the question of whether it is possible to extend and/or generalize model parameters obtained through calibration of gauged watersheds to ungauged watersheds within the same region. This study explored the possibility of developing regionalized model parameter sets for use in ungauged watersheds. The study evaluated two regionalization methods, global averaging and regression-based parameters, on the SWAT model using data from priority watersheds in Arkansas. Resulting parameters were tested and model performance determined on three gauged watersheds. Nash-Sutcliffe efficiencies (NS) for stream flow obtained using regression-based parameters (0.53–0.83) compared well with corresponding values obtained through model calibration (0.45–0.90). Model performance obtained using globally averaged parameter values was also generally acceptable (0.4 ≤ NS ≤ 0.75). Results from this study indicate that regionalized parameter sets for the SWAT model can be obtained and used for making satisfactory hydrologic response predictions in ungauged watersheds.
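The Nash-Sutcliffe efficiency used as the performance score in this abstract has a one-line definition, sketched here with made-up flow values:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance of the
    observations. NS = 1 is a perfect fit; NS <= 0 means the simulation
    is no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot

flows = [1.0, 2.0, 3.0, 4.0, 5.0]                # made-up observed flows
assert nash_sutcliffe(flows, flows) == 1.0       # perfect simulation
assert nash_sutcliffe(flows, [3.0] * 5) == 0.0   # mean-only baseline
print(round(nash_sutcliffe(flows, [1.1, 2.1, 2.9, 4.2, 4.8]), 2))
```

The mean-only baseline at NS = 0 is what makes the reported ranges (e.g. 0.4 ≤ NS ≤ 0.75 for globally averaged parameters) interpretable: any positive value beats simply predicting the average flow.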
Imaging of a clinically relevant stroke model: glucose hypermetabolism revisited.
Arnberg, Fabian; Grafström, Jonas; Lundberg, Johan; Nikkhou-Aski, Sahar; Little, Philip; Damberg, Peter; Mitsios, Nicholas; Mulder, Jan; Lu, Li; Söderman, Michael; Stone-Elander, Sharon; Holmin, Staffan
2015-03-01
Ischemic stroke has been shown to cause hypermetabolism of glucose in the ischemic penumbra. Experimental and clinical data indicate that infarct-related systemic hyperglycemia is a potential therapeutic target in acute stroke. However, clinical studies aiming for glucose control in acute stroke have neither improved functional outcome nor reduced mortality. Thus, further studies on glucose metabolism in the ischemic brain are warranted. We used a rat model of stroke that preserves collateral flow. The animals were analyzed by [2-(18)F]-2-fluoro-2-deoxy-d-glucose positron emission tomography or magnetic resonance imaging during 90-minute occlusion of the middle cerebral artery and during 60 minutes after reperfusion. Results were correlated to magnetic resonance imaging of cerebral blood flow, diffusion of water, lactate formation, and histological data on cell death and blood-brain barrier breakdown. We detected an increased [2-(18)F]-2-fluoro-2-deoxy-d-glucose uptake within ischemic regions succumbing to infarction and in the peri-infarct region. Magnetic resonance imaging revealed impairment of blood flow to ischemic levels in the infarct and a reduction of cerebral blood flow in the peri-infarct region. Magnetic resonance spectroscopy revealed lactate in the ischemic region and absence of lactate in the peri-infarct region. Immunohistochemical analyses revealed apoptosis and blood-brain barrier breakdown within the infarct. The increased uptake of [2-(18)F]-2-fluoro-2-deoxy-d-glucose in cerebral ischemia most likely reflects hypermetabolism of glucose meeting increased energy needs of ischemic and hypoperfused brain tissue, and it occurs under both anaerobic and aerobic conditions measured by local lactate production. Infarct-related systemic hyperglycemia could serve to facilitate glucose supply to the ischemic brain. Glycemic control by insulin treatment could negatively influence this mechanism. © 2015 American Heart Association, Inc.
NWP model forecast skill optimization via closure parameter variations
Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.
2012-04-01
We present results of a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in the parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
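The core EPPES loop, drawing parameters from a proposal and feeding likelihood weights back into it, can be sketched in a few lines. This is a hypothetical simplified form, not the operational algorithm: the ensemble here is a plain Monte Carlo sample and the likelihood a Gaussian in a scalar forecast error.

```python
import numpy as np

rng = np.random.default_rng(0)

def eppes_step(mu, cov, forecast_error, n_ens=50):
    """One simplified EPPES-style update: draw a parameter ensemble from the
    current proposal N(mu, cov), score each member by a Gaussian likelihood
    of its scalar forecast error, and re-estimate the proposal from the
    likelihood-weighted sample moments."""
    theta = rng.multivariate_normal(mu, cov, size=n_ens)
    errors = np.array([forecast_error(t) for t in theta])
    w = np.exp(-0.5 * errors ** 2)        # Gaussian likelihood weights
    w /= w.sum()
    mu_new = w @ theta                    # weighted mean
    diff = theta - mu_new
    cov_new = (w[:, None] * diff).T @ diff  # weighted covariance
    return mu_new, cov_new
```

In the real system the weights come from verifying each ensemble member's forecast against observations, which is why the method adds essentially no computational cost on top of the existing ensemble prediction infrastructure.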
Bayesian estimation of parameters in a regional hydrological model
K. Engeland
2002-01-01
This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
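A random-walk Metropolis sampler with the simple likelihood model described above (iid Gaussian simulation errors, no AR(1) term) can be sketched on a toy stand-in for the hydrological model; the linear model and noise level are assumptions for illustration, not the Ecomag model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "streamflow": a toy linear model q = a * p with a_true = 2.0
# stands in for the hydrological model (an assumption for illustration).
p = np.linspace(1.0, 10.0, 50)
q_obs = 2.0 * p + rng.normal(0.0, 0.5, p.size)

def log_likelihood(a, sigma=0.5):
    """Simple likelihood model: iid Gaussian simulation errors, i.e. the
    auto-regressive AR(1) term of the full model is excluded."""
    resid = q_obs - a * p
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(n_iter=5000, step=0.02):
    """Random-walk Metropolis with a flat prior on the single parameter a."""
    a = 1.0
    ll = log_likelihood(a)
    chain = []
    for _ in range(n_iter):
        a_prop = a + rng.normal(0.0, step)
        ll_prop = log_likelihood(a_prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # flat prior: likelihood ratio
            a, ll = a_prop, ll_prop
        chain.append(a)
    return np.array(chain)

chain = metropolis()
```

The post-burn-in chain approximates the posterior of the parameter; adding the AR(1) error term would only change `log_likelihood`, which is the appeal of the MCMC formulation.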
Bulitta, Jurgen B; Landersdorfer, Cornelia B; Forrest, Alan; Brown, Silvia V; Neely, Michael N; Tsuji, Brian T; Louie, Arnold
2011-12-01
Efficacious therapy is of utmost importance to save lives and prevent bacterial resistance in critically ill patients. This review summarizes pharmacokinetic (PK) and pharmacodynamic (PD) modeling methods to optimize clinical care of critically ill patients in empiric and individualized therapy. While these methods apply to all therapeutic areas, we focus on antibiotics to highlight important applications, as emergence of resistance is a significant problem. Nonparametric and parametric population PK modeling, multiple-model dosage design, Monte Carlo simulations, and Bayesian adaptive feedback control are the methods of choice to optimize therapy. Population PK can estimate between-patient variability and account for potentially increased clearances and large volumes of distribution in critically ill patients. Once patient-specific PK data become available, target concentration intervention and adaptive feedback control algorithms can most precisely achieve target goals such as clinical cure of an infection or resistance prevention in stable and unstable patients with rapidly changing PK parameters. Many bacterial resistance mechanisms mean that PK/PD targets for resistance prevention are usually several-fold higher than targets for near-maximal killing. In vitro infection models such as the hollow fiber and one-compartment infection models allow one to study antibiotic-induced bacterial killing and emergence of resistance of mono- and combination therapies over clinically relevant treatment durations. Mechanism-based (and empirical) PK/PD modeling can incorporate effects of the immune system and allow one to design innovative dosage regimens and prospective validation studies. Mechanism-based modeling holds great promise to optimize mono- and combination therapy of anti-infectives and drugs from other therapeutic areas for critically ill patients.
Toxicity of food-relevant nanoparticles in intestinal epithelial models
McCracken, Christie
Nanoparticles are increasingly being incorporated into common consumer products, including in foods and food packaging, for their unique properties at the nanoscale. Food-grade silica and titania are used as anti-caking and whitening agents, respectively, and these particle size distributions are composed of approximately one-third nanoparticles. Zinc oxide and silver nanoparticles can be used for their antimicrobial properties. However, little is known about the interactions of nanoparticles in the body upon ingestion. This study was performed to investigate the role of nanoparticle characteristics including surface chemistry, dissolution, and material type on toxicity to the intestinal epithelium. Only mild acute toxicity of zinc oxide nanoparticles was observed after 24-hour treatment of intestinal epithelial C2BBe1 cells based on the results of toxicity assays measuring necrosis, apoptosis, membrane damage, and mitochondrial activity. Silica and titanium dioxide nanoparticles were not observed to be toxic although all nanoparticles were internalized by cells. In vitro digestion of nanoparticles in solutions representing the stomach and intestines prior to treatment of cells did not alter nanoparticle toxicity. Long-term repeated treatment of cells weekly for 24 hours with nanoparticles did not change nanoparticle cytotoxicity or the growth rate of the treated cell populations. Thus, silica, titanium dioxide, and zinc oxide nanoparticles were found to induce little toxicity in intestinal epithelial cells. Fluorescent silica nanoparticles were synthesized as a model for silica used in foods that could be tracked in vitro and in vivo. To maintain an exterior of pure silica, a silica shell was hydrolyzed around a core particle of quantum dots or a fluorescent dye electrostatically associated with a commercial silica particle. The quantum dots used were optimized from a previously reported microwave quantum dot synthesis to a quantum yield of 40%. Characterization
Ultrafast Structural Dynamics in Combustion Relevant Model Systems
Weber, Peter M. [Brown University
2014-03-31
The research project explored the time resolved structural dynamics of important model reaction systems using an array of novel methods that were developed specifically for this purpose. They include time resolved electron diffraction, time resolved relativistic electron diffraction, and time resolved Rydberg fingerprint spectroscopy. Toward the end of the funding period, we also developed time-resolved x-ray diffraction, which uses ultrafast x-ray pulses at LCLS. Those experiments are just now blossoming, as the funding period expired. In the following, time resolved Rydberg fingerprint spectroscopy is discussed in some detail, as it has been a very productive method. The binding energy of an electron in a Rydberg state, that is, the energy difference between the Rydberg level and the ground state of the molecular ion, has been found to be a uniquely powerful tool to characterize the molecular structure. To rationalize the structure sensitivity we invoke a picture from electron diffraction: when it passes the molecular ion core, the Rydberg electron experiences a phase shift compared to an electron in a hydrogen atom. This phase shift requires an adjustment of the binding energy of the electron, which is measurable. As in electron diffraction, the phase shift depends on the molecular, geometrical structure, so that a measurement of the electron binding energy can be interpreted as a measurement of the molecule’s structure. Building on this insight, we have developed a structurally sensitive spectroscopy: the molecule is first elevated to the Rydberg state, and the binding energy is then measured using photoelectron spectroscopy. The molecule’s structure is read out as the binding energy spectrum. Since the photoionization can be done with ultrafast laser pulses, the technique is inherently capable of a time resolution in the femtosecond regime. For the purpose of identifying the structures of molecules during chemical reactions, and for the analysis of
Some tests for parameter constancy in cointegrated VAR-models
Hansen, Henrik; Johansen, Søren
1999-01-01
Some methods for the evaluation of parameter constancy in vector autoregressive (VAR) models are discussed. Two different ways of re-estimating the VAR model are proposed; one in which all parameters are estimated recursively based upon the likelihood function for the first observations, and anot...... be applied to test the constancy of the long-run parameters in the cointegrated VAR-model. All results are illustrated using a model for the term structure of interest rates on US Treasury securities. ...
Baker, Syed Murtuza; Poskar, C Hart; Junker, Björn H
2011-10-11
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison.
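The orthogonal-based local identifiability step can be illustrated independently of the UKF: rank parameters by the norm of their sensitivity column after projecting out the columns already selected, and stop when the best remaining column is (near-)linearly dependent on the selected ones. This is a generic sketch of that idea, not the authors' code.

```python
import numpy as np

def orthogonal_identifiability(S, tol=1e-6):
    """Orthogonal-projection identifiability ranking from a sensitivity
    matrix S (rows: measurements, columns: parameters): repeatedly select
    the parameter whose sensitivity column has the largest norm after
    projecting out the span of the already-selected columns; stop when the
    best residual norm falls below tol (near-linear dependence)."""
    S = np.asarray(S, dtype=float)
    remaining = list(range(S.shape[1]))
    selected, basis = [], []

    def residual(j):
        r = S[:, j].copy()
        for b in basis:           # project out already-selected directions
            r -= (r @ b) * b
        return r

    while remaining:
        norms = {j: np.linalg.norm(residual(j)) for j in remaining}
        j_best = max(norms, key=norms.get)
        if norms[j_best] < tol:   # remaining columns are not identifiable
            break
        basis.append(residual(j_best) / norms[j_best])
        selected.append(j_best)
        remaining.remove(j_best)
    return selected               # identifiable parameters, most informative first
```

A parameter whose sensitivity column is a linear combination of others cannot be estimated separately from them, which is exactly what the projection detects.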
Weigand, M.; Kemna, A.
2016-06-01
Spectral induced polarization (SIP) data are commonly analysed using phenomenological models. Among these models the Cole-Cole (CC) model is the most popular choice to describe the strength and frequency dependence of distinct polarization peaks in the data. More flexibility regarding the shape of the spectrum is provided by decomposition schemes. Here the spectral response is decomposed into individual responses of a chosen elementary relaxation model, mathematically acting as the kernel of the involved integral, over a broad range of relaxation times. A frequently used kernel function is the Debye model, but the CC model with some other a priori specified frequency dispersion (e.g. the Warburg model) has also been proposed as a kernel in the decomposition. The different decomposition approaches in use, also including conductivity and resistivity formulations, raise the question of the degree to which the integral spectral parameters typically derived from the obtained relaxation time distribution are biased by the approach itself. Based on synthetic SIP data sampled from an ideal CC response, we here investigate how the two most important integral output parameters deviate from the corresponding CC input parameters. We find that the total chargeability may be underestimated by up to 80 per cent and the mean relaxation time may be off by up to three orders of magnitude relative to the original values, depending on the frequency dispersion of the analysed spectrum and the proximity of its peak to the frequency range limits considered in the decomposition. We conclude that a quantitative comparison of SIP parameters across different studies, or the adoption of parameter relationships from other studies, for example when transferring laboratory results to the field, is only possible on the basis of a consistent spectral analysis procedure. This is particularly important when comparing effective CC parameters with spectral parameters derived from decomposition results.
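The two integral output parameters discussed above are simple functionals of the relaxation time distribution. A sketch, under the common conventions that the total chargeability is the sum of the partial chargeabilities and the mean relaxation time is their log-weighted (geometric) mean:

```python
import numpy as np

def integral_parameters(tau, m):
    """Integral spectral parameters from a relaxation-time distribution:
    tau are the relaxation times of the decomposition kernels and m the
    corresponding partial chargeabilities. Returns the total chargeability
    (sum of m) and the mean relaxation time (chargeability-weighted
    geometric mean of tau)."""
    tau = np.asarray(tau, dtype=float)
    m = np.asarray(m, dtype=float)
    m_tot = m.sum()
    tau_mean = np.exp(np.sum(m * np.log(tau)) / m_tot)
    return m_tot, tau_mean
```

The paper's point is that these outputs inherit bias from the decomposition itself: mass in the distribution pushed outside the analysed frequency window is simply lost from `m_tot`, and an asymmetric truncation shifts `tau_mean`.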
Identification of hydrological model parameter variation using ensemble Kalman filter
Deng, Chao; Liu, Pan; Guo, Shenglian; Li, Zejun; Wang, Dingbao
2016-12-01
Hydrological model parameters play an important role in a model's predictive ability. In a stationary context, parameters of hydrological models are treated as constants; however, model parameters may vary with time under climate change and anthropogenic activities. The technique of the ensemble Kalman filter (EnKF) is proposed to identify the temporal variation of parameters for a two-parameter monthly water balance model (TWBM) by assimilating runoff observations. Through a synthetic experiment, the proposed method is evaluated with time-invariant (i.e., constant) parameters and different types of parameter variations, including trend, abrupt change and periodicity. Various levels of observation uncertainty are designed to examine the performance of the EnKF. The results show that the EnKF can successfully capture the temporal variations of the model parameters. The application to the Wudinghe basin shows that the water storage capacity (SC) of the TWBM model has an apparent increasing trend during the period from 1958 to 2000. The identified temporal variation of SC is explained by land use and land cover changes due to soil and water conservation measures. In contrast, the application to the Tongtianhe basin shows that the estimated SC has no significant variation during the simulation period of 1982-2013, corresponding to the relatively stationary catchment properties. The evapotranspiration parameter (C) shows temporal variations, although no obvious change pattern exists. The proposed method provides an effective tool for quantifying the temporal variations of model parameters, thereby improving the accuracy and reliability of model simulations and forecasts.
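The underlying idea, treating the parameter as an augmented state with random-walk dynamics and correcting it against observations each step, can be sketched on a toy scalar model. The linear observation model and noise levels below are assumptions for illustration, not the TWBM.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_parameter_track(x, y_obs, n_ens=100, q=0.05, r=0.1):
    """Track a time-varying parameter a_t of the toy observation model
    y_t = a_t * x_t: the parameter is the (augmented) state, forecast by a
    small random walk (std q), and corrected each step with the stochastic
    EnKF gain against the observation (obs noise std r)."""
    a_ens = rng.normal(1.0, 0.5, n_ens)                  # initial parameter ensemble
    track = []
    for t in range(len(y_obs)):
        a_ens = a_ens + rng.normal(0.0, q, n_ens)        # random-walk forecast
        y_ens = a_ens * x[t] + rng.normal(0.0, r, n_ens) # perturbed predictions
        gain = np.cov(a_ens, y_ens)[0, 1] / np.var(y_ens, ddof=1)
        a_ens = a_ens + gain * (y_obs[t] - y_ens)        # EnKF update
        track.append(a_ens.mean())
    return np.array(track)

# Synthetic truth: the parameter drifts linearly from 1.0 to 2.0 (a trend,
# one of the variation types tested in the paper's synthetic experiment).
n = 200
x = rng.uniform(1.0, 2.0, n)
a_true = np.linspace(1.0, 2.0, n)
y_obs = a_true * x + rng.normal(0.0, 0.1, n)
track = enkf_parameter_track(x, y_obs)
```

The random-walk variance `q**2` sets how quickly the filter is allowed to follow parameter change: too small and it lags a trend, too large and the estimate becomes noisy.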
Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model
Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami
2017-06-01
A regression model represents the relationship between independent and dependent variables. When the dependent variable is categorical, the logistic regression model is used to calculate the odds; when the categories of the dependent variable are ordered, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine the value of a population based on a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units used are 144 villages in Semarang City. The results of the research give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.
Universally sloppy parameter sensitivities in systems biology models.
Ryan N Gutenkunst
2007-10-01
Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
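A sloppy sensitivity spectrum is easy to reproduce on the textbook two-exponential model: when the two decay rates are close, the sensitivity (Jacobian) columns are nearly parallel and the eigenvalues of the Gauss-Newton Hessian J^T J spread over several decades. A sketch:

```python
import numpy as np

def hessian_spectrum(theta, t):
    """Eigenvalues of the Gauss-Newton Hessian J^T J for the classic sloppy
    test model y(t) = exp(-theta1*t) + exp(-theta2*t); J holds the
    sensitivities dy/dtheta_i = -t * exp(-theta_i * t) along the time grid."""
    t = np.asarray(t, dtype=float)
    J = np.column_stack([-t * np.exp(-theta[0] * t),
                         -t * np.exp(-theta[1] * t)])
    return np.linalg.eigvalsh(J.T @ J)[::-1]   # descending order

# Nearby decay rates give nearly parallel sensitivity columns, hence
# eigenvalues spread over several decades.
eig = hessian_spectrum([1.0, 1.1], np.linspace(0.0, 5.0, 100))
decades = float(np.log10(eig[0] / eig[-1]))
```

The stiff direction (large eigenvalue) is roughly the sum of the two rates, which the data constrain tightly; the sloppy direction is their difference, which stays poorly constrained even with ideal time-series data, exactly the pattern the paper reports across model collections.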
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna
2011-01-01
A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157
Parameter estimation and investigation of a bolted joint model
Shiryayev, O. V.; Page, S. M.; Pettit, C. L.; Slater, J. C.
2007-11-01
Mechanical joints are a primary source of variability in the dynamics of built-up structures. Physical phenomena in the joint are quite complex and therefore impractical to model at the micro-scale. This motivates the development of lumped parameter joint models with discrete interfaces so that they can be easily implemented in finite element codes. Among the most important considerations in choosing a model for dynamically excited systems is its ability to model energy dissipation. This translates into the need for accurate and reliable methods to measure model parameters and estimate their inherent variability from experiments. The adjusted Iwan model was identified as a promising candidate for representing joint dynamics. Recent research focused on this model has exclusively employed impulse excitation in conjunction with neural networks to identify the model parameters. This paper presents an investigation of an alternative parameter estimation approach for the adjusted Iwan model, which employs data from oscillatory forcing. This approach is shown to produce parameter estimates with precision similar to the impulse excitation method for a range of model parameters.
Modeling and Parameter Estimation of a Small Wind Generation System
Carlos A. Ramírez Gómez
2013-11-01
The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct-current load. In order to estimate the parameters, wind speed data were registered in a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with PSIM software. From that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, and the estimated model offers higher flexibility than the model programmed in PSIM software.
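With the standard turbine power relation P = ½ρACpv³, estimating a coefficient from registered wind speed and power data reduces to linear least squares. A sketch on synthetic data; the turbine constants, noise level, and single-coefficient setup are assumptions for illustration, not values or the procedure from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

RHO = 1.225   # air density, kg/m^3
AREA = 3.0    # rotor swept area, m^2 (hypothetical small turbine)

def estimate_cp(v, p_meas):
    """Least-squares estimate of the power coefficient Cp in the standard
    wind-turbine power model P = 0.5 * rho * A * Cp * v**3. The model is
    linear in Cp, so the estimate is a one-line normal equation."""
    x = 0.5 * RHO * AREA * np.asarray(v, dtype=float) ** 3
    y = np.asarray(p_meas, dtype=float)
    return float(x @ y / (x @ x))

# Synthetic registered data: true Cp = 0.40 plus measurement noise.
v = rng.uniform(4.0, 12.0, 200)                      # wind speeds, m/s
p = 0.5 * RHO * AREA * 0.40 * v ** 3 + rng.normal(0.0, 20.0, v.size)
```

The same normal-equation pattern extends to the multi-parameter generator and rectifier models, at the cost of solving a small linear system instead of a scalar division.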
Parameter estimation of hidden periodic model in random fields
何书元
1999-01-01
The two-dimensional hidden periodic model is an important model in random fields, used in two-dimensional signal processing, prediction and spectral analysis. A method of estimating the parameters of the model is designed, and the strong consistency of the estimators is proved.
Identification of parameters of discrete-continuous models
Cekus, Dawid, E-mail: cekus@imipkm.pcz.pl; Warys, Pawel, E-mail: warys@imipkm.pcz.pl [Institute of Mechanics and Machine Design Foundations, Czestochowa University of Technology, Dabrowskiego 73, 42-201 Czestochowa (Poland)
2015-03-10
In the paper, the parameters of a discrete-continuous model have been identified on the basis of experimental investigations and the formulation of an optimization problem. The discrete-continuous model represents a cantilever stepped Timoshenko beam. The mathematical model has been formulated and solved according to the Lagrange multiplier formalism. Optimization has been based on the genetic algorithm. The presented procedure makes it possible to identify any parameters of discrete-continuous systems.
Estimating parameters for generalized mass action models with connectivity information
Voit Eberhard O
2009-05-01
Background Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems. Results In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters. Conclusion The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out.
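The combination of steady-state information with transient fitting can be illustrated on a one-variable linear pathway: the steady-state flux balance eliminates one rate constant exactly, and the remaining one is fitted to the time series. A toy sketch; the model, numbers, and grid-search fit are illustrative assumptions, not the paper's method or case studies:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear pathway dX/dt = k1 - k2*X with true k1 = 2.0, k2 = 0.5,
# so the known steady state is Xss = k1/k2 = 4.0 (the "steady-state
# information" playing the role of a flux connectivity relationship).
T = np.linspace(0.0, 10.0, 50)
XSS = 4.0

def x_of_t(k1, k2):
    """Analytical transient solution from x(0) = 0."""
    return (k1 / k2) * (1.0 - np.exp(-k2 * T))

x_data = x_of_t(2.0, 0.5) + rng.normal(0.0, 0.05, T.size)  # transient measurements

def fit_constrained():
    """Impose the steady-state balance k1 - k2*XSS = 0 exactly by
    eliminating k1 = k2*XSS, then fit the remaining parameter k2 to the
    transient data by a simple grid search over the sum of squared errors."""
    k2_grid = np.linspace(0.05, 2.0, 4000)
    sse = [np.sum((x_data - XSS * (1.0 - np.exp(-k2 * T))) ** 2)
           for k2 in k2_grid]
    k2_hat = float(k2_grid[int(np.argmin(sse))])
    return k2_hat * XSS, k2_hat

k1_hat, k2_hat = fit_constrained()
```

The constrained fit cannot misrepresent the flux balance no matter how noisy the transient data are, which is the qualitative point of the paper's approach.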
A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model
Mathe, Nathalie; Chen, James
1994-01-01
Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant to an individual user's context can be automatically supplied to that user. However, most of this knowledge on contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.
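A relevance network of this kind can be caricatured in a few lines. The class below is a hypothetical sketch (the query/reference names and the additive scoring rule are invented here), and it ignores the generalization across similar queries and profiles that the full model provides:

```python
from collections import defaultdict

class RelevanceNetwork:
    """Toy sketch: accumulate per-query relevance feedback and rank references."""

    def __init__(self):
        # (query, reference) -> accumulated relevance score
        self.weights = defaultdict(float)

    def feedback(self, query, reference, relevant=True):
        """Record one piece of user feedback; no prior knowledge or training needed."""
        self.weights[(query, reference)] += 1.0 if relevant else -1.0

    def rank(self, query, references):
        """Order references by learned relevance for this query."""
        return sorted(references, key=lambda r: self.weights[(query, r)], reverse=True)

rn = RelevanceNetwork()
rn.feedback("fusion diagnostics", "paper_a")
rn.feedback("fusion diagnostics", "paper_a")
rn.feedback("fusion diagnostics", "paper_b", relevant=False)
ranked = rn.rank("fusion diagnostics", ["paper_b", "paper_a"])
```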
Present status on atomic and molecular data relevant to fusion plasma diagnostics and modeling
Tawara, H. [ed.]
1997-01-01
This issue is a collection of papers presenting the status of atomic and molecular data relevant to fusion plasma diagnostics and modeling. The 10 presented papers are indexed individually. (J.P.N.)
Particle model of full-size ITER-relevant negative ion source.
Taccogna, F; Minelli, P; Ippolito, N
2016-02-01
This work represents the first attempt to model the full-size ITER-relevant negative ion source including the expansion, extraction, and part of the acceleration regions keeping the mesh size fine enough to resolve every single aperture. The model consists of a 2.5D particle-in-cell Monte Carlo collision representation of the plane perpendicular to the filter field lines. Magnetic filter and electron deflection field have been included and a negative ion current density of j(H(-)) = 660 A/m(2) from the plasma grid (PG) is used as parameter for the neutral conversion. The driver is not yet included and a fixed ambipolar flux is emitted from the driver exit plane. Results show the strong asymmetry along the PG driven by the electron Hall (E × B and diamagnetic) drift perpendicular to the filter field. Such asymmetry creates an important dis-homogeneity in the electron current extracted from the different apertures. A steady state is not yet reached after 15 μs.
Gefen, Amit
2010-02-01
The extrapolation of biological damage from a biomechanical model requires that a closed-form mathematical damage threshold function (DTF) be included in the model. A DTF typically includes a generic load variable, being the critical load (e.g., pressure, strain, temperature) causing irreversible tissue or cell damage, and a generic time variable, which represents the exposure to the load (e.g., duration, strain rate). Despite the central role that DTFs play in biomechanical studies, there is no coherent literature on how to formulate a DTF, excluding the field of heat-induced damage studies. This technical note describes six mathematical function types (Richards, Boltzmann, Morgan-Mercer-Flodin, Gompertz, Weibull, Bertalanffy) that are suitable for formulating a wide range of DTFs. These functions were adapted from the theory of restricted growth, and were fitted herein to describe biomechanical damage phenomena. Relevant properties of each adapted function type were extracted to allow efficient fitting of its parameters to empirical biomechanical data, and some practical examples are provided.
Towards predictive food process models: A protocol for parameter estimation.
Vilas, Carlos; Arias-Méndez, Ana; Garcia, Miriam R; Alonso, Antonio A; Balsa-Canto, E
2016-05-31
Mathematical models, in particular physics-based models, are essential tools for food product and process design, optimization and control. The success of mathematical models relies on their predictive capabilities. However, describing physical, chemical and biological changes in food processing requires the values of some, typically unknown, parameters. Therefore, parameter estimation from experimental data is critical to achieving the desired model predictive properties. This work takes a new look at the parameter estimation (or identification) problem in food process modeling. First, we examine common pitfalls such as lack of identifiability and multimodality. Second, we present the theoretical background of a parameter identification protocol intended to deal with those challenges. Finally, we illustrate the performance of the proposed protocol with an example related to the thermal processing of packaged foods.
Estimation of the input parameters in the Feller neuronal model
Ditlevsen, Susanne; Lansky, Petr
2006-06-01
The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FTP) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FTP is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.
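For intuition, a Feller-type diffusion can be simulated and an input parameter recovered from a stationary moment. The parameter values, the Euler-Maruyama discretization and the clamping at zero below are illustrative choices, not the first-passage-time estimators derived in the paper:

```python
import numpy as np

# Euler-Maruyama simulation of a Feller-type diffusion (illustrative parameters):
#   dX = (mu - X/tau) dt + sigma * sqrt(X) dW
rng = np.random.default_rng(42)
mu, tau, sigma, dt, n = 1.0, 1.0, 0.3, 0.01, 20000
x = np.empty(n)
x[0] = mu * tau
for i in range(1, n):
    drift = (mu - x[i - 1] / tau) * dt
    diff = sigma * np.sqrt(max(x[i - 1], 0.0)) * rng.normal() * np.sqrt(dt)
    x[i] = max(x[i - 1] + drift + diff, 0.0)   # clamp: sqrt needs X >= 0

# Moment-based estimate of the input parameter mu from the stationary mean
# (E[X] = mu * tau); the first half of the path is discarded as burn-in.
mu_hat = x[n // 2:].mean() / tau
```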
Essa, Mohamed; Sayed, Tarek
2015-11-01
Several studies have investigated the relationship between field-measured conflicts and the conflicts obtained from micro-simulation models using the Surrogate Safety Assessment Model (SSAM). Results from recent studies have shown that while reasonable correlation between simulated and real traffic conflicts can be obtained, especially after proper calibration, more work is still needed to confirm that simulated conflicts provide safety measures beyond what can be expected from exposure. The results have also emphasized that using micro-simulation models to evaluate safety without proper model calibration should be avoided. The calibration process adjusts relevant simulation parameters to maximize the correlation between field-measured and simulated conflicts. The main objective of this study is to investigate the transferability of calibrated parameters of the traffic simulation model (VISSIM) for safety analysis between different sites. The main purpose is to examine whether the calibrated parameters, when applied to other sites, give reasonable results in terms of the correlation between the field-measured and the simulated conflicts. Eighty-three hours of video data from two signalized intersections in Surrey, BC were used in this study. Automated video-based computer vision techniques were used to extract vehicle trajectories and identify field-measured rear-end conflicts. Calibrated VISSIM parameters obtained from the first intersection, which maximized the correlation between simulated and field-observed conflicts, were used to estimate traffic conflicts at the second intersection and to compare the results to parameters optimized specifically for the second intersection. The results show that the VISSIM parameters are generally transferable between the two locations, as the transferred parameters provided better correlation between simulated and field-measured conflicts than the default VISSIM parameters. Of the six VISSIM parameters identified as
Giacomozzi, Claudia; Stebbins, Julie A
2017-03-01
Plantar pressure analysis is widely used in the assessment of foot function. In order to assess regional loading, a mask is applied to the footprint to sub-divide it into regions of interest (ROIs). The most common masking method is based on geometric features of the footprint (GM). Footprint masking based on anatomical landmarks of the foot has been implemented more recently, and involves the integration of a 3D motion capture system, plantar pressure measurement device, and a multi-segment foot model. However, thorough validation of anatomical masking (AM) using pathological footprints has not yet been presented. In the present study, an AM method based on the Oxford Foot Model (OFM) was compared to an equivalent GM. Pressure footprints from 20 young healthy subjects (HG) and 20 patients with clubfoot (CF) were anatomically divided into 5 ROIs using a subset of the OFM markers. The same foot regions were also identified using a standard GM method. Comparisons of intra-subject coefficient of variation (CV) showed that the OFM-based AM was at least as reliable as the GM for all investigated pressure parameters in all foot regions. Clinical relevance of AM was investigated by comparing footprints from HG and CF groups. Contact time, maximum force, force-time integral and contact area proved to be sensitive parameters that were able to distinguish HG and CF groups, using both AM and GM methods. However, the AM method revealed statistically significant differences between groups in 75% of measured variables, compared to 62% using a standard GM method, indicating that the AM method is more sensitive for revealing differences between groups.
An automatic and effective parameter optimization method for model tuning
T. Zhang
2015-05-01
Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one that determines parameter sensitivity and another that chooses the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
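The "three-step" logic (sensitivity screening, coarse initial-value search, then downhill simplex refinement) can be mimicked on a stand-in skill metric. The quadratic metric and all numbers below are hypothetical, not the paper's evaluation metrics:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in "model skill" metric (lower is better); the two parameters mimic
# tunable cloud/convection coefficients and are purely illustrative.
def skill(params):
    a, b = params
    return (a - 1.5) ** 2 + 0.1 * (b - 4.0) ** 2

# Step 1 (sensitivity): perturb each parameter and compare the metric response.
base = np.array([1.0, 3.0])
sens = [abs(skill(base + d) - skill(base))
        for d in (np.array([0.1, 0.0]), np.array([0.0, 0.1]))]

# Step 2 (initial value): coarse scan to choose a good starting point.
candidates = [np.array([a, b]) for a in (0.5, 1.5, 2.5) for b in (2.0, 4.0, 6.0)]
x0 = min(candidates, key=skill)

# Step 3: downhill simplex (Nelder-Mead) refinement from the chosen start.
res = minimize(skill, x0, method="Nelder-Mead")
```

The coarse scan keeps the simplex away from poor local starts, which is the point of the extra steps before the simplex search.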
Optimal parameters for the FFA-Beddoes dynamic stall model
Bjoerck, A.; Mert, M. [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H.A. [Risoe National Lab., Roskilde (Denmark)
1999-03-01
Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. Models incorporated in aero-elastic programs are of semi-empirical nature. Resulting aerodynamic forces therefore depend on the values used for the semi-empirical parameters. In this paper a study of finding appropriate parameters to use with the Beddoes-Leishman model is discussed. Minimisation of the 'tracking error' between results from 2D wind tunnel tests and simulation with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)
Pinnapureddy, Ashish R; Stayner, Cherie; McEwan, John; Baddeley, Olivia; Forman, John; Eccles, Michael R
2015-09-02
Animals that accurately model human disease are invaluable in medical research, allowing a critical understanding of disease mechanisms, and the opportunity to evaluate the effect of therapeutic compounds in pre-clinical studies. Many types of animal models are used world-wide, with the most common being small laboratory animals, such as mice. However, rodents often do not faithfully replicate human disease, despite their predominant use in research. This discordancy is due in part to physiological differences, such as body size and longevity. In contrast, large animal models, including sheep, provide an alternative to mice for biomedical research due to their greater physiological parallels with humans. Completion of the full genome sequences of many species, and the advent of Next Generation Sequencing (NGS) technologies, means it is now feasible to screen large populations of domesticated animals for genetic variants that resemble human genetic diseases, and generate models that more accurately model rare human pathologies. In this review, we discuss the notion of using sheep as large animal models, and their advantages in modelling human genetic disease. We exemplify several existing naturally occurring ovine variants in genes that are orthologous to human disease genes, such as the Cln6 sheep model for Batten disease. These, and other sheep models, have contributed significantly to our understanding of the relevant human disease process, in addition to providing opportunities to trial new therapies in animals with similar body and organ size to humans. Therefore sheep are a significant species with respect to the modelling of rare genetic human disease, which we summarize in this review.
Do Lumped-Parameter Models Provide the Correct Geometrical Damping?
Andersen, Lars
This paper concerns the formulation of lumped-parameter models for rigid footings on homogeneous or stratified soil. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines and other models applied to fast evaluation of the structural response during excitation and the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling between horizontal sliding and rocking is discussed.
A New Approach for Parameter Optimization in Land Surface Model
LI Hongqi; GUO Weidong; SUN Guodong; ZHANG Yaocun; FU Congbin
2011-01-01
In this study, a new parameter optimization method was used to investigate the expansion of conditional nonlinear optimal perturbation (CNOP) in a land surface model (LSM) using long-term enhanced field observations at Tongyu station in Jilin Province, China, combined with a sophisticated LSM (common land model, CoLM). Tongyu station is a reference site of the international Coordinated Energy and Water Cycle Observations Project (CEOP) that has studied semiarid regions that have undergone desertification, salination, and degradation since the late 1960s. In this study, three key land-surface parameters, namely, soil color, proportion of sand or clay in soil, and leaf-area index, were chosen as the parameters to be optimized. Our study comprised three experiments: the first performed a single-parameter optimization, while the second and third experiments performed triple- and six-parameter optimizations, respectively. Notable improvements in simulating sensible heat flux (SH), latent heat flux (LH), soil temperature (TS), and moisture (MS) at shallow layers were achieved using the optimized parameters. The multiple-parameter optimization experiments performed better than the single-parameter experiment. All results demonstrate that the CNOP method can be used to optimize expanded parameters in an LSM. Moreover, clear mathematical meaning, simple design structure, and rapid computability give this method great potential for further application to parameter optimization in LSMs.
Investigations on caesium-free alternatives for H− formation at ion source relevant parameters
Kurutz, U.; Fantz, U. [Max-Planck-Institut für Plasmaphysik, Boltzmannstr. 2, 85748 Garching (Germany); AG Experimentelle Plasmaphysik, Institut für Physik, Universität Augsburg, 86135 Augsburg (Germany)
2015-04-08
Negative hydrogen ions are efficiently produced in ion sources by the application of caesium. The resulting lowering of the work function of a converter surface enables the direct conversion of impinging hydrogen atoms and positive ions into negative ions. However, due to the complex caesium chemistry and dynamics, a long-term behaviour is inherent to the application of caesium that affects the stability and reliability of negative ion sources. To overcome these drawbacks, caesium-free alternatives for efficient negative ion formation are investigated at the flexible laboratory setup HOMER (HOMogenous Electron cyclotron Resonance plasma). By the use of a meshed grid, the tandem principle is applied, allowing for investigations on material-induced negative ion formation under plasma parameters relevant for ion source operation. The effect of different sample materials on the ratio of the negative ion density to the electron density, n_H−/n_e, is compared to the effect of a stainless steel reference sample and investigated by means of laser photodetachment in a pressure range from 0.3 to 3 Pa. For the stainless steel sample no surface-induced effect on the negative ion density is present, and the measured negative ion densities result from pure volume formation and destruction processes. In a first step, the dependency of n_H−/n_e on the sample distance was investigated for a caesiated stainless steel sample. At a distance of 0.5 cm at 0.3 Pa the density ratio is 3 times enhanced compared to the reference sample, confirming the surface production of negative ions. In contrast, for the caesium-free material samples, tantalum and tungsten, the same dependency of n_H−/n_e on pressure and distance as for the stainless steel reference sample was obtained within the error margins: a density ratio of around 14.5% is measured at 4.5 cm sample distance and 0.3 Pa, linearly decreasing with
Parameter Estimation for a Computable General Equilibrium Model
Arndt, Channing; Robinson, Sherman; Tarp, Finn
. Second, it permits incorporation of prior information on parameter values. Third, it can be applied in the absence of copious data. Finally, it supplies measures of the capacity of the model to reproduce the historical record and the statistical significance of parameter estimates. The method is applied...
Estimating winter wheat phenological parameters: Implications for crop modeling
Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...
Bates, P. D.; Neal, J. C.; Fewtrell, T. J.
2012-12-01
In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single code/multiple physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound
Zee, van der F.A.
1997-01-01
This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy
Retrospective forecast of ETAS model with daily parameters estimate
Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang
2016-04-01
We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on the daily updating of free parameters during the background, the learning and the test phase of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecasted events, due to model parameters kept fixed during the test. Moreover, the absence in the learning catalog of an event of magnitude comparable to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model where the parameters remain fixed during the test time.
Parameter Estimates in Differential Equation Models for Population Growth
Winkel, Brian J.
2011-01-01
We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
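The original offers Mathematica code for a gradient search; as a hedged stand-in, an equivalent logistic-growth fit can be sketched in Python with nonlinear least squares (the data are synthetic and the parameter values invented):

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic growth: P(t) = K / (1 + (K/P0 - 1) * exp(-r*t)),
# with carrying capacity K, growth rate r, and initial population P0.
def logistic(t, K, r, p0):
    return K / (1.0 + (K / p0 - 1.0) * np.exp(-r * t))

t = np.arange(0.0, 20.0, 1.0)
true_params = (10.0, 0.8, 0.5)                     # K, r, P0 (invented)
rng = np.random.default_rng(1)
y = logistic(t, *true_params) + rng.normal(0.0, 0.05, t.size)

# Nonlinear least-squares fit from a rough initial guess.
popt, pcov = curve_fit(logistic, t, y, p0=(8.0, 0.5, 1.0))
K_hat, r_hat, p0_hat = popt
```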
Dynamic Modeling and Parameter Identification of Power Systems
Anonymous
2000-01-01
The generator, the excitation system, the steam turbine and speed governor, and the load are the so-called four key models of power systems. Mathematical modeling and parameter identification for the four key models are of great importance as the basis for designing, operating, and analyzing power systems.
Dynamic Load Model using PSO-Based Parameter Estimation
Taoka, Hisao; Matsuki, Junya; Tomoda, Michiya; Hayashi, Yasuhiro; Yamagishi, Yoshio; Kanao, Norikazu
This paper presents a new method for estimating the unknown parameters of a dynamic load model represented as a parallel composite of a constant impedance load and an induction motor behind a series constant reactance. An adequate dynamic load model is essential for evaluating power system stability, and this model can represent the behavior of an actual load when appropriate parameters are used. The difficulty, however, is that the model requires many parameters, and estimating so many unknowns is not straightforward. We propose an estimation method based on Particle Swarm Optimization (PSO), a non-linear optimization method, using the voltage, active power and reactive power data measured during voltage sags.
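A minimal global-best PSO of the kind described can be sketched as follows. The inertia and acceleration coefficients, and the quadratic stand-in for the mismatch between simulated and measured voltage-sag response, are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))      # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])  # personal bests
    gbest = pbest[pval.argmin()]                         # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        gbest = pbest[pval.argmin()]
    return gbest, float(pval.min())

# Stand-in for the mismatch between simulated and measured response.
target = np.array([0.6, 1.2])   # hypothetical "true" load parameters

def mismatch(p):
    return float(np.sum((p - target) ** 2))

best, best_val = pso(mismatch, bounds=[(0.0, 2.0), (0.0, 2.0)])
```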
Parameter Estimation for the Thurstone Case III Model.
Mackay, David B.; Chaiy, Seoil
1982-01-01
The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)
Youlong XIA; Zong-Liang YANG; Paul L. STOFFA; Mrinal K. SEN
2005-01-01
Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges as well as model complexities on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by employing Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted and 50 000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. Use of parameter ranges and model complexities have significant impacts on frequency distributions of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.
Comparing spatial and temporal transferability of hydrological model parameters
Patil, Sopan D.; Stieglitz, Marc
2015-06-01
Operational use of hydrological models requires the transfer of calibrated parameters either in time (for streamflow forecasting) or space (for prediction at ungauged catchments) or both. Although the effects of spatial and temporal parameter transfer on catchment streamflow predictions have been well studied individually, a direct comparison of these approaches is much less documented. Here, we compare three different schemes of parameter transfer, viz., temporal, spatial, and spatiotemporal, using a spatially lumped hydrological model called EXP-HYDRO at 294 catchments across the continental United States. Results show that the temporal parameter transfer scheme performs best, with lowest decline in prediction performance (median decline of 4.2%) as measured using the Kling-Gupta efficiency metric. More interestingly, negligible difference in prediction performance is observed between the spatial and spatiotemporal parameter transfer schemes (median decline of 12.4% and 13.9% respectively). We further demonstrate that the superiority of temporal parameter transfer scheme is preserved even when: (1) spatial distance between donor and receiver catchments is reduced, or (2) temporal lag between calibration and validation periods is increased. Nonetheless, increase in the temporal lag between calibration and validation periods reduces the overall performance gap between the three parameter transfer schemes. Results suggest that spatiotemporal transfer of hydrological model parameters has the potential to be a viable option for climate change related hydrological studies, as envisioned in the "trading space for time" framework. However, further research is still needed to explore the relationship between spatial and temporal aspects of catchment hydrological variability.
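The Kling-Gupta efficiency metric used to score the transfer schemes is easy to state in code. Below is the standard 2009 formulation; the toy series are invented:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]    # linear correlation
    alpha = sim.std() / obs.std()      # variability ratio
    beta = sim.mean() / obs.mean()     # bias ratio
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0])
perfect = kge(obs, obs)        # 1.0 for a perfect simulation
biased = kge(2.0 * obs, obs)   # penalized for inflated mean and variability
```

A decline in prediction performance under parameter transfer, as reported in the abstract, corresponds to a drop in this score between calibration and transfer runs.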
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
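The model-averaging step can be illustrated with information-criterion weights: each model's prediction is weighted by a posterior-style probability derived from its IC score. The IC values and predictions below are hypothetical, and this sketch omits the calibration-based updating the report describes:

```python
import numpy as np

def model_weights(ic_values):
    """Posterior-style model weights from information criteria (lower IC is better)."""
    ic = np.asarray(ic_values, float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def averaged_prediction(predictions, ic_values):
    """Weighted average of per-model predictions."""
    w = model_weights(ic_values)
    return float(np.dot(w, np.asarray(predictions, float)))

ic = [100.0, 102.0, 130.0]   # hypothetical IC scores for three variogram models
preds = [1.0, 1.4, 5.0]      # hypothetical per-model predictions
print(model_weights(ic))     # the third model gets negligible weight
print(averaged_prediction(preds, ic))
```

Models with negligibly small weight can be eliminated, as in the report, while the rest contribute to the averaged prediction in proportion to their support.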
Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.
Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of the ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The results from the OLS method show the presence of statistically significant bias; by accounting for irrigation pumping uncertainties during the calibration procedure, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration process.
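The core of the weighted least-squares step that an IUWLS-style scheme iterates can be sketched as follows. The design matrix, observations, and uncertainty levels are invented, and the actual IUWLS weight update tied to pumping uncertainty is not reproduced here:

```python
import numpy as np

def weighted_least_squares(X, y, sigma):
    """Weighted LS: down-weight observations with larger input uncertainty sigma.

    Solves min sum_i ((y_i - X_i @ b) / sigma_i)^2, the building block that
    a scheme like IUWLS would iterate with sigma tied to pumping uncertainty.
    """
    w = 1.0 / np.asarray(sigma, float)
    Xw = X * w[:, None]     # scale each row by its weight
    yw = y * w
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.1, 1.1, 1.9, 3.2])
sigma = np.array([0.1, 0.1, 0.1, 1.0])   # last observation is least trusted
print(weighted_least_squares(X, y, sigma))
```

The fit is pulled toward the well-constrained observations, which is exactly how uncertain pumping records are prevented from biasing the parameter estimates.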
Parameter estimation in stochastic rainfall-runoff models
Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur
2006-01-01
the parameters, including the noise terms. The parameter estimation method is a maximum likelihood (ML) method in which the likelihood function is evaluated using a Kalman filter technique. The ML method estimates the parameters in a prediction error setting, i.e., the sum of squared prediction errors is minimized. For comparison, the parameters are also estimated by an output error method, where the sum of squared simulation errors is minimized. The former methodology is optimal for short-term prediction, whereas the latter is optimal for simulation. Hence, depending on the purpose, it is possible to select whether the parameter values are optimal for simulation or prediction. The data originate from Iceland, and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series…
Transformations among CE–CVM model parameters for multicomponent systems
B Nageswara Sarma; Shrikant Lele
2005-06-01
In the development of thermodynamic databases for multicomponent systems using the cluster expansion-cluster variation methods, a consistent procedure is needed for expressing the model parameters (CECs) of a higher order system in terms of those of its lower order subsystems together with an independent set of parameters that exclusively represent interactions of the higher order system. Such a procedure is presented in detail in this communication. Furthermore, the details of the transformations required to express the model parameters in one basis in terms of those defined in another basis for the same system are also presented.
SPOTting Model Parameters Using a Ready-Made Python Package.
Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz
2015-01-01
The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterizing the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, in which we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having at hand one package that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
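SPOTPY's own API is not reproduced here; the sketch below only illustrates the sampler/objective-function separation that such a package automates, using a plain random search on the Rosenbrock test function mentioned in the abstract:

```python
import numpy as np

def rosenbrock(x):
    """Classic 2-D test function; global minimum 0 at (1, 1)."""
    return (1 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

def random_search(objective, bounds, n_samples, seed=42):
    """Minimal stand-in for the sampler/objective split a calibration tool provides."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    samples = rng.uniform(lo, hi, size=(n_samples, lo.size))
    scores = np.array([objective(s) for s in samples])
    best = scores.argmin()
    return samples[best], scores[best]

x_best, f_best = random_search(rosenbrock, [(-2, 2), (-2, 2)], 20_000)
print(x_best, f_best)
```

A library such as SPOTPY replaces the naive sampler here with algorithms like SCE-UA or DREAM while keeping the same objective-function interface.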
Numerical modeling of piezoelectric transducers using physical parameters.
Cappon, Hans; Keesman, Karel J
2012-05-01
Design of ultrasonic equipment is frequently facilitated with numerical models. These numerical models, however, need a calibration step, because usually not all characteristics of the materials used are known. Characterization of material properties combined with numerical simulations and experimental data can be used to acquire valid estimates of the material parameters. In our design application, a finite element (FE) model of an ultrasonic particle separator, driven by an ultrasonic transducer in thickness mode, is required. A limited set of material parameters for the piezoelectric transducer were obtained from the manufacturer, thus preserving prior physical knowledge to a large extent. The remaining unknown parameters were estimated from impedance analysis with a simple experimental setup combined with a numerical optimization routine using 2-D and 3-D FE models. Thus, a full set of physically interpretable material parameters was obtained for our specific purpose. The approach provides adequate accuracy of the estimates of the material parameters, near 1%. These parameter estimates will subsequently be applied in future design simulations, without the need to go through an entire series of characterization experiments. Finally, a sensitivity study showed that small variations of 1% in the main parameters caused changes near 1% in the eigenfrequency, but changes up to 7% in the admittance peak, thus influencing the efficiency of the system. Temperature will already cause these small variations in response; thus, a frequency control unit is required when actually manufacturing an efficient ultrasonic separation system.
Parameter estimation and model selection in computational biology.
Gabriele Lillacci
2010-03-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model predictions. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection for biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Secondly, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternative models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
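A standard way to estimate parameters with an extended Kalman filter is to augment the state vector with the unknown parameters, as sketched below for a toy first-order decay model. All values are synthetic, and the paper's heat-shock and gene-regulation models are far richer than this illustration:

```python
import numpy as np

# Joint EKF sketch: estimate the state x and the unknown rate k of
# x_dot = -k*x from noisy measurements, using the augmented state z = [x, k].
dt, k_true = 0.1, 0.8
rng = np.random.default_rng(1)

t = np.arange(0.0, 10.0, dt)
x_true = 5.0 * np.exp(-k_true * t)
y = x_true + rng.normal(0.0, 0.05, size=t.size)   # noisy observations

z = np.array([y[0], 0.5])        # initial guess [x, k]
P = np.diag([0.1, 0.5])          # state covariance
Q = np.diag([1e-6, 1e-6])        # small process noise keeps the filter adaptive
R = 0.05 ** 2                    # measurement noise variance
H = np.array([[1.0, 0.0]])       # only x is observed

for yk in y[1:]:
    x, k = z
    # Predict: one Euler step of the ODE, with the parameter modeled as constant
    z = np.array([x - k * x * dt, k])
    F = np.array([[1.0 - k * dt, -x * dt],
                  [0.0,          1.0]])
    P = F @ P @ F.T + Q
    # Update with the scalar measurement
    S = (H @ P @ H.T)[0, 0] + R
    K = P @ H.T / S                      # Kalman gain, shape (2, 1)
    z = z + K[:, 0] * (yk - z[0])
    P = (np.eye(2) - K @ H) @ P

print(z)   # estimated [x, k]; k approaches k_true (Euler discretization adds a small bias)
```

The cross-covariance between x and k is what lets innovations in the observed signal correct the parameter estimate.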
An Effective Parameter Screening Strategy for High Dimensional Watershed Models
Khare, Y. P.; Martinez, C. J.; Munoz-Carpena, R.
2014-12-01
Watershed simulation models can assess the impacts of natural and anthropogenic disturbances on natural systems. These models have become important tools for tackling a range of water resources problems through their implementation in the formulation and evaluation of Best Management Practices, Total Maximum Daily Loads, and Basin Management Action Plans. For accurate application, watershed models need to be thoroughly evaluated through global uncertainty and sensitivity analyses (UA/SA). However, due to the high dimensionality of these models such evaluation becomes extremely time- and resource-consuming. Parameter screening, the qualitative separation of important parameters, has been suggested as an essential step before applying rigorous evaluation techniques such as the Sobol' and Fourier Amplitude Sensitivity Test (FAST) methods in the UA/SA framework. The method of elementary effects (EE) (Morris, 1991) is one of the most widely used screening methodologies. Some of the common parameter sampling strategies for EE, e.g. Optimized Trajectories [OT] (Campolongo et al., 2007) and Modified Optimized Trajectories [MOT] (Ruano et al., 2012), suffer from inconsistencies in the generated parameter distributions, infeasible sample generation time, etc. In this work, we have formulated a new parameter sampling strategy for parameter screening, Sampling for Uniformity (SU), based on the principles of uniformity of the generated parameter distributions and spread of the parameter sample. A rigorous multi-criteria evaluation (time, distribution, spread, and screening efficiency) of OT, MOT, and SU indicated that SU is superior to the other sampling strategies. Comparison of the EE-based parameter importance rankings with those of Sobol' helped to quantify the qualitative nature of the EE parameter screening approach, reinforcing the fact that one should use EE only to reduce the resource burden required by FAST/Sobol' analyses, not to replace them.
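The elementary-effects building block is simple to state: perturb one parameter at a time and record the scaled change in model output. A minimal sketch follows, with a toy linear model and invented numbers; real Morris screening builds randomized trajectories through the parameter space rather than the simple star design used here:

```python
import numpy as np

def elementary_effects(f, x0, delta=0.1):
    """One-at-a-time elementary effects of f at base point x0."""
    x0 = np.asarray(x0, float)
    f0 = f(x0)
    effects = []
    for i in range(x0.size):
        x = x0.copy()
        x[i] += delta
        effects.append((f(x) - f0) / delta)
    return np.array(effects)

def model(p):
    """Toy model: parameter 0 matters a lot, parameter 2 not at all."""
    return 10.0 * p[0] + 1.0 * p[1] + 0.0 * p[2]

rng = np.random.default_rng(0)
# Average |EE| over several random base points (the mu* screening statistic)
ee = np.array([elementary_effects(model, rng.uniform(0, 1, 3)) for _ in range(20)])
mu_star = np.abs(ee).mean(axis=0)
print(mu_star)   # roughly [10, 1, 0]: parameter 0 is the important one
```

Ranking parameters by mu* and discarding the tail is the screening step that shrinks the parameter set before a FAST or Sobol' analysis.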
Estrada, Julio J.S.; Matta, Luiz E.S.C. [Instituto de Radioprotecao e Dosimetria (IRD), Rio de Janeiro, RJ (Brazil); Alves, Rex N. [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil)
1997-10-01
This work intends to discuss important parameters to be considered during the construction of a controlled radon chamber. Based on a review of different chambers, it was noticed that characteristics such as size, shape, volume, and source activity depend on the chamber's applications. Parameters such as aerosol generation, humidity, temperature, and pressure inside the chamber are also discussed. A design for a multipurpose controlled radon chamber is suggested.
Hamimid, M. [Laboratoire de modelisation des systemes energetiques LMSE, Universite de Biskra, BP 145, 07000 Biskra (Algeria)]; Mimoune, S.M. [Laboratoire de modelisation des systemes energetiques LMSE, Universite de Biskra, BP 145, 07000 Biskra (Algeria)]; Feliachi, M. [IREENA-IUT, CRTT, 37 Boulevard de l'Universite, BP 406, 44602 Saint Nazaire Cedex (France)]
2012-07-01
In the present work, the minor hysteresis loop model based on parameter scaling of the modified Jiles-Atherton model is evaluated by using judicious expressions. These expressions give the minor hysteresis loop parameters as functions of the major hysteresis loop ones; they have exponential form and are obtained by parameter identification using the stochastic optimization method 'simulated annealing'. The three parameters that most influence the data fitting are the pinning parameter k, the mean field parameter α, and the parameter a, which characterizes the shape of the anhysteretic magnetization curve. To validate the model, calculated minor hysteresis loops are compared with measured ones, and good agreement is obtained.
MODELING OF FUEL SPRAY CHARACTERISTICS AND DIESEL COMBUSTION CHAMBER PARAMETERS
G. M. Kukharonak
2011-01-01
A computer model for coordinating fuel spray characteristics with diesel combustion chamber parameters has been created. The model makes it possible to observe fuel spray development in the diesel cylinder at any moment of injection, to calculate the characteristics of fuel sprays with due account of the shape and dimensions of the combustion chamber, and to change fuel injection characteristics, supercharging parameters, and the shape and dimensions of the combustion chamber in a timely manner. Moreover, the computer model permits determination of the parameters of the injector nozzle holes that provide the required fuel spray characteristics at the design stage of a diesel engine. Combustion chamber parameters for the 4ЧН11/12.5 diesel engine have been determined.
Mathematically Modeling Parameters Influencing Surface Roughness in CNC Milling
Engin Nas
2012-01-01
In this study, AISI 1050 steel is subjected to face milling on a CNC milling machine, and parameters influencing surface roughness, such as cutting speed, feed rate, cutting tip, and depth of cut, are investigated experimentally. Four different experiments are conducted with different parameter combinations. The cutting tools used are PVD-coated tools intended for forging steel and spheroidal graphite cast iron. Surface roughness values obtained with the specified parameters and cutting tools are measured, and the correlation between the measured surface roughness values and the parameters is modeled mathematically using a curve-fitting algorithm. The mathematical models are evaluated according to their coefficients of determination (R²), and the most suitable model for each experiment is identified and suggested for theoretical work.
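The curve-fitting-plus-R² workflow the abstract describes can be sketched generically; the feed-rate and roughness numbers below are invented, not the paper's measurements:

```python
import numpy as np

def fit_and_r2(x, y, degree=2):
    """Least-squares polynomial fit plus the coefficient of determination R^2."""
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    ss_res = np.sum((y - y_hat) ** 2)    # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot

# Hypothetical roughness (um) versus feed rate (mm/rev), not measured data
feed = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
ra = np.array([0.61, 0.80, 1.12, 1.58, 2.20])
coeffs, r2 = fit_and_r2(feed, ra)
print(r2)   # close to 1 for a well-chosen model form
```

Comparing R² across candidate model forms, as the study does across experiments, is what identifies the most suitable mathematical model.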
Regionalization parameters of conceptual rainfall-runoff model
Osuch, M.
2003-04-01
The main goal of this study was to develop techniques for the a priori estimation of hydrological model parameters. The conceptual hydrological model CLIRUN was applied to around 50 catchments in Poland, ranging in size from 1 000 to 100 000 km2. The model was calibrated for a number of gauged catchments with different characteristics, and the model parameters were then related to various climatic and physical catchment characteristics (topography, land use, vegetation, and soil type). The relationships were tested by comparing observed and simulated runoff series from gauged catchments that were not used in the calibration. Model performance using regional parameters was promising for most of the calibration and validation catchments.
Weibull Parameters Estimation Based on Physics of Failure Model
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm, and Miner's rule. A threshold model is used … distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum likelihood and least squares estimation techniques are used to estimate the fatigue life distribution parameters…
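A least-squares estimate of Weibull parameters, one of the two techniques mentioned, can be obtained by median-rank regression on the linearized CDF. The synthetic failure data below stand in for the damage-model output:

```python
import numpy as np

def weibull_lsq(failure_cycles):
    """Least-squares (median-rank) estimate of Weibull shape beta and scale eta.

    Linearizes F(t) = 1 - exp(-(t/eta)^beta) to
    ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta).
    """
    t = np.sort(np.asarray(failure_cycles, float))
    n = t.size
    ranks = np.arange(1, n + 1)
    F = (ranks - 0.3) / (n + 0.4)            # Bernard's median-rank approximation
    x = np.log(t)
    y = np.log(-np.log(1.0 - F))
    beta, intercept = np.polyfit(x, y, 1)    # slope is the shape parameter
    eta = np.exp(-intercept / beta)
    return beta, eta

rng = np.random.default_rng(0)
samples = 1000.0 * rng.weibull(2.5, size=200)   # synthetic lives: eta=1000, beta=2.5
print(weibull_lsq(samples))
```

A maximum likelihood fit of the same data would numerically maximize the Weibull log-likelihood instead; the two estimates should agree closely for samples of this size.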
Identification of parameters in nonlinear geotechnical models using the extended Kalman filter
Nestorović Tamara
2014-01-01
Direct measurement of relevant system parameters often represents a problem due to different limitations. In geomechanics, measurement of the geotechnical material constants which constitute a material model is usually a very difficult task, even with modern test equipment. Back-analysis has proved to be a more efficient and more economic method for identifying material constants, because it needs only measurement data such as settlements, pore pressures, etc., which are directly measurable, as inputs. Among many model parameter identification methods, the Kalman filter method has been applied very effectively in recent years. In this paper, the extended Kalman filter local-iteration procedure, incorporated with finite element analysis (FEA) software, has been implemented. In order to prove the efficiency of the method, parameter identification has been performed for a nonlinear geotechnical model.
MODELING PARAMETERS OF ARC OF ELECTRIC ARC FURNACE
R.N. Khrestin
2015-08-01
Purpose. The aim is to build a mathematical model of the electric arc in an electric arc furnace (EAF). The model should clearly show the relationships between the main parameters of the arc, which determine its properties and the possibility of optimizing the melting mode. Methodology. We have built a fairly simple model of the arc that satisfies the above requirements. The model is designed for the analysis of electromagnetic processes in arcs of varying length, and the results obtained when testing the model were compared with results obtained on actual furnaces. Results. During melting in a real EAF, the arc plasma changes its properties under the influence of temperature changes; the proposed model takes these changes into account. Adjusting the arc length, which is controlled by the movement of the electrode drive, is the main way to regulate the melting mode. The model reflects the dynamic changes in the arc parameters when the arc length changes. We obtained the dynamic current-voltage characteristics (CVC) of the arc for the different stages of melting, as well as the arc voltage waveform, and identified criteria by which the stage of melting can be determined. Originality. In contrast to previously known models, this model clearly shows the relationship between the main parameters of the EAF arc: the arc voltage Ud, the arc current id, and the arc length d. Comparison of the simulation results and experimental data obtained from a real furnace showed the adequacy of the constructed model. It was found that the character of the change of the magnitude Md helps determine the stage of melting. Practical value. The model can be used to simulate melting in an EAF of any capacity. Thus, when designing the control system for the electrode-moving mechanism, the model takes into account changes in the arc parameters, which can significantly reduce electrode material consumption and energy consumption.
Environmental Transport Input Parameters for the Biosphere Model
M. Wasiolek
2004-09-10
This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the "Technical Work Plan for Biosphere Modeling and Expert Support" (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters. The output of this report is used as direct input in the "Nominal Performance Biosphere Dose Conversion Factor Analysis" and in the "Disruptive Event Biosphere Dose Conversion Factor Analysis" that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash).
Inhalation Exposure Input Parameters for the Biosphere Model
K. Rautenstrauch
2004-09-10
This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. "Inhalation Exposure Input Parameters for the Biosphere Model" is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the "Technical Work Plan for Biosphere Modeling and Expert Support" (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.
Environmental Transport Input Parameters for the Biosphere Model
M. A. Wasiolek
2003-06-27
This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the "Technical Work Plan for Biosphere Modeling and Expert Support" (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develop input parameter values for the biosphere model. The "Biosphere Model Report" (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699]).
Towards Increased Relevance: Context-Adapted Models of the Learning Organization
Örtenblad, Anders
2015-01-01
Purpose: The purposes of this paper are to take a closer look at the relevance of the idea of the learning organization for organizations in different generalized organizational contexts; to open up for the existence of multiple, context-adapted models of the learning organization; and to suggest a number of such models.…
Thomas E. Dilts; Peter J. Weisberg; Camie M. Dencker; Jeanne C. Chambers
2015-01-01
We have three goals. (1) To develop a suite of functionally relevant climate variables for modelling vegetation distribution on arid and semi-arid landscapes of the Great Basin, USA. (2) To compare the predictive power of vegetation distribution models based on mechanistically proximate factors (water deficit variables) and factors that are more mechanistically removed...
Control-relevant modeling and simulation of a SOFC-GT hybrid system
Rambabu Kandepu
2006-07-01
In this paper, control-relevant models of the most important components in a SOFC-GT hybrid system are described. Dynamic simulations are performed on the overall hybrid system. The model is used to develop a simple control structure, but the simulations show that more elaborate control is needed.
Construction of constant-Q viscoelastic model with three parameters
SUN Cheng-yu; YIN Xing-yao
2007-01-01
The popularly used viscoelastic models have shortcomings in describing the relationship between the quality factor (Q) and frequency, and are not consistent with observational data. Based on the theory of viscoelasticity, a new approach is developed to construct a constant-Q viscoelastic model in a given frequency band with three parameters. The designed model captures the frequency independence of the quality factor very well, so the effect of viscoelasticity on the seismic wave field can be studied relatively accurately in theory. Furthermore, the model requires fewer parameters than other constant-Q models, which simplifies the solution of viscoelastic problems to some extent. Finally, the accuracy and range of application are analyzed through numerical tests, and the effect of viscoelasticity on wave propagation is briefly illustrated through changes in the frequency spectra and waveforms of several different viscoelastic models.
Global-scale regionalization of hydrologic model parameters
Beck, Hylke E.; van Dijk, Albert I. J. M.; de Roo, Ad; Miralles, Diego G.; McVicar, Tim R.; Schellekens, Jaap; Bruijnzeel, L. Adrian
2016-05-01
Current state-of-the-art models typically applied at continental to global scales (hereafter called macroscale) tend to use a priori parameters, resulting in suboptimal streamflow (Q) simulation. For the first time, a scheme for regionalization of model parameters at the global scale was developed. We used data from a diverse set of 1787 small-to-medium sized catchments (10-10,000 km2) and the simple conceptual HBV model to set up and test the scheme. Each catchment was calibrated against observed daily Q, after which 674 catchments with high calibration and validation scores, and thus presumably good-quality observed Q and forcing data, were selected to serve as donor catchments. The calibrated parameter sets for the donors were subsequently transferred to 0.5° grid cells with similar climatic and physiographic characteristics, resulting in parameter maps for HBV with global coverage. For each grid cell, we used the 10 most similar donor catchments, rather than the single most similar donor, and averaged the resulting simulated Q, which enhanced model performance. The 1113 catchments not used as donors were used to independently evaluate the scheme. The regionalized parameters outperformed spatially uniform (i.e., averaged calibrated) parameters for 79% of the evaluation catchments. Substantial improvements were evident for all major Köppen-Geiger climate types and even for evaluation catchments > 5000 km distant from the donors. The median improvement was about half of the performance increase achieved through calibration. HBV with regionalized parameters outperformed nine state-of-the-art macroscale models, suggesting these might also benefit from the new regionalization scheme. The produced HBV parameter maps including ancillary data are available via www.gloh2o.org.
Bayesian parameter estimation for nonlinear modelling of biological pathways
Ghasemi Omid
2011-12-01
Background: The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred format for representing the reaction rate in differential equation frameworks, due to their simple structure and easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of this high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. Results: We used the Runge-Kutta method to transform differential equations to difference equations, assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal-to-noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly...
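The MCMC approach sketched in this abstract can be illustrated with a minimal random-walk Metropolis sampler for a single Hill-equation parameter. All data, parameter values, and noise levels below are synthetic illustrations, not taken from the study:

```python
import math
import random

random.seed(1)

def hill(x, vmax, k, n):
    """Hill-type reaction rate: vmax * x^n / (k^n + x^n)."""
    return vmax * x**n / (k**n + x**n)

# Synthetic "measurements" from known parameters (vmax=1, k=2, n=2).
xs = [0.5 * i for i in range(1, 17)]
data = [hill(x, 1.0, 2.0, 2.0) + random.gauss(0, 0.02) for x in xs]

def log_likelihood(k, sigma=0.02):
    """Gaussian log-likelihood of the data given a candidate k."""
    return sum(-(y - hill(x, 1.0, k, 2.0))**2 / (2 * sigma**2)
               for x, y in zip(xs, data))

# Random-walk Metropolis sampling of k, starting away from the truth.
k, ll = 1.0, log_likelihood(1.0)
samples = []
for _ in range(5000):
    k_new = k + random.gauss(0, 0.1)
    if k_new > 0:
        ll_new = log_likelihood(k_new)
        # accept with probability min(1, exp(ll_new - ll))
        if random.random() < math.exp(min(0.0, ll_new - ll)):
            k, ll = k_new, ll_new
    samples.append(k)

burned = samples[1000:]          # discard burn-in
k_hat = sum(burned) / len(burned)
print(round(k_hat, 2))
```

The posterior mean recovers the true value of k used to generate the data; in practice the paper samples several parameters jointly and uses the simulated ODE trajectory rather than a closed-form rate.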
Mirror symmetry for two-parameter models, 1
Candelas, Philip; Font, A; Katz, S; Morrison, Douglas Robert Ogston; Candelas, Philip; Ossa, Xenia de la; Font, Anamaria; Katz, Sheldon; Morrison, David R.
1994-01-01
We study, by means of mirror symmetry, the quantum geometry of the K\\"ahler-class parameters of a number of Calabi-Yau manifolds that have $b_{11}=2$. Our main interest lies in the structure of the moduli space and in the loci corresponding to singular models. This structure is considerably richer when there are two parameters than in the various one-parameter models that have been studied hitherto. We describe the intrinsic structure of the point in the (compactification of the) moduli space that corresponds to the large complex structure or classical limit. The instanton expansions are of interest owing to the fact that some of the instantons belong to families with continuous parameters. We compute the Yukawa couplings and their expansions in terms of instantons of genus zero. By making use of recent results of Bershadsky et al. we compute also the instanton numbers for instantons of genus one. For particular values of the parameters the models become birational to certain models with one parameter. The co...
Do Lumped-Parameter Models Provide the Correct Geometrical Damping?
Andersen, Lars
2007-01-01
This paper concerns the formulation of lumped-parameter models for rigid footings on homogenous or stratified soil with focus on the horizontal sliding and rocking. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines...
Muscle parameters for musculoskeletal modelling of the human neck
Borst, J.; Forbes, P.A.; Happee, R.; Veeger, H.E.J.
2011-01-01
Background: To study normal or pathological neuromuscular control, a musculoskeletal model of the neck has great potential but a complete and consistent anatomical dataset which comprises the muscle geometry parameters to construct such a model is not yet available. Methods: A dissection experiment
Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies
Smith, Carrie E.; Cribbie, Robert A.
2013-01-01
When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…
Geometry parameters for musculoskeletal modelling of the shoulder system
Van der Helm, F C; Veeger, DirkJan (H. E. J.); Pronk, G M; Van der Woude, L H; Rozendal, R H
1992-01-01
A dynamical finite-element model of the shoulder mechanism consisting of thorax, clavicula, scapula and humerus is outlined. The parameters needed for the model are obtained in a cadaver experiment consisting of both shoulders of seven cadavers. In this paper, in particular, the derivation of geomet
Precise correction to parameter ρ in the littlest Higgs model
Farshid Tabbak; F.Farnoudi
2008-01-01
In this paper, the tree-level violation of the weak isospin parameter ρ in the framework of the littlest Higgs model is studied. The potentially large deviation from the standard model prediction for ρ in terms of the littlest Higgs model parameters is calculated. The maximum value of ρ for f = 1 TeV, c = 0.05, c' = 0.05, and v' = 1.5 GeV is ρ = 1.2973, which represents a large enhancement over the SM prediction.
Comparative Analysis of Visco-elastic Models with Variable Parameters
Silviu Nastac
2010-01-01
The paper presents a theoretical comparative study of the computational behaviour of vibration isolation elements based on viscous and elastic models with variable parameters. Changes in the elastic and viscous parameters can be produced by natural degradation over time or by heating developed in the elements during their working cycle. Both linear and non-linear numerical viscous and elastic models, and their combinations, were considered. The results show the importance of tuning the numerical model to the real behaviour, particularly the linearity of the characteristics and the essential parameters for damping and rigidity. Multiple comparisons between linear and non-linear simulation cases form the basis of numerical model optimization with respect to mathematical complexity vs. results reliability.
Improvement of Continuous Hydrologic Models and HMS SMA Parameters Reduction
Rezaeian Zadeh, Mehdi; Zia Hosseinipour, E.; Abghari, Hirad; Nikian, Ashkan; Shaeri Karimi, Sara; Moradzadeh Azar, Foad
2010-05-01
Hydrological models can help us predict stream flows and associated runoff volumes of rainfall events within a watershed. There are many different reasons why we need to model the rainfall-runoff processes of a watershed. However, the main reason is the limitation of hydrological measurement techniques and the costs of data collection at a fine scale. Generally, we are not able to measure everything we would like to know about a given hydrological system. This is particularly the case for ungauged catchments. Since the ultimate aim of prediction using models is to improve decision-making about a hydrological problem, having a robust and efficient modeling tool becomes an important factor. Among several hydrologic modeling approaches, continuous simulation provides the best predictions because it can model dry and wet conditions over a long-term period. Continuous hydrologic models, unlike event-based models, account for a watershed's soil moisture balance over a long-term period and are suitable for simulating daily, monthly, and seasonal streamflows. In this paper, we describe a soil moisture accounting (SMA) algorithm added to the Hydrologic Modeling System (HEC-HMS) computer program. As is well known in the hydrologic modeling community, one way to improve a model's utility is to reduce the number of input parameters. The enhanced model developed in this study is applied to the Khosrow Shirin Watershed, located in the north-west part of Fars Province in Iran, a data-limited watershed. The HMS SMA algorithm divides the potential path of rainfall onto a watershed into five zones. The results showed that the output of HMS SMA is insensitive to the variation of many parameters, such as soil storage and soil percolation rate. The study's objective is to remove insensitive parameters from the model input using multi-objective sensitivity analysis. Keywords: Continuous Hydrologic Modeling, HMS SMA, Multi-objective sensitivity analysis, SMA Parameters
A software for parameter estimation in dynamic models
M. Yuceer
2008-12-01
A common problem in dynamic systems is determining the parameters of an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software lacks generality, while other software does not provide ease of use. A user-interactive parameter estimation software was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software (PARES) has been developed in the MATLAB environment. When tested on extensive example problems from the literature, the suggested approach is proven to provide good agreement between predicted and observed data within relatively little computing time and few iterations.
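A minimal stand-in for this kind of integration-based parameter estimation, assuming a simple first-order decay model and a coarse grid search in place of PARES's optimizer (model, data, and search scheme are all illustrative):

```python
import math

# "Measured" concentrations from a first-order decay y' = -k*y with k = 0.5.
times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
measured = [math.exp(-0.5 * t) for t in times]

def simulate(k, dt=0.01):
    """Integrate y' = -k*y with explicit Euler, sampling at measurement times."""
    y, t, out, idx = 1.0, 0.0, [], 0
    while idx < len(times):
        if t >= times[idx] - 1e-9:
            out.append(y)
            idx += 1
        y += dt * (-k * y)   # Euler step of the ODE
        t += dt
    return out

def sse(k):
    """Sum of squared errors between simulation and measurements."""
    return sum((m - s) ** 2 for m, s in zip(measured, simulate(k)))

# Grid search over candidate rate constants: each evaluation integrates the
# model, which is the "integration-based" part of the approach.
best_k = min((i * 0.01 for i in range(1, 201)), key=sse)
print(round(best_k, 2))
```

The recovered rate constant matches the value used to generate the data; a real tool would use a proper optimizer (e.g., Levenberg-Marquardt) instead of a grid.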
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development, so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping...
Condition Parameter Modeling for Anomaly Detection in Wind Turbines
Yonglong Yan
2014-05-01
Data collected from the supervisory control and data acquisition (SCADA) system, used widely in wind farms to obtain operational and condition information about wind turbines (WTs), is of important significance for anomaly detection in wind turbines. The paper presents a novel model for wind turbine anomaly detection based mainly on SCADA data and a back-propagation neural network (BPNN) for automatic selection of the condition parameters. The SCADA data sets are determined through analysis of the cumulative probability distribution of wind speed and the relationship between output power and wind speed. The automatic BPNN-based parameter selection reduces redundant parameters for anomaly detection in wind turbines. Through investigation of cases of WT faults, the validity of the automatic parameter-selection-based model for WT anomaly detection is verified.
Parameter Estimation of Photovoltaic Models via Cuckoo Search
Jieming Ma
2013-01-01
Since conventional methods are incapable of estimating the parameters of photovoltaic (PV) models with high accuracy, bio-inspired algorithms have attracted significant attention in the last decade. Cuckoo Search (CS) is inspired by the brood parasitism of some cuckoo species in combination with Lévy flight behavior. In this paper, a CS-based parameter estimation method is proposed to extract the parameters of single-diode models for commercial PV generators. Simulation results and experimental data show that the CS algorithm is capable of obtaining all the parameters with extremely high accuracy, indicated by a low root-mean-squared error (RMSE) value. The proposed method outperforms the other algorithms applied in this study.
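A simplified Cuckoo Search sketch: each nest performs a greedy Lévy-flight walk, and the worst fraction of nests is abandoned each generation. The full algorithm also biases Lévy steps toward the current best nest, and the quadratic objective here merely stands in for the RMSE between measured and modelled I-V points:

```python
import math
import random

random.seed(7)

def levy_step(beta=1.5):
    """Mantegna's algorithm for a heavy-tailed, Levy-distributed step."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def objective(x):
    # Stand-in cost with optimum at (1.0, 2.0); in the paper this would be
    # the RMSE of the single-diode model against measured I-V data.
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

n_nests, pa = 15, 0.25
nests = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(n_nests)]
for _ in range(200):
    for i, nest in enumerate(nests):
        # propose a Levy-flight move; keep it only if it improves the nest
        cand = [c + 0.1 * levy_step() for c in nest]
        if objective(cand) < objective(nest):
            nests[i] = cand
    # abandon the worst fraction pa of nests, as in standard Cuckoo Search
    nests.sort(key=objective)
    for i in range(int(n_nests * (1 - pa)), n_nests):
        nests[i] = [random.uniform(-5, 5), random.uniform(-5, 5)]

best = min(nests, key=objective)
print([round(v, 2) for v in best])
```

The occasional large Lévy jumps are what distinguish CS from a plain Gaussian random search: they help nests escape poor regions without a separate restart mechanism.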
Li, Jingfeng; Hwang, Steven W; Shi, Zhicai; Yan, Ning; Yang, Changwei; Wang, Chuanfeng; Zhu, Xiaodong; Hou, Tiesheng; Li, Ming
2011-09-15
A retrospective radiographic study. To investigate which preoperative radiographic parameters best correlate with the angulation and translation of the lowest instrumented vertebra (LIV) and global coronal balance after posterior spinal pedicle screw fixation for thoracolumbar/lumbar (TL/L) adolescent idiopathic scoliosis. Lenke 5C patients with a single, structural TL/L curve can be treated by either an anterior or posterior approach. One of the operative goals when treating Lenke 5C patients is to level and center the LIV, thereby achieving a better global coronal balance. To our knowledge, no study has investigated which specific radiographic parameters correlate with these surgical outcomes after posterior pedicle screw fixation. Twenty-seven patients with TL/L adolescent idiopathic scoliosis were identified in this study, and they underwent posterior fixation and fusion by pedicle screws with a minimum 2-year follow-up. Preoperative and postoperative radiographs were reviewed measuring various radiographic parameters as well as specific measurements related to the LIV. Correlation of these parameters to LIV translation and global and regional coronal balance (C7-central sacral vertical line [CSVL], LIV-CSVL distance) were then evaluated. Four patients demonstrated global coronal imbalance postoperatively by radiographic and clinical evaluation. Regression analysis identified three radiographic parameters that correlated significantly with the postoperative global coronal balance (C7-CSVL): preoperative C7-CSVL (r = 0.44, P = 0.023), preoperative LIV tilt (r = 0.60, P = 0.001), and postoperative LIV tilt (r = 0.65, P = 0.0002). The radiographic parameters that correlated with postoperative LIV-CSVL were: preoperative LIV-CSVL (r = 0.57, P = 0.017), preoperative LIV tilt (r = 0.40, P = 0.04), and postoperative LIV tilt (r = 0.46, P = 0.015). The radiographic parameters correlating to LIV translation were preoperative LIV-CSVL (r = 0.88, P balance. In patients
Parameter Estimation for Single Diode Models of Photovoltaic Modules
Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.
2015-03-01
Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions, together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
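The single diode equation is implicit in the current, so each I-V point must be solved iteratively before any fitting can happen. A sketch with assumed, representative parameter values (not those estimated in the report), using fixed-point iteration and bisection:

```python
import math

# Assumed, representative module parameters (illustrative only):
IL, I0 = 5.0, 1e-9        # photocurrent and diode saturation current [A]
RS, RSH = 0.2, 300.0      # series and shunt resistance [ohm]
N_VTH = 1.3 * 0.025 * 60  # ideality factor * thermal voltage * cells in series

def current(v, iters=100):
    """Solve I = IL - I0*(exp((V + I*Rs)/(n*Vth)) - 1) - (V + I*Rs)/Rsh
    for I at a given voltage V, by fixed-point iteration."""
    i = IL
    for _ in range(iters):
        i = IL - I0 * math.expm1((v + i * RS) / N_VTH) - (v + i * RS) / RSH
    return i

isc = current(0.0)  # short-circuit current: the I-V curve at V = 0

# Open-circuit voltage: bisect current(v) = 0 on a bracket where the
# fixed-point iteration still converges.
lo, hi = 0.0, 45.0
for _ in range(60):
    mid = (lo + hi) / 2
    if current(mid) > 0:
        lo = mid
    else:
        hi = mid
voc = (lo + hi) / 2
print(round(isc, 2), round(voc, 1))
```

Parameter estimation then wraps this forward model in an optimizer that adjusts (IL, I0, Rs, Rsh, n) to minimize the misfit against every recorded I-V curve, not just the three standard-test points.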
Automatic Determination of the Conic Coronal Mass Ejection Model Parameters
Pulkkinen, A.; Oates, T.; Taktakishvili, A.
2009-01-01
Characterization of the three-dimensional structure of solar transients using incomplete plane of sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of a conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.
Bulava, John; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Gerhold, Philip; Kallarackal, Jim; Nagy, Attila [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Univ. Berlin (Germany)
2011-12-15
We study a chirally invariant Higgs-Yukawa model regulated on a space-time lattice. We calculate Higgs boson resonance parameters and mass bounds for various values of the mass of the degenerate fermion doublet. Also, first results on the phase transition temperature are presented. In general, this model may be relevant for BSM scenarios with a heavy fourth generation of quarks. (orig.)
Parameter Identification Model for Accelerometer
刘畅; 黄玉清
2014-01-01
According to the needs of accelerometer parameter model identification, accelerometer zero-bias data were collected and their characteristics analyzed. Combining three system identification methods (least squares, recursive least squares, and maximum likelihood estimation), mathematical models of the zero-bias data were established. Simulations based on the models were used to observe the trend of the parameter curves, and comparing the model parameters before and after identification demonstrates the reliability of the models. This provides a reference method for assessing how accelerometer parameters change over time.
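The recursive least squares step named in the abstract can be sketched for a linear zero-bias drift model. The drift model, noise level, and coefficients below are simulated illustrations, not the paper's data:

```python
import random

random.seed(3)

# Simulated accelerometer zero-bias samples drifting linearly in time:
# bias(t) = 0.02 + 0.001 * t, plus measurement noise.
data = [(t, 0.02 + 0.001 * t + random.gauss(0, 0.0005)) for t in range(200)]

# Recursive least squares for theta = [a0, a1] in bias = a0 + a1 * t.
theta = [0.0, 0.0]
P = [[1e6, 0.0], [0.0, 1e6]]           # large initial covariance
for t, y in data:
    phi = [1.0, float(t)]              # regressor vector
    # gain K = P*phi / (1 + phi' * P * phi)
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]
    # update estimate with the prediction error
    err = y - (theta[0] * phi[0] + theta[1] * phi[1])
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    # covariance rank-1 update: P = P - K * (P*phi)'
    P = [[P[0][0] - K[0] * Pphi[0], P[0][1] - K[0] * Pphi[1]],
         [P[1][0] - K[1] * Pphi[0], P[1][1] - K[1] * Pphi[1]]]

print(round(theta[0], 3), round(theta[1], 4))
```

Unlike batch least squares, each new sample updates the estimate in O(1), which is why RLS suits online monitoring of a slowly drifting bias.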
Spatial extrapolation of light use efficiency model parameters to predict gross primary production
Karsten Schulz
2011-12-01
To capture the spatial and temporal variability of gross primary production as a key component of the global carbon cycle, the light use efficiency modeling approach in combination with remote sensing data has been shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land-class-dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the model parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained from meteorological measurements, ecological estimates and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation, and leads to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where parameter calibration is not possible due to the absence of eddy covariance flux measurement data. This will finally allow a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to recapturing the variability of gross primary production across the study sites.
Failure analysis of parameter-induced simulation crashes in climate models
D. D. Lucas
2013-01-01
Simulations using IPCC-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We apply support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicts model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures are determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations are the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-01-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is...
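Simulated Annealing itself can be sketched on a small multimodal surrogate objective. The true ETAS negative log-likelihood is replaced here by an illustrative one-dimensional function, and all tuning constants (temperature, cooling rate, step size) are arbitrary choices:

```python
import math
import random

random.seed(5)

def cost(x):
    # Illustrative multimodal surrogate for a negative log-likelihood; the
    # real ETAS objective is the negative log-likelihood of the earthquake
    # catalog under the branching-process intensity.
    return (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)

x = random.uniform(-10, 10)
best_x, best_c = x, cost(x)
T = 5.0
while T > 1e-3:
    cand = x + random.gauss(0, 1.0)
    dc = cost(cand) - cost(x)
    # accept downhill moves always, uphill moves with Boltzmann probability
    if dc < 0 or random.random() < math.exp(-dc / T):
        x = cand
        if cost(x) < best_c:
            best_x, best_c = x, cost(x)
    T *= 0.999  # geometric cooling schedule

print(round(best_x, 1))
```

The uphill-acceptance term is what lets the search escape local minima of the likelihood surface, which is the property the paper relies on for small catalogs where the ETAS likelihood is rugged.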
CADLIVE optimizer: web-based parameter estimation for dynamic models
Inoue Kentaro
2012-08-01
Computer simulation has been an important technique for capturing the dynamics of biochemical networks. In most networks, however, few kinetic parameters have been measured in vivo because of experimental complexity. We developed a kinetic parameter estimation system, named the CADLIVE Optimizer, which comprises genetic-algorithm-based solvers with a graphical user interface. This optimizer is integrated into the CADLIVE Dynamic Simulator to attain efficient simulation for dynamic models.
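A genetic-algorithm solver of the kind the CADLIVE Optimizer wraps can be sketched on a toy saturating-kinetics model. Everything below (model, operators, population sizes) is an illustrative assumption, not CADLIVE's actual implementation:

```python
import random

random.seed(11)

# Toy kinetic model y = vmax*s/(km + s); "measured" data use vmax=2, km=0.5.
svals = [0.1 * i for i in range(1, 21)]
data = [2.0 * s / (0.5 + s) for s in svals]

def sse(ind):
    """Fitness: sum of squared errors of the candidate (vmax, km)."""
    vmax, km = ind
    return sum((y - vmax * s / (km + s)) ** 2 for s, y in zip(svals, data))

# Initial random population of (vmax, km) pairs.
pop = [[random.uniform(0, 5), random.uniform(0.01, 5)] for _ in range(40)]
for _ in range(150):
    pop.sort(key=sse)
    parents = pop[:20]                      # truncation selection (elitist)
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        w = random.random()                 # blend crossover
        child = [w * x + (1 - w) * y for x, y in zip(a, b)]
        if random.random() < 0.3:           # Gaussian mutation on one gene
            j = random.randrange(2)
            child[j] = max(0.01, child[j] + random.gauss(0, 0.1))
        children.append(child)
    pop = parents + children

best = min(pop, key=sse)
print([round(v, 2) for v in best])
```

Because only fitness evaluations are needed, the same loop works whether the model is a closed-form rate law, as here, or a full ODE simulation as in CADLIVE.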
Andreas Eisele; Sabine Chabrillat; I. Lau; Kobayashi, C.; B. Wheaton; Carter, D.; Kashimura, O.; Kato, M.; Ong, C.; R. Hewson; Cudahy, T.; Hermann Kaufmann
2011-01-01
With the focus on newly available hyperspectral imaging sensors sensitive within the thermal infrared (TIR) wavelength region, this study tests the ability of the TIR to derive soil-erosion-relevant parameters (e.g. texture, organic carbon content) from soil spectral measurements, with respect to commonly used VNIR-SWIR spectrometers. Therefore, a study site was chosen within an agricultural area in Western Australia, which is suffering from soil loss through wind erosion proce...
Reference physiological parameters for pharmacodynamic modeling of liver cancer
Travis, C.C.; Arms, A.D.
1988-01-01
This document presents a compilation of measured values for physiological parameters used in pharmacodynamic modeling of liver cancer. The physiological parameters include body weight, liver weight, the liver weight/body weight ratio, and number of hepatocytes. Reference values for use in risk assessment are given for each of the physiological parameters based on analyses of valid measurements taken from the literature and other reliable sources. The proposed reference values for rodents include sex-specific measurements for B6C3F{sub 1} mice and Fischer 344/N, Sprague-Dawley, and Wistar rats. Reference values are also provided for humans. 102 refs., 65 tabs.
Uncertainty of Modal Parameters Estimated by ARMA Models
Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders
1990-01-01
In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the parameters...... by a simulation study of a lightly damped single-degree-of-freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore...
X-Parameter Based Modelling of Polar Modulated Power Amplifiers
Wang, Yelin; Nielsen, Troels Studsgaard; Sira, Daniel
2013-01-01
X-parameters are developed as an extension of S-parameters capable of modelling non-linear devices driven by large signals. They are suitable for devices having only radio frequency (RF) and DC ports. In a polar power amplifier (PA), phase and envelope of the input modulated signal are applied...... at separate ports and the envelope port is neither an RF nor a DC port. As a result, X-parameters may fail to characterise the effect of the envelope port excitation and consequently the polar PA. This study introduces a solution to the problem for a commercial polar PA. In this solution, the RF-phase path...
A Bayesian framework for parameter estimation in dynamical models.
Flávio Codeço Coelho
Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results into agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful use of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
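The calibration workflow sketched in this abstract can be illustrated with a deliberately minimal example (this is not the authors' framework): a forward-Euler SIR model whose transmission rate `beta` is recovered from synthetic incidence data by maximizing a Gaussian log-likelihood over a grid, i.e. the posterior mode under a flat prior. All numerical choices here (`gamma`, `sigma`, the grid, the step count) are illustrative assumptions.

```python
def sir_incidence(beta, gamma=0.5, s0=0.99, i0=0.01, steps=20, dt=1.0):
    """Forward-Euler SIR model; returns new infections per step (incidence)."""
    s, i = s0, i0
    incidence = []
    for _ in range(steps):
        new_inf = beta * s * i * dt
        s -= new_inf
        i += new_inf - gamma * i * dt
        incidence.append(new_inf)
    return incidence

def log_likelihood(beta, data, sigma=0.005):
    """Gaussian log-likelihood of observed incidence given beta (up to a constant)."""
    model = sir_incidence(beta)
    return -sum((m - d) ** 2 for m, d in zip(model, data)) / (2 * sigma ** 2)

def posterior_mode(data, betas):
    """Grid approximation to the posterior mode under a flat prior on beta."""
    return max(betas, key=lambda b: log_likelihood(b, data))

# Synthetic "observations" generated with a known beta, then recovered.
true_beta = 1.5
observed = sir_incidence(true_beta)
grid = [0.5 + 0.01 * k for k in range(200)]  # beta in [0.5, 2.5)
beta_hat = posterior_mode(observed, grid)
```

A full Bayesian treatment would replace the grid with MCMC sampling and report a posterior distribution rather than a point estimate; the likelihood and forward model play the same roles.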
Modelling of Water Turbidity Parameters in a Water Treatment Plant
A. S. KOVO
2005-01-01
The high cost of chemical analysis of water has necessitated research into alternative methods of determining potable water quality. This paper is aimed at modelling the turbidity value as a water quality parameter. Mathematical models for turbidity removal were developed based on the relationships between water turbidity and other water criteria. Results showed that the turbidity of water is the cumulative effect of the individual parameters/factors affecting the system. A model equation for the evaluation and prediction of a clarifier's performance was developed: T = T0(-1.36729 + 0.037101∙10λpH + 0.048928∙t + 0.00741387∙alk). The developed model will aid the predictive assessment of water treatment plant performance. The limitations of the model result from the insufficient number of variables considered during its conceptualization.
Simultaneous estimation of parameters in the bivariate Emax model.
Magnusdottir, Bergrun T; Nyquist, Hans
2015-12-10
In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.
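As a hedged illustration of the Emax family discussed above (single-response only, not the authors' bivariate system estimator), the sketch below fits EC50 by least squares on synthetic, noiseless dose-response data; E0 and Emax are assumed known for simplicity, and all numbers are made up.

```python
def emax(conc, e0, emax_, ec50):
    """Classic Emax dose-response relation: effect = E0 + Emax * C / (EC50 + C)."""
    return e0 + emax_ * conc / (ec50 + conc)

def fit_ec50(doses, effects, e0, emax_, candidates):
    """Equation-by-equation least squares: pick the EC50 minimizing the SSE."""
    def sse(ec50):
        return sum((emax(d, e0, emax_, ec50) - y) ** 2 for d, y in zip(doses, effects))
    return min(candidates, key=sse)

# Noiseless synthetic data generated with EC50 = 4.0, then recovered.
doses = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
effects = [emax(d, e0=10.0, emax_=50.0, ec50=4.0) for d in doses]
ec50_hat = fit_ec50(doses, effects, 10.0, 50.0, [0.5 * k for k in range(1, 40)])
```

The system estimation approach of the paper would instead fit both responses and their covariance jointly; this toy only fixes the shape of one Emax relation.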
Boudghene Stambouli, Ahmed; Zendagui, Djawad; Bard, Pierre-Yves; Derras, Boumédiène
2017-07-01
Most modern seismic codes account for site effects using an amplification factor (AF) that modifies the rock acceleration response spectra in relation to a "site condition proxy," i.e., a parameter related to the velocity profile at the site under consideration. Therefore, for practical purposes, it is interesting to identify the site parameters that best control the frequency-dependent shape of the AF. The goal of the present study is to provide a quantitative assessment of the performance of various site condition proxies to predict the main AF features, including the often used short- and mid-period amplification factors, Fa and Fv, proposed by Borcherdt (in Earthq Spectra 10:617-653, 1994). In this context, the linear, viscoelastic responses of a set of 858 actual soil columns from Japan, the USA, and Europe are computed for a set of 14 real accelerograms with varying frequency contents. The correlation between the corresponding site-specific average amplification factors and several site proxies (considered alone or as multiple combinations) is analyzed using the generalized regression neural network (GRNN). The performance of each site proxy combination is assessed through the variance reduction with respect to the initial amplification factor variability of the 858 profiles. Both the whole period range and specific short- and mid-period ranges associated with the Borcherdt factors Fa and Fv are considered. The actual amplification factor of an arbitrary soil profile is found to be satisfactorily approximated with a limited number of site proxies (4-6). As the usual code practice implies a lower number of site proxies (generally one, sometimes two), a sensitivity analysis is conducted to identify the "best performing" site parameters. The best one is the overall velocity contrast between underlying bedrock and minimum velocity in the soil column. Because these are the most difficult and expensive parameters to measure, especially for thick deposits, other
Shape parameter estimate for a glottal model without time position
Degottex, Gilles; Roebel, Axel; Rodet, Xavier
2009-01-01
cote interne IRCAM: Degottex09a; National audience. From a recorded speech signal, we propose to estimate a shape parameter of a glottal model without estimating its time position. Indeed, the literature usually proposes to estimate the time position first (e.g. by detecting Glottal Closure Instants). The vocal-tract filter estimate is expressed as a minimum-phase envelope estimation after removing the glottal model and a standard lips radiation model. Since this filter is mainly b...
Light-Front Spin-1 Model: Parameters Dependence
Mello, Clayton S; de Melo, J P B C; Frederico, T
2015-01-01
We study the structure of the $\rho$-meson within a light-front model with constituent quark degrees of freedom. We calculate electroweak static observables: magnetic and quadrupole moments, decay constant and charge radius. The prescription used to compute the electroweak quantities is free of zero modes, which makes the calculation implicitly covariant. We compare the results of our model with others found in the literature. Our model parameters give a decay constant close to the experimental one.
Cosmological Models with Variable Deceleration Parameter in Lyra's Manifold
Pradhan, A; Singh, C B
2006-01-01
FRW models of the universe have been studied in the cosmological theory based on Lyra's manifold. A new class of exact solutions has been obtained by considering a time-dependent displacement field for a variable deceleration parameter, from which three models of the universe are derived: (i) exponential, (ii) polynomial and (iii) sinusoidal form, respectively. The behaviour of these models of the universe is also discussed. Finally, some possibilities for further problems and their investigation are pointed out.
Overview of Threats and Failure Models for Safety-Relevant Computer-Based Systems
Torres-Pomales, Wilfredo
2015-01-01
This document presents a high-level overview of the threats to safety-relevant computer-based systems, including (1) a description of the introduction and activation of physical and logical faults; (2) the propagation of their effects; and (3) function-level and component-level error and failure mode models. These models can be used in the definition of fault hypotheses (i.e., assumptions) for threat-risk mitigation strategies. This document is a contribution to a guide currently under development that is intended to provide a general technical foundation for designers and evaluators of safety-relevant systems.
Identification of slow molecular order parameters for Markov model construction
Perez-Hernandez, Guillermo; Giorgino, Toni; de Fabritiis, Gianni; Noé, Frank
2013-01-01
A goal in the kinetic characterization of a macromolecular system is the description of its slow relaxation processes, involving (i) identification of the structural changes involved in these processes, and (ii) estimation of the rates or timescales at which these slow processes occur. Most of the approaches to this task, including Markov models, Master-equation models, and kinetic network models, start by discretizing the high-dimensional state space and then characterize relaxation processes in terms of the eigenvectors and eigenvalues of a discrete transition matrix. The practical success of such an approach depends very much on the ability to finely discretize the slow order parameters. How can this task be achieved in a high-dimensional configuration space without relying on subjective guesses of the slow order parameters? In this paper, we use the variational principle of conformation dynamics to derive an optimal way of identifying the "slow subspace" of a large set of prior order parameters - either g...
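The eigenvalue-based characterization described above can be sketched for the simplest possible case: a two-state discretization, where the slow eigenvalue of the row-stochastic transition matrix has a closed form. This toy (pure Python, hypothetical trajectory) only shows how a transition matrix and an implied relaxation timescale are obtained from a discretized trajectory; real Markov-model pipelines work with many states and diagonalize the full matrix.

```python
import math

def transition_matrix(traj, n_states):
    """Row-normalized transition counts from a discretized trajectory."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for a, b in zip(traj, traj[1:]):
        counts[a][b] += 1.0
    for row in counts:
        total = sum(row)
        if total:
            for j in range(n_states):
                row[j] /= total
    return counts

def implied_timescale_2state(T, lag=1):
    """For a 2-state chain the slow eigenvalue is 1 - T[0][1] - T[1][0];
    the implied relaxation timescale is -lag / ln(lambda_2)."""
    lam2 = 1.0 - T[0][1] - T[1][0]
    return -lag / math.log(lam2)

# Toy trajectory: long dwells in each state imply a slow relaxation process.
traj = [0] * 50 + [1] * 50 + [0] * 50 + [1] * 50
T = transition_matrix(traj, 2)
tau = implied_timescale_2state(T)
```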
Solar Model Parameters and Direct Measurements of Solar Neutrino Fluxes
Bandyopadhyay, A; Goswami, S; Petcov, S T; Bandyopadhyay, Abhijit; Choubey, Sandhya; Goswami, Srubabati
2006-01-01
We explore a novel possibility of determining the solar model parameters, which serve as input in the calculations of the solar neutrino fluxes, by exploiting the data from direct measurements of the fluxes. More specifically, we use the rather precise value of the $^8$B neutrino flux, $\phi_B$, obtained from the global analysis of the solar neutrino and KamLAND data, to derive constraints on each of the solar model parameters on which $\phi_B$ depends. We also use more precise values of the $^7$Be and $pp$ fluxes, as can be obtained from prospective future data, and discuss whether such measurements can help in reducing the uncertainties of one or more input parameters of the Standard Solar Model.
IP-Sat: Impact-Parameter dependent Saturation model; revised
Rezaeian, Amir H; Van de Klundert, Merijn; Venugopalan, Raju
2013-01-01
In this talk, we present a global analysis of available small-x data on inclusive DIS and exclusive diffractive processes, including the latest data from the combined HERA analysis of reduced cross sections, within the Impact-Parameter dependent Saturation (IP-Sat) model. The impact-parameter dependence of the dipole amplitude is crucial in order to have a unified description of both inclusive and exclusive diffractive processes. With the parameters of the model fixed via a fit to the high-precision reduced cross-section data, we compare model predictions to data for the structure functions, the longitudinal structure function, the charm structure function, exclusive vector meson production and Deeply Virtual Compton Scattering (DVCS). Excellent agreement is obtained for the processes considered at small x in a wide range of Q^2.
Boesten, J.J.T.I.
2000-01-01
User-dependent subjectivity in the process of testing pesticide leaching models is relevant because it may result in wrong interpretation of model tests. About 20 modellers used the same data set to test pesticide leaching models (one or two models per modeller). The data set included laboratory stu
SPOTting model parameters using a ready-made Python package
Houska, Tobias; Kraft, Philipp; Breuer, Lutz
2015-04-01
The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it makes it possible to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood Estimation, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (bias, (log-) Nash-Sutcliffe model efficiency, correlation coefficient, coefficient of determination, covariance, (decomposed, relative, root) mean squared error, mean absolute error, agreement index) and prior distributions (binomial, chi-square, Dirichlet, exponential, Laplace, (log-, multivariate-) normal, Pareto, Poisson, Cauchy, uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for
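As an illustration of one of the samplers listed above, here is a minimal Latin Hypercube Sampling routine (a generic sketch, not SPOT's implementation): each dimension is split into equal-probability strata, one draw is taken per stratum, and the strata are shuffled independently per dimension so each marginal is evenly covered.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin Hypercube Sampling: one stratified draw per interval and dimension."""
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        # one uniform draw from each of n_samples equal-width strata, then shuffle
        strata = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(strata)
        for i in range(n_samples):
            samples[i][d] = lo + strata[i] * (hi - lo)
    return samples

# Ten points over a hypothetical 2-D parameter space.
points = latin_hypercube(10, [(0.0, 1.0), (-5.0, 5.0)])
```

Each of the ten points then gets one model run; the stratification guarantees that every tenth of each parameter's range is sampled exactly once.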
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to large numbers of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current base of parameters of the models of generating units, containing the models of synchronous generators, is necessary. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
Modelling of intermittent microwave convective drying: parameter sensitivity
Zhang Zhijun
2017-06-01
The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food. The model is simulated with COMSOL software. Its parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis at the given microwave power level shows that the parameters ambient temperature, effective gas diffusivity and evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant has minimal parameter sensitivity under a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. Musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity, as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles, such as the rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
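The Monte Carlo parameter search described above can be caricatured as follows. `simulate_activation` is a hypothetical two-parameter stand-in for the biomechanics simulation (the real study ran LifeModeler), and the 60.1% target mirrors the activation figure quoted in the abstract; only the sampling-and-scoring loop is representative of the method.

```python
import random

def simulate_activation(strength, recruit_thresh):
    """Hypothetical stand-in for the simulation: peak activation fraction
    of one muscle as a simple function of two free parameters."""
    return max(0.0, min(1.0, strength * (1.0 - recruit_thresh)))

def monte_carlo_search(target, n_trials=1000, seed=42):
    """Randomly sample parameter sets; keep the one whose predicted
    activation is closest to the physiologically observed target."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_trials):
        params = (rng.uniform(0.0, 1.5), rng.uniform(0.0, 1.0))
        err = abs(simulate_activation(*params) - target)
        if err < best_err:
            best, best_err = params, err
    return best, best_err

params, err = monte_carlo_search(target=0.601)  # 60.1% observed activation
```

In practice each trial is an expensive forward simulation, which is why the study combines random sampling with combinatorial reduction of the parameter space.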
Water quality modelling for ephemeral rivers: Model development and parameter assessment
Mannina, Giorgio; Viviani, Gaspare
2010-11-01
River water quality models can be valuable tools for the assessment and management of receiving water body quality. However, such water quality models require accurate calibration in order to specify model parameters. Reliable model calibration requires an extensive array of water quality data, which are generally rare and resource-intensive, both economically and in terms of human resources, to collect. In the case of small rivers, such data are scarce because these rivers are generally considered too insignificant, from a practical and economic viewpoint, to justify the investment of such considerable time and resources. As a consequence, the literature contains very few studies on water quality modelling for small rivers, and such studies as have been published are fairly limited in scope. In this paper, a simplified river water quality model is presented. The model is an extension of the Streeter-Phelps model and takes into account the physico-chemical and biological processes most relevant to modelling the quality of receiving water bodies (i.e., degradation of dissolved carbonaceous substances, ammonium oxidation, algal uptake and denitrification, and the dissolved oxygen balance, including depletion by degradation processes and supply by physical reaeration and photosynthetic production). The model has been applied to an Italian case study, the Oreto river (IT), which has been the object of an Italian research project aimed at assessing the river's water quality. For this reason, several monitoring campaigns had previously been carried out to collect water quantity and quality data on this river system. In particular, twelve river cross sections were monitored, and both flow and water quality data were collected for each cross section. The results of the calibrated model show satisfactory agreement with the measured data, and results reveal important differences between the parameters used to model small rivers as compared to
Comparing spatial and temporal transferability of hydrological model parameters
Patil, Sopan; Stieglitz, Marc
2015-04-01
Operational use of hydrological models requires the transfer of calibrated parameters either in time (for streamflow forecasting) or space (for prediction at ungauged catchments) or both. Although the effects of spatial and temporal parameter transfer on catchment streamflow predictions have been well studied individually, a direct comparison of these approaches is much less documented. In our view, such comparison is especially pertinent in the context of increasing appeal and popularity of the "trading space for time" approaches that are proposed for assessing the hydrological implications of anthropogenic climate change. Here, we compare three different schemes of parameter transfer, viz., temporal, spatial, and spatiotemporal, using a spatially lumped hydrological model called EXP-HYDRO at 294 catchments across the continental United States. Results show that the temporal parameter transfer scheme performs best, with lowest decline in prediction performance (median decline of 4.2%) as measured using the Kling-Gupta efficiency metric. More interestingly, negligible difference in prediction performance is observed between the spatial and spatiotemporal parameter transfer schemes (median decline of 12.4% and 13.9% respectively). We further demonstrate that the superiority of temporal parameter transfer scheme is preserved even when: (1) spatial distance between donor and receiver catchments is reduced, or (2) temporal lag between calibration and validation periods is increased. Nonetheless, increase in the temporal lag between calibration and validation periods reduces the overall performance gap between the three parameter transfer schemes. Results suggest that spatiotemporal transfer of hydrological model parameters has the potential to be a viable option for climate change related hydrological studies, as envisioned in the "trading space for time" framework. However, further research is still needed to explore the relationship between spatial and temporal
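Since the comparison above is scored with the Kling-Gupta efficiency, a small self-contained implementation of that metric may help; this follows the commonly used formulation KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), and the sample data are made up.

```python
import math

def kge(sim, obs):
    """Kling-Gupta efficiency: r is the linear correlation, alpha the ratio
    of standard deviations (sim/obs), beta the ratio of means (sim/obs)."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    ss = math.sqrt(sum((x - ms) ** 2 for x in sim) / n)
    so = math.sqrt(sum((x - mo) ** 2 for x in obs) / n)
    cov = sum((x - ms) * (y - mo) for x, y in zip(sim, obs)) / n
    r = cov / (ss * so)
    alpha, beta = ss / so, ms / mo
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [1.0, 2.0, 4.0, 3.0, 2.5]
perfect = kge(obs, obs)                      # identical series score 1
biased = kge([x * 1.2 for x in obs], obs)    # a 20% volume bias is penalized
```

A "median decline of 4.2%" in the abstract refers to the drop in this score when calibrated parameters are transferred rather than re-estimated.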
Berezovska, Ganna; Mostarda, Stefano; Rao, Francesco
2012-01-01
Molecular simulations as well as single-molecule experiments have been widely analyzed in terms of order parameters, the latter representing candidate probes for the relevant degrees of freedom. Although this approach is very intuitive, mounting evidence has shown that such a description is not accurate, leading to ambiguous definitions of states and wrong kinetics. To overcome these limitations, a framework making use of order parameter fluctuations in conjunction with complex network analysis is investigated. Derived from recent advances in the analysis of single-molecule time traces, this approach takes into account the fluctuations around each time point to distinguish between states that have similar values of the order parameter but different dynamics. Snapshots with similar fluctuations are used as nodes of a transition network, the clustering of which into states provides accurate Markov-State-Models of the system under study. Application of the methodology to theoretical models with a noisy orde...
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-02-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases, because the effect of the correlation between the ETAS parameters becomes more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method in this context.
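A generic Simulated Annealing minimizer of the kind described can be sketched as follows, though over a toy one-dimensional objective rather than the full ETAS negative log-likelihood; the schedule constants (`step`, `t0`, `cooling`) are illustrative, not the paper's.

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995,
                        n_iter=4000, seed=1):
    """Minimize a 1-D objective: propose random moves, always accept
    improvements, accept worse moves with Metropolis probability
    exp(-delta / T), and cool T geometrically."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        cand = x + rng.uniform(-step, step)
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best

# Toy stand-in for the (much higher-dimensional) ETAS negative
# log-likelihood: a smooth bowl with its minimum at mu = 2.0.
objective = lambda mu: (mu - 2.0) ** 2
mu_hat = simulated_annealing(objective, x0=-5.0)
```

The high-temperature phase lets the chain escape local minima; the paper's point is that as catalogs shrink, the ETAS likelihood surface becomes strongly correlated across parameters, which no annealing schedule can fully compensate.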
J-A Hysteresis Model Parameters Estimation using GA
Bogomir Zidaric
2005-01-01
This paper presents Jiles-Atherton (J-A) hysteresis model parameter estimation for a soft magnetic composite (SMC) material. The calculation of the Jiles-Atherton hysteresis model parameters is based on experimental data and genetic algorithms (GA). Genetic algorithms operate in a given area of possible solutions. Finding the best solution of a problem in a wide area of possible solutions is uncertain. A new approach to the use of genetic algorithms is proposed to overcome this uncertainty. The basis of this approach is a genetic algorithm built into another genetic algorithm.
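The nested-GA idea itself is not reproduced here, but a minimal real-coded genetic algorithm of the kind used for such parameter estimation might look like this; the toy `misfit` function stands in for the J-A model error against measured hysteresis data, and all operator choices (tournament-free elitism, blend crossover, Gaussian mutation) are illustrative.

```python
import random

def genetic_minimize(fitness, bounds, pop_size=40, n_gen=60,
                     mut_rate=0.2, seed=7):
    """Minimal real-coded GA: keep the elite half, fill the rest with
    blend-crossover children of elite pairs, mutate with clipped Gaussians."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(n_gen):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < mut_rate:
                    child[d] += rng.gauss(0.0, 0.1 * (hi - lo))
                    child[d] = min(max(child[d], lo), hi)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Toy stand-in for the J-A parameter misfit: minimum at (1.0, -2.0).
misfit = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best = genetic_minimize(misfit, [(-5.0, 5.0), (-5.0, 5.0)])
```

The paper's nested scheme would run an inner GA of this form inside an outer GA that tunes the search region, addressing the "wide area of possible solutions" problem.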
A new estimate of the parameters in linear mixed models
王松桂; 尹素菊
2002-01-01
In linear mixed models, there are two kinds of unknown parameters: one is the fixed effect, the other is the variance component. In this paper, new estimates of these parameters, called the spectral decomposition estimates, are proposed. Some important statistical properties of the new estimates are established, in particular the linearity of the estimates of the fixed effects with many statistical optimality properties. The new method is applied to two important models which are used in the economics, finance, and mechanical fields. All estimates obtained have good statistical and practical meaning.
Models wagging the dog: are circuits constructed with disparate parameters?
Nowotny, Thomas; Szücs, Attila; Levi, Rafael; Selverston, Allen I
2007-08-01
In a recent article, Prinz, Bucher, and Marder (2004) addressed the fundamental question of whether neural systems are built with a fixed blueprint of tightly controlled parameters or in a way in which properties can vary largely from one individual to another, using a database modeling approach. Here, we examine the main conclusion that neural circuits indeed are built with largely varying parameters in the light of our own experimental and modeling observations. We critically discuss the experimental and theoretical evidence, including the general adequacy of database approaches for questions of this kind, and come to the conclusion that the last word for this fundamental question has not yet been spoken.
Do land parameters matter in large-scale hydrological modelling?
Gudmundsson, Lukas; Seneviratne, Sonia I.
2013-04-01
Many of the most pressing issues in large-scale hydrology are concerned with predicting hydrological variability at ungauged locations. However, the current-generation hydrological and land surface models that are used for their estimation suffer from large uncertainties. These models rely on mathematical approximations of the physical system as well as on mapped values of land parameters (e.g. topography, soil types, land cover) to predict hydrological variables (e.g. evapotranspiration, soil moisture, stream flow) as a function of atmospheric forcing (e.g. precipitation, temperature, humidity). Despite considerable progress in recent years, it remains unclear whether better estimates of land parameters can improve predictions - or - if a refinement of model physics is necessary. To approach this question we suggest scrutinizing our perception of hydrological systems by confronting it with the radical assumption that hydrological variability at any location in space depends on past and present atmospheric forcing only, and not on location-specific land parameters. This so-called "Constant Land Parameter Hypothesis (CLPH)" assumes that variables like runoff can be predicted without taking location-specific factors such as topography or soil types into account. We demonstrate, using a modern statistical tool, that monthly runoff in Europe can be skilfully estimated using atmospheric forcing alone, without accounting for locally varying land parameters. The resulting runoff estimates are used to benchmark state-of-the-art process models. These are found to have inferior performance, despite their explicit process representation, which accounts for locally varying land parameters. This suggests that progress in the theory of hydrological systems is likely to yield larger improvements in model performance than more precise land parameter estimates. The results also question the current modelling paradigm that is dominated by the attempt to account for locally varying land
Lukas Graber; Diomar Infante; Michael Steurer; William W. Brey
2011-01-01
Careful analysis of transients in shipboard power systems is important to achieve long lifetimes of the components in future all-electric ships. In order to accomplish results with high accuracy, it is recommended to validate cable models, as they have a significant influence on the amplitude and frequency spectrum of voltage transients. The authors propose comparison of model and measurement using scattering parameters. These can be easily obtained from measurement and simulation and deliver broadband information about the accuracy of the model. The measurement can be performed using a vector network analyzer. The process to extract scattering parameters from simulation models is explained in detail. Three different simulation models of a 5 kV XLPE power cable have been validated. The chosen approach delivers an efficient tool to quickly estimate the quality of a model.
Inhalation Exposure Input Parameters for the Biosphere Model
M. Wasiolek
2006-06-05
This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This
Considerations for parameter optimization and sensitivity in climate models.
Neelin, J David; Bracco, Annalisa; Luo, Hao; McWilliams, James C; Meyerson, Joyce E
2010-12-14
Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention--here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models.
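The low-order metamodel idea above can be sketched in a few lines: fit a quadratic in the parameter to each point of the model output field, so that the squared-error objective becomes quartic in the parameter. Everything below (the toy "field", the target parameter value 0.62, the noise level) is illustrative, not from the paper:

```python
import numpy as np

# Toy stand-in for a GCM: a 3-point "spatial field" as a function of one
# tunable parameter p. The field, the target value 0.62 and the noise level
# are all illustrative.
def model_field(p):
    return np.array([np.sin(2.0 * p), p ** 2, 1.0 - p])

rng = np.random.default_rng(7)
obs = model_field(0.62) + rng.normal(0.0, 0.01, 3)   # pseudo-observations

p_train = np.linspace(0.0, 1.0, 5)                   # a handful of model runs
fields = np.array([model_field(p) for p in p_train])

# Quadratic metamodel per grid point: field(p) ~ c2*p^2 + c1*p + c0, so the
# squared-error objective below is quartic in p, as in the paper.
coeffs = np.polyfit(p_train, fields, deg=2)          # shape (3, n_points)
p_grid = np.linspace(0.0, 1.0, 1001)
pred = sum(coeffs[k][None, :] * p_grid[:, None] ** (2 - k) for k in range(3))
objective = ((pred - obs) ** 2).sum(axis=1)
p_best = float(p_grid[np.argmin(objective)])
print(p_best)
```

The metamodel replaces expensive model runs with a cheap polynomial, so the objective can be scanned densely over the feasible parameter range.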
Uncertainty of Modal Parameters Estimated by ARMA Models
Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders
In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However, the uncertainty of the parameters ... is illustrated by a simulation study of a lightly damped single-degree-of-freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and the number of sampled points may play a significant role with respect to the statistical errors. Furthermore...
Iterative integral parameter identification of a respiratory mechanics model
Schranz Christoph
2012-07-01
Background: Patient-specific respiratory mechanics models can support the evaluation of optimal lung-protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. Methods: An iterative integral parameter identification method is applied to a second-order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. Results: The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimates, and converged successfully in each case tested. Conclusion: These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
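For contrast with the iterative integral method, the regression baseline the abstract mentions can be sketched on a simplified single-compartment model P = E·V + R·Q + P0 (the paper's model is second-order; the flow waveform and all parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 2.0, 200)                   # one breath, seconds
Q = np.where(t < 1.0, 0.5, 0.0)                  # square-wave flow, L/s
V = np.cumsum(Q) * (t[1] - t[0])                 # volume = integral of flow, L
E_true, R_true, P0 = 25.0, 8.0, 5.0              # cmH2O/L, cmH2O*s/L, cmH2O
P = E_true * V + R_true * Q + P0 + rng.normal(0.0, 0.3, t.size)

# Ordinary least squares on P = E*V + R*Q + P0
A = np.column_stack([V, Q, np.ones_like(t)])
(E_hat, R_hat, P0_hat), *_ = np.linalg.lstsq(A, P, rcond=None)
print(E_hat, R_hat, P0_hat)
```

With clean synthetic data the regression recovers the parameters; the paper's point is that such estimators degrade badly under realistic noise, which the integral formulation mitigates by avoiding numerical differentiation.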
The LAILAPS search engine: a feature model for relevance ranking in life science databases.
Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe
2010-03-25
Efficient and effective information retrieval in life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is both a big challenge and a great chance for life science research. The knowledge found on the Web, in particular in life-science databases, is a valuable major resource. In order to bring it to the scientist's desktop, it is essential to have well-performing search engines. Here, neither the response time nor the number of results is the decisive factor. The most crucial factor for millions of query results is the relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by the observation of user behavior during inspection of search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists, who briefly screen database entries for potential relevance. The features are both sufficient to estimate the potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks that were trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
Estimation of growth parameters using a nonlinear mixed Gompertz model.
Wang, Z; Zuidhof, M J
2004-06-01
In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
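A fixed-effects version of the Gompertz fit can be sketched as follows (a true mixed model would need a dedicated tool such as nlme; the parameter values below are illustrative, not the paper's estimates):

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, w_max, b, t_infl):
    """Gompertz growth: asymptote w_max, rate b, inflection time t_infl."""
    return w_max * np.exp(-np.exp(-b * (t - t_infl)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 30)                          # age, days
true = (4000.0, 0.09, 30.0)                             # illustrative broiler-like values
w = gompertz(t, *true) + rng.normal(0.0, 30.0, t.size)  # BW with within-bird noise

popt, _ = curve_fit(gompertz, t, w, p0=(3500.0, 0.1, 25.0))
print(popt)
```

The mixed model in the paper goes further by giving each bird its own random deviation of these parameters, which is what partitions between- and within-bird variation.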
Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series
Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik
2016-06-01
Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance a robust retrieval of biophysical parameters over the entire growing season with very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetic radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high resolution remote sensing. 30 Landsat 8 OLI scenes were available in our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the best explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their explicitly strong exploratory and predictive character. Variable importance measures allowed for analysing the relation between the biophysical parameters with respect to the spectral response, and the performance of the two approaches over the plant stock evolvement. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85; RMSE = 0.11; LAI: R² = 0.64; RMSE = 0.9 and chlorophyll content (SPAD): R² = 0.80; RMSE = 4.9. Our results demonstrate the great potential of using machine-learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model
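A minimal sketch of the random-forest regression step, on synthetic reflectances rather than Landsat 8 bands (the NDVI-driven LAI relation and all values below are assumptions for illustration only):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
red = rng.uniform(0.02, 0.20, n)                  # synthetic red reflectance
nir = rng.uniform(0.20, 0.60, n)                  # synthetic NIR reflectance
ndvi = (nir - red) / (nir + red)
lai = 3.5 * ndvi + rng.normal(0.0, 0.2, n)        # assumed NDVI-driven LAI

X = np.column_stack([red, nir, ndvi])
X_tr, X_te, y_tr, y_te = train_test_split(X, lai, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, rf.predict(X_te))
print(r2, rf.feature_importances_)   # importances: which predictor explains LAI
```

The `feature_importances_` vector is the variable-importance measure the abstract uses to relate biophysical parameters to spectral response.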
Joint Dynamics Modeling and Parameter Identification for Space Robot Applications
Adenilson R. da Silva
2007-01-01
Long-term mission identification and model validation for an in-flight manipulator control system in almost zero gravity with a hostile space environment are extremely important for robotic applications. In this paper, a robot joint mathematical model is developed in which several nonlinearities have been taken into account. In order to identify all the required system parameters, an integrated identification strategy is derived. This strategy makes use of a robust version of the least-squares procedure (LS) for getting the initial conditions and a general nonlinear optimization method (MCS, multilevel coordinate search) to estimate the nonlinear parameters. The approach is applied to the intelligent robot joint (IRJ) experiment that was developed at DLR for a utilization opportunity on the International Space Station (ISS). The results using real and simulated measurements have shown that the developed algorithm and strategy have remarkable features in identifying all the parameters with good accuracy.
Mathematical Modelling and Parameter Optimization of Pulsating Heat Pipes
Yang, Xin-She; Luan, Tao; Koziel, Slawomir
2014-01-01
Proper heat transfer management is important to key electronic components in microelectronic applications. Pulsating heat pipes (PHP) can be an efficient solution to such heat transfer problems. However, mathematical modelling of a PHP system is still very challenging, due to the complexity and multiphysics nature of the system. In this work, we present a simplified, two-phase heat transfer model, and our analysis shows that it can make good predictions about startup characteristics. Furthermore, by considering parameter estimation as a nonlinear constrained optimization problem, we have used the firefly algorithm to find parameter estimates efficiently. We have also demonstrated that it is possible to obtain good estimates of key parameters using very limited experimental data.
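The firefly algorithm the authors use for parameter estimation can be sketched in its unconstrained textbook form (here minimizing a sphere function as a stand-in for the PHP calibration objective; all settings are illustrative, and the brightest firefly is kept fixed in this simplified variant):

```python
import numpy as np

def firefly_minimize(f, dim=2, n=25, iters=150, beta0=1.0, gamma=0.5,
                     alpha=0.3, seed=6):
    """Minimal firefly algorithm sketch: each firefly moves toward every
    brighter (lower-cost) firefly, plus a decaying random-walk term."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, (n, dim))
    for it in range(iters):
        fx = np.array([f(xi) for xi in x])       # brightness at sweep start
        for i in range(n):
            for j in range(n):
                if fx[j] < fx[i]:                # j is brighter than i
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    x[i] = (x[i]
                            + beta0 * np.exp(-gamma * r2) * (x[j] - x[i])
                            + alpha * 0.95 ** it * rng.normal(size=dim))
    fx = np.array([f(xi) for xi in x])
    return x[np.argmin(fx)], float(fx.min())

# Sphere test function as a stand-in for the real calibration objective
best_x, best_f = firefly_minimize(lambda v: float(np.sum(v ** 2)))
print(best_x, best_f)
```

The attractiveness term beta0*exp(-gamma*r2) decays with distance, so fireflies form local clusters, which is what gives the method its ability to handle multimodal objectives.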
The influences of model parameters on the characteristics of memristors
Zhou Jing; Huang Da
2012-01-01
As the fourth passive circuit component, a memristor is a nonlinear resistor that can "remember" the amount of charge passing through it. The characteristic of "remembering" the charge and non-volatility makes memristors great potential candidates in many fields. Nowadays, only a few groups have the ability to fabricate memristors, and most researchers study them by theoretical analysis and simulation. In this paper, we first analyse the theoretical base and characteristics of memristors, then use a simulation program with integrated circuit emphasis (SPICE) as our tool to simulate the theoretical model of memristors and change the parameters in the model to see the influence of each parameter on the characteristics. Our work supplies researchers engaged in memristor-based circuits with advice on how to choose the proper parameters.
Prediction of interest rate using CKLS model with stochastic parameters
Ying, Khor Chia [Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Selangor (Malaysia); Hin, Pooi Ah [Sunway University Business School, No. 5, Jalan Universiti, Bandar Sunway, 47500 Subang Jaya, Selangor (Malaysia)
2014-06-19
The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated by the j-th window, which is defined as the set consisting of the observed interest rates at the j′-th time points where j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_(j+n+1) of the interest rate at the next time point when the value r_(j+n) of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_(j+n+d) at the next d-th (d ≥ 2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have better ability of covering the observed future interest rates when compared with those based on the model with fixed parameters.
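For reference, the CKLS short-rate dynamics dr = (α + βr)dt + σ·r^γ·dW can be simulated with a simple Euler-Maruyama scheme. The sketch below uses fixed, illustrative parameter values; the paper's stochastic-parameter machinery (rolling windows, power-normal conditional distributions) is not reproduced here:

```python
import numpy as np

def simulate_ckls(r0, alpha, beta, sigma, gamma, dt=1/252, n=252, seed=0):
    """Euler-Maruyama path of dr = (alpha + beta*r) dt + sigma * r**gamma dW."""
    rng = np.random.default_rng(seed)
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        drift = (alpha + beta * r[i]) * dt
        diffusion = sigma * r[i] ** gamma * dw
        r[i + 1] = max(r[i] + drift + diffusion, 1e-8)   # keep the rate positive
    return r

path = simulate_ckls(r0=0.05, alpha=0.01, beta=-0.2, sigma=0.5, gamma=0.5)
print(path[-1])
```

With β < 0 the drift is mean-reverting toward −α/β; setting γ = 0.5 recovers the CIR special case, and γ = 0 the Vasicek model.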
Preterm piglets are a clinically relevant model of pediatric GI disease
The goal of our research is to establish how nutritional support, enteral versus parenteral, affects gut function and susceptibility to disease in early development. We and others have used the neonatal pig to establish unique models of clinically relevant problems in pediatric gastroenterology, esp...
YANG Jing
2014-01-01
Taking discourse production as the research object, this paper holds that discourse production is dynamic in human communication. It attempts to analyze the dynamics on the basis of the Relevance-adaptation model from the perspective of cognitive pragmatics and to explain the role that context dynamics plays in discourse production.
Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.
Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W
2015-07-23
A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.
Comparison of Parameter Estimation Methods for Transformer Weibull Lifetime Modelling
ZHOU Dan; LI Chengrong; WANG Zhongdong
2013-01-01
Two-parameter Weibull distribution is the most widely adopted lifetime model for power transformers. An appropriate parameter estimation method is essential to guarantee the accuracy of a derived Weibull lifetime model. Six popular parameter estimation methods (i.e. the maximum likelihood estimation method, two median rank regression methods, one regressing X on Y and the other regressing Y on X, the Kaplan-Meier method, the method based on the cumulative hazard plot, and Li's method) are reviewed and compared in order to find the optimal one that suits transformer Weibull lifetime modelling. The comparison took several different scenarios into consideration: 10,000 sets of lifetime data, each of which had a sampling size of 40 to 1,000 and a censoring rate of 90%, were obtained by Monte Carlo simulations for each scenario. Scale and shape parameters of the Weibull distribution estimated by the six methods, as well as their mean value, median value and 90% confidence band, are obtained. The cross comparison of these results reveals that, among the six methods, the maximum likelihood method is the best one, since it provides the most accurate Weibull parameters, i.e. parameters having the smallest bias in both mean and median values, as well as the shortest length of the 90% confidence band. The maximum likelihood method is therefore recommended over the other methods for transformer Weibull lifetime modelling.
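The maximum likelihood step recommended above can be sketched with scipy's two-parameter Weibull fit (location fixed at zero). Censoring, which the paper's simulations include, is ignored in this illustrative sketch, and the true parameter values are assumptions:

```python
from scipy.stats import weibull_min

shape_true, scale_true = 2.0, 30.0   # illustrative transformer-life values (years)
data = weibull_min.rvs(shape_true, scale=scale_true, size=500, random_state=3)

# Two-parameter Weibull MLE: location fixed at zero via floc=0
shape_hat, loc, scale_hat = weibull_min.fit(data, floc=0)
print(shape_hat, scale_hat)
```

Handling 90% right-censoring, as in the paper, requires maximizing a likelihood with survival-function terms for the censored units rather than this complete-sample fit.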
Calculation of Thermodynamic Parameters for Freundlich and Temkin Isotherm Models
ZHANG Zengqiang; ZHANG Yiping; et al.
1999-01-01
Derivation of the Freundlich and Temkin isotherm models from the kinetic adsorption/desorption equations was carried out to calculate their thermodynamic equilibrium constants. The calculation formulae of three thermodynamic parameters (the standard molar Gibbs free energy change, the standard molar enthalpy change and the standard molar entropy change) of isothermal adsorption processes for the Freundlich and Temkin isotherm models were deduced according to the relationship between the thermodynamic equilibrium constants and the temperature.
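The temperature relationship the abstract refers to is the standard one, with K the thermodynamic equilibrium constant, R the gas constant and T the absolute temperature:

```latex
\Delta G^{\circ}_{m} = -RT \ln K, \qquad
\Delta G^{\circ}_{m} = \Delta H^{\circ}_{m} - T\,\Delta S^{\circ}_{m}
\;\;\Rightarrow\;\;
\ln K = -\frac{\Delta H^{\circ}_{m}}{R}\cdot\frac{1}{T} + \frac{\Delta S^{\circ}_{m}}{R}
```

Plotting ln K against 1/T thus yields the standard molar enthalpy change from the slope and the standard molar entropy change from the intercept.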
Parameter Estimation for a Computable General Equilibrium Model
Arndt, Channing; Robinson, Sherman; Tarp, Finn
2002-01-01
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Parabolic problems with parameters arising in an evolution model for phytoremediation
Sahmurova, Aida; Shakhmurov, Veli
2012-12-01
Over the past few decades, efforts have been made to clean sites polluted by heavy metals such as chromium. One of the new innovative methods of removing metals from soil is phytoremediation, which uses plants to pull metals from the soil through the roots. This work develops a system of differential equations with parameters to model the plant-metal interaction of phytoremediation (see [1]).
Lumped-parameter Model of a Bucket Foundation
Andersen, Lars; Ibsen, Lars Bo; Liingaard, Morten
2009-01-01
As an alternative to gravity footings or pile foundations, offshore wind turbines at shallow water can be placed on a bucket foundation. The present analysis concerns the development of consistent lumped-parameter models for this type of foundation. The aim is to formulate a computationally effic...
Improved parameter estimation for hydrological models using weighted object functions
Stein, A.; Zaadnoordijk, W.J.
1999-01-01
This paper discusses the sensitivity of calibration of hydrological model parameters to different objective functions. Several functions are defined with weights depending upon the hydrological background. These are compared with an objective function based upon kriging. Calibration is applied to pi
PARAMETER ESTIMATION IN LINEAR REGRESSION MODELS FOR LONGITUDINAL CONTAMINATED DATA
QianWeimin; LiYumei
2005-01-01
The parameter estimation and the coefficient of contamination for regression models with repeated measures are studied when their response variables are contaminated by another random variable sequence. Under suitable conditions it is proved that the estimators established in the paper are strongly consistent.
Modeling and simulation of HTS cables for scattering parameter analysis
Bang, Su Sik; Lee, Geon Seok; Kwon, Gu-Young; Lee, Yeong Ho; Chang, Seung Jin; Lee, Chun-Kwon; Sohn, Songho; Park, Kijun; Shin, Yong-June
2016-11-01
Most modeling and simulation of high temperature superconducting (HTS) cables is inadequate for high-frequency analysis, since the focus of the simulation frequency is the fundamental frequency of the power grid, which does not reflect transient characteristics. However, high-frequency analysis is an essential process in researching HTS cable transients for protection and diagnosis of the cables. Thus, this paper proposes a new approach for modeling and simulation of HTS cables to derive the scattering parameters (S-parameters), an effective high-frequency analysis, for transient wave propagation characteristics in the high-frequency range. The parameter sweeping method is used to validate the simulation results against the measured data given by a network analyzer (NA). This paper also presents the effects of the cable-to-NA connector in order to minimize the error between the simulated and the measured data under ambient and superconductive conditions. Based on the proposed modeling and simulation technique, S-parameters of long-distance HTS cables can be accurately derived over a wide range of frequencies. The results of the proposed modeling and simulation can yield the characteristics of the HTS cables and will contribute to the analysis of HTS cables.
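The S-parameters used for validation can be derived analytically for an idealized cable model; the sketch below converts the ABCD matrix of a lossless transmission line to S11 and S21 in a 50-ohm reference (the characteristic impedance, length and phase velocity are illustrative, not the HTS cable's values):

```python
import numpy as np

def line_sparams(freq, length, z_c, v_p, z0=50.0):
    """S-parameters of a lossless line via its ABCD (transmission) matrix."""
    beta = 2.0 * np.pi * freq / v_p               # phase constant, rad/m
    bl = beta * length
    A, B = np.cos(bl), 1j * z_c * np.sin(bl)
    C, D = 1j * np.sin(bl) / z_c, np.cos(bl)
    denom = A + B / z0 + C * z0 + D
    s11 = (A + B / z0 - C * z0 - D) / denom
    s21 = 2.0 / denom                             # uses A*D - B*C = 1 (reciprocity)
    return s11, s21

s11, s21 = line_sparams(1e6, 20.0, 30.0, 2e8)     # 1 MHz, 20 m, 30 ohm, 2e8 m/s
print(abs(s11), abs(s21))
```

For a lossless line, |S11|² + |S21|² = 1, which is a useful sanity check before comparing simulated S-parameters against network-analyzer measurements.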
Evaluation of some infiltration models and hydraulic parameters
Haghighi, F.; Gorji, M.; Shorafa, M.; Sarmadian, F.; Mohammadi, M. H.
2010-07-01
The evaluation of infiltration characteristics and some parameters of infiltration models, such as sorptivity and final steady infiltration rate, is important in agriculture. The aim of this study was to evaluate some of the most common models used to estimate final soil infiltration rate. The equality of final infiltration rate with saturated hydraulic conductivity (Ks) was also tested. Moreover, values of the sorptivity estimated from the Philip model were compared to estimates by selected pedotransfer functions (PTFs). The infiltration experiments used the double-ring method on soils with two different land uses in the Taleghan watershed of Tehran province, Iran, from September to October, 2007. The infiltration models of Kostiakov-Lewis, Philip two-term and Horton were fitted to observed infiltration data. Some parameters of the models and the coefficient of determination goodness of fit were estimated using MATLAB software. The results showed that, based on comparing measured and model-estimated infiltration rates using root mean squared error (RMSE), Horton's model gave the best prediction of final infiltration rate in the experimental area. Laboratory-measured Ks values were significantly different from, and higher than, the final infiltration rates estimated from the selected models; the estimated final infiltration rate was not equal to the laboratory-measured Ks values in the study area. Moreover, the sorptivity factor estimated by the Philip model was significantly different from those estimated by the selected PTFs. It is suggested that the applicability of PTFs is limited to specific, similar conditions. (Author) 37 refs.
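Fitting the Horton model f(t) = fc + (f0 - fc)·exp(-k·t), as done above in MATLAB, can be sketched with scipy (the initial rate, final rate and decay constant below are illustrative, not the study's values):

```python
import numpy as np
from scipy.optimize import curve_fit

def horton(t, f0, fc, k):
    """Horton infiltration: initial rate f0 decays to final rate fc at rate k."""
    return fc + (f0 - fc) * np.exp(-k * t)

rng = np.random.default_rng(4)
t = np.linspace(0.05, 3.0, 40)                    # elapsed time, hours
true = (60.0, 12.0, 1.5)                          # f0, fc (mm/h), k (1/h); illustrative
f = horton(t, *true) + rng.normal(0.0, 1.0, t.size)

popt, _ = curve_fit(horton, t, f, p0=(50.0, 10.0, 1.0))
rmse = float(np.sqrt(np.mean((f - horton(t, *popt)) ** 2)))
print(popt, rmse)
```

The fitted fc is the model's final steady infiltration rate, i.e. the quantity the study compares against laboratory-measured Ks.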
Agricultural and Environmental Input Parameters for the Biosphere Model
K. Rasmuson; K. Rautenstrauch
2004-09-14
This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.
Estimating model parameters in nonautonomous chaotic systems using synchronization
Yang, Xiaoli; Xu, Wei; Sun, Zhongkui
2007-05-01
In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems; some general conditions, obtained by means of the periodic version of the LaSalle invariance principle for differential equations, are then analytically derived to ensure precise evaluation of the unknown parameters and identical synchronization between the experimental system and its corresponding receiver. Examples are presented employing a parametrically excited new 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of the noise strength in simulation.
Soil-Related Input Parameters for the Biosphere Model
A. J. Smith
2004-09-09
This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure
Multiscale Parameter Regionalization for consistent global water resources modelling
Wanders, Niko; Wood, Eric; Pan, Ming; Samaniego, Luis; Thober, Stephan; Kumar, Rohini; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc F. P.
2017-04-01
Due to an increasing demand for high- and hyper-resolution water resources information, it has become increasingly important to ensure consistency in model simulations across scales. This consistency can be ensured by scale-independent parameterization of the land surface processes, even after calibration of the water resource model. Here, we use the Multiscale Parameter Regionalization technique (MPR, Samaniego et al. 2010, WRR) to allow for a novel, spatially consistent, scale-independent parameterization of the global water resource model PCR-GLOBWB. The implementation of MPR in PCR-GLOBWB allows for calibration at coarse resolutions and subsequent parameter transfer to the hyper-resolution. In this study, the model was calibrated at 50 km resolution over Europe and validation was carried out at resolutions of 50 km, 10 km and 1 km. MPR allows for a direct transfer of the calibrated transfer-function parameters across scales, and we find that we can maintain consistent land-atmosphere fluxes across scales. Here we focus on the 2003 European drought and show that the new parameterization allows for high-resolution calibrated simulations of water resources during the drought. For example, we find a reduction from 29% to 9.4% in the percentile difference in the annual evaporative flux across scales when compared against default simulations. Soil moisture errors are reduced from 25% to 6.9%, clearly indicating the benefits of the MPR implementation. This new parameterization allows us to show more spatial detail in water resources simulations that are consistent across scales and also allows validation of discharge for smaller catchments, even with calibrations at a coarse 50 km resolution. The implementation of MPR allows for novel high-resolution calibrated simulations of a global water resources model, providing calibrated high-resolution model simulations with transferred parameter sets from coarse resolutions. The applied methodology can be transferred to other
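The scale-independence at the heart of MPR can be illustrated with a toy sketch. The linear transfer function, the "sand fraction" predictor field and the block-averaging upscaling operator below are illustrative assumptions, not the actual PCR-GLOBWB regionalization:

```python
import numpy as np

rng = np.random.default_rng(6)

# MPR idea in miniature: a transfer function maps a high-resolution
# predictor field (here, a hypothetical sand fraction) to a model
# parameter via GLOBAL coefficients (a, b); the parameter field is then
# upscaled, so the same (a, b) serve every model resolution.
sand = rng.uniform(0.1, 0.9, size=(64, 64))   # e.g. 1 km predictor field
a, b = 0.3, 1.2                                # calibrated once, at coarse scale

def transfer(sand, a, b):
    return a + b * sand                        # hypothetical linear TF

def upscale(field, f):
    # block-average by factor f (one common MPR upscaling operator)
    n = field.shape[0] // f
    return field.reshape(n, f, n, f).mean(axis=(1, 3))

p_fine = transfer(sand, a, b)                  # parameter at 1 km
p_coarse = upscale(p_fine, 8)                  # parameter at the 8 km model grid
# Cross-scale consistency: upscaling the fine parameter field agrees with
# applying the same global TF to the upscaled predictor.
print(np.allclose(p_coarse, transfer(upscale(sand, 8), a, b)))  # prints True
```

For a linear transfer function the agreement is exact under block averaging; for nonlinear transfer functions the choice of upscaling operator (arithmetic, harmonic, geometric mean) matters, which is a central design decision in MPR.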
Model and parameter uncertainty in IDF relationships under climate change
Chandra, Rupa; Saha, Ujjwal; Mujumdar, P. P.
2015-05-01
Quantifying the distributional behavior of extreme events is crucial in hydrologic design. Intensity-Duration-Frequency (IDF) relationships are used extensively in engineering, especially in urban hydrology, to obtain the return level of an extreme rainfall event for a specified return period and duration. Major sources of uncertainty in IDF relationships are the insufficient quantity and quality of data, leading to parameter uncertainty in the distribution fitted to the data, and the uncertainty that results from using multiple GCMs. It is important to study these uncertainties and propagate them forward for an accurate assessment of future return levels. The objective of this study is to quantify the uncertainties arising from the parameters of the distribution fitted to the data and from the multiple GCMs using a Bayesian approach. The posterior distribution of the parameters is obtained from Bayes' rule, and the parameters are transformed to obtain return levels for a specified return period. The Markov Chain Monte Carlo (MCMC) method with the Metropolis-Hastings algorithm is used to obtain the posterior distribution of the parameters. Twenty-six CMIP5 GCMs along with four RCP scenarios are considered for studying the effects of climate change and for obtaining projected IDF relationships for the case study of Bangalore city in India. GCM uncertainty due to the use of multiple GCMs is treated using the Reliability Ensemble Averaging (REA) technique along with the parameter uncertainty. Scale-invariance theory is employed to obtain short-duration return levels from daily data. It is observed that the uncertainty in short-duration rainfall return levels is high compared to that at longer durations. Further, it is observed that parameter uncertainty is large compared to model uncertainty.
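The Bayesian workflow described in this abstract (posterior sampling by Metropolis-Hastings, then transformation of parameter draws into return levels) can be sketched as follows. The Gumbel distribution, the flat improper priors and all numerical values are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual-maximum rainfall (mm); a Gumbel model is assumed
# here purely for illustration -- the study fits its own distribution.
true_mu, true_beta = 50.0, 10.0
data = true_mu - true_beta * np.log(-np.log(rng.uniform(size=60)))

def log_posterior(mu, beta):
    if beta <= 0:
        return -np.inf
    z = (data - mu) / beta
    # Gumbel log-likelihood with flat (improper) priors
    return np.sum(-np.log(beta) - z - np.exp(-z))

# Metropolis-Hastings with a Gaussian random-walk proposal
samples, cur = [], np.array([40.0, 5.0])
cur_lp = log_posterior(*cur)
for _ in range(20000):
    prop = cur + rng.normal(scale=[1.0, 0.5])
    prop_lp = log_posterior(*prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    samples.append(cur)
samples = np.array(samples[5000:])  # discard burn-in

# Transform posterior draws into a 100-year return level
T = 100.0
ret = samples[:, 0] - samples[:, 1] * np.log(-np.log(1 - 1 / T))
print(np.percentile(ret, [2.5, 50, 97.5]))  # credible interval for the return level
```

Transforming each posterior draw, rather than only the point estimate, is what propagates the parameter uncertainty into the return level, which is the quantity reported in an IDF relationship.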
Anthony Anukam
2015-01-01
Torrefaction of sugarcane bagasse was conducted in an electric muffle furnace at 200, 250, and 300°C in order to establish the impact of heat-treatment temperature on various parameters and as a method to improve the characteristics of sugarcane bagasse for the purpose of gasification. The results show that the weight of the bagasse was reduced as the torrefaction temperature increased, due to excessive devolatilization. Reduced moisture and volatile matter contents as well as an improved calorific value were also achieved with increasing torrefaction temperature. The progress of torrefaction was also followed by elemental analysis of the material, which showed the presence of C, H, and O in proportions that varied with torrefaction temperature. The decrease in the weight percentages of O and H as the torrefaction temperature increased resulted in the accumulation of C in the solid product. Thermogravimetric analysis established the maximum reactivity temperature of the torrefied material and revealed that the degradation of torrefied sugarcane bagasse was accelerated by thermal treatment of the material prior to analysis. Finally, the study established that torrefaction at 300°C led to a much more degraded material compared to the lower torrefaction temperatures of 200 and 250°C.
Klöckner, Wolf; Gacem, Riad; Anderlei, Tibor; Raven, Nicole; Schillberg, Stefan; Lattermann, Clemens; Büchs, Jochen
2013-12-02
Among disposable bioreactor systems, cylindrical orbitally shaken bioreactors show important advantages. They provide a well-defined hydrodynamic flow combined with excellent mixing and oxygen transfer for mammalian and plant cell cultivations. Since there is no known universal correlation between the volumetric mass transfer coefficient for oxygen, kLa, and the relevant operating parameters in such bioreactor systems, the aim of this study was to determine a universal kLa correlation experimentally. A Respiration Activity Monitoring System (RAMOS) was used to measure kLa values in cylindrical disposable shaken bioreactors, and Buckingham's π-theorem was applied to define a dimensionless equation for kLa. In this way, a scale- and volume-independent kLa correlation was developed and validated in bioreactors with volumes from 2 L to 200 L. The final correlation was used to calculate cultivation parameters at different scales to ensure a sufficient oxygen supply for tobacco BY-2 cell suspension cultures. The resulting equation can be universally applied to calculate the mass transfer coefficient from the seven relevant cultivation parameters (the reactor diameter, the shaking frequency, the filling volume, the viscosity, the oxygen diffusion coefficient, the gravitational acceleration and the shaking diameter) within an accuracy range of +/- 30%. To our knowledge, this is the first kLa correlation that has been defined and validated for the cited bioreactor system on a bench-to-pilot scale.
Reduced parameter model on trajectory tracking data with applications
王正明; 朱炬波
1999-01-01
Data fusion in tracking the same trajectory by multiple measurement units (MMU) is considered. Firstly, reduced parameter models (RPM) of the trajectory parameters (TP), the system error and the random error are presented, and the RPM for the trajectory tracking data (TTD) is then obtained; a weighted method for the measuring elements (ME) is studied, and criteria for the selection of ME based on residual and accuracy estimation are put forward. On the basis of the RPM, the problems of ME selection and self-calibration of TTD are thoroughly investigated. The method clearly improves the data accuracy in trajectory tracking and simultaneously gives an accuracy evaluation of the trajectory tracking system.
Parameter Estimation of the Extended Vasiček Model
Rujivan, Sanae
2010-01-01
In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function of a discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the appr...
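For the constant-coefficient Vasiček special case, the transition density is exactly Gaussian, so the conditional maximum-likelihood estimate reduces to an AR(1) regression. The sketch below illustrates this simpler case (the extended, time-dependent model of the paper requires the density expansion described in the abstract; all numerical values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a basic (constant-coefficient) Vasicek short-rate path using
# the exact Gaussian transition density.
kappa, theta, sigma, dt, n = 1.5, 0.05, 0.02, 1 / 252, 20000
a = np.exp(-kappa * dt)
sd = sigma * np.sqrt((1 - a**2) / (2 * kappa))
x = np.empty(n); x[0] = 0.03
for i in range(n - 1):
    x[i + 1] = theta + (x[i] - theta) * a + sd * rng.normal()

# The exact discrete dynamics are AR(1): X_{i+1} = b + a X_i + eps, so
# the conditional MLE of (a, b) is ordinary least squares.
X, y = np.column_stack([np.ones(n - 1), x[:-1]]), x[1:]
b_hat, a_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Map the AR(1) estimates back to the continuous-time parameters.
kappa_hat = -np.log(a_hat) / dt
theta_hat = b_hat / (1 - a_hat)
resid = y - X @ np.array([b_hat, a_hat])
sigma_hat = np.sqrt(resid.var() * 2 * kappa_hat / (1 - a_hat**2))
print(kappa_hat, theta_hat, sigma_hat)
```

The mean-reversion speed kappa is recovered far less precisely than theta and sigma at a fixed sampling horizon, which is one reason careful likelihood constructions such as the paper's density expansion matter for the extended model.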
Prediction of mortality rates using a model with stochastic parameters
Tan, Chon Sern; Pooi, Ah Hin
2016-10-01
Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution was applied to mortality data from the United States for the years 1933 to 2000 to forecast mortality rates for the years 2001 to 2010. In this paper, a more dynamic approach based on multivariate time series is proposed, in which the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained using the model with stochastic parameters perform better: apart from covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.
Probabilistic Constraint Programming for Parameters Optimisation of Generative Models
Zanin, Massimiliano; Sousa, Pedro A C; Cruz, Jorge
2015-01-01
Complex networks theory has commonly been used for modelling and understanding the interactions taking place between the elements composing complex systems. More recently, the use of generative models has gained momentum, as they allow identifying which forces and mechanisms are responsible for the appearance of given structural properties. In spite of this interest, several problems remain open, one of the most important being the design of robust mechanisms for finding the optimal parameters of a generative model, given a set of real networks. In this contribution, we address this problem by means of Probabilistic Constraint Programming. By using as an example the reconstruction of networks representing brain dynamics, we show how this approach is superior to other solutions, in that it allows a better characterisation of the parameters space, while requiring a significantly lower computational cost.
Mark-recapture models with parameters constant in time.
Jolly, G M
1982-06-01
The Jolly-Seber method, which allows for both death and immigration, is easy to apply but often requires a larger number of parameters to be estimated than would otherwise be necessary. If (i) the survival rate, phi, or (ii) the probability of capture, p, or (iii) both phi and p can be assumed constant over the experimental period, models with a reduced number of parameters are desirable. In the present paper, maximum likelihood (ML) solutions for these three situations are derived from the general ML equations of Jolly [1979, in Sampling Biological Populations, R. M. Cormack, G. P. Patil and D. S. Robson (eds), 277-282]. A test is proposed for heterogeneity arising from a breakdown of assumptions in the general Jolly-Seber model. Tests for constancy of phi and p are provided. An example is given in which these models are fitted to data from a local butterfly population.
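The reduced-parameter idea, with both phi and p constant, can be sketched with a conditional (CJS-type) capture-recapture likelihood maximized by grid search. This is an illustration of the model structure under assumed values, not Jolly's exact ML equations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate capture histories with constant survival (phi) and capture
# probability (p); values are illustrative.
phi_true, p_true, T, N = 0.8, 0.5, 6, 400
hist = np.zeros((N, T), dtype=int)
for i in range(N):
    alive = True
    for t in range(T):
        if alive and rng.uniform() < p_true:
            hist[i, t] = 1
        if alive and t < T - 1 and rng.uniform() > phi_true:
            alive = False  # dies in the interval after occasion t

# Sufficient statistics of the conditional likelihood: intervals
# survived, captures/misses between first and last capture, and the
# occasion of last capture for each animal.
S = C = NC = 0
last_counts = np.zeros(T, dtype=int)
for h in hist:
    occ = np.flatnonzero(h)
    if occ.size == 0:
        continue  # never captured: contributes nothing
    f, l = occ[0], occ[-1]
    S += l - f
    seen = int(h[f + 1 : l + 1].sum())
    C += seen
    NC += (l - f) - seen
    last_counts[l] += 1

def loglik(phi, p):
    # chi[t]: probability of never being seen after occasion t
    chi = np.ones(T)
    for t in range(T - 2, -1, -1):
        chi[t] = (1 - phi) + phi * (1 - p) * chi[t + 1]
    return (S * np.log(phi) + C * np.log(p) + NC * np.log(1 - p)
            + (last_counts * np.log(chi)).sum())

# Reduced-parameter ML fit by grid search over (phi, p)
grid = np.linspace(0.05, 0.95, 91)
ll = np.array([[loglik(ph, pc) for pc in grid] for ph in grid])
i, j = np.unravel_index(ll.argmax(), ll.shape)
phi_hat, p_hat = grid[i], grid[j]
print(phi_hat, p_hat)
```

With only two free parameters instead of one per occasion, the likelihood surface is easily explored, which is precisely the practical advantage of the constant-parameter models the paper derives.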
Enhancing debris flow modeling parameters integrating Bayesian networks
Graf, C.; Stoffel, M.; Grêt-Regamey, A.
2009-04-01
Applied debris-flow modelling requires suitably constrained input parameter sets. Depending on the model used, a series of parameters must be defined before running it. Normally, the database describing the event, the initiation conditions, the flow behaviour, the deposition process and, mainly, the potential range of possible debris-flow events in a given torrent is limited. There are only a few places in the world where valuable data sets can be found that describe the event history of debris-flow channels, delivering information on the spatial and temporal distribution of former flow paths and deposition zones. Tree-ring records in combination with detailed geomorphic mapping, for instance, provide such data sets over a long time span. Considering the significant loss potential associated with debris-flow disasters, it is crucial that decisions on hazard mitigation are based on a consistent assessment of the risks. This in turn necessitates a proper assessment of the uncertainties involved in modelling the debris-flow frequencies and intensities, the possible runout extent, and the estimates of the damage potential. In this study, we link a Bayesian network to a Geographic Information System in order to assess debris-flow risk. We identify the major sources of uncertainty and show the potential of Bayesian inference techniques to improve the debris-flow model. We model the flow paths and deposition zones of a highly active debris-flow channel in the Swiss Alps using the numerical 2-D model RAMMS. Because uncertainties in runout areas cause large changes in risk estimates, we use flow-path and deposition-zone information from debris-flow events reconstructed by dendrogeomorphological analysis, covering more than 400 years, to update the input parameters of the RAMMS model. The probabilistic model, which consistently incorporates this available information, can serve as a basis for spatial risk
Singularity of Some Software Reliability Models and Parameter Estimation Method
Anonymous
2000-01-01
According to the principle that "failure data are the basis of software reliability analysis", we built a software reliability expert system (SRES) using artificial-intelligence technology. By reasoning from the results of fitting the failure data of a software project, the SRES can recommend to users the most suitable model as the software reliability measurement model. We believe that the SRES can largely overcome the inconsistency observed in applications of software reliability models. We report the results of an investigation into the singularity and parameter estimation methods of the experimental models in the SRES.
Parameter Identifiability of Ship Manoeuvring Modeling Using System Identification
Weilin Luo
2016-01-01
To improve the feasibility of system identification in the prediction of ship manoeuvrability, several measures are presented to deal with parameter identifiability in the parametric modelling of ship manoeuvring motion based on system identification. Drift of the nonlinear hydrodynamic coefficients is explained from the viewpoint of regression analysis. To diminish multicollinearity in a complicated manoeuvring model, the difference method and the additional-signal method are employed to reconstruct the samples. Moreover, the structure of the manoeuvring model is simplified on the basis of correlation analysis. Manoeuvring simulations are performed to demonstrate the validity of the proposed measures.
Animal models of Parkinson's disease: limits and relevance to neuroprotection studies.
Bezard, Erwan; Yue, Zhenyu; Kirik, Deniz; Spillantini, Maria Grazia
2013-01-01
Over the last two decades, significant strides have been made toward acquiring a better knowledge of both the etiology and the pathogenesis of Parkinson's disease (PD). Experimental models are of paramount importance for gaining greater insight into the pathogenesis of the disease. Thus far, neurotoxin-based animal models have been the most popular tools employed to produce selective neuronal death in both in vitro and in vivo systems; these models have commonly been referred to as pathogenic models. The current trend in modeling PD revolves around what can be called disease gene-based, or etiologic, models. The value of utilizing multiple models with different mechanisms of insult rests on the premise that dopamine-producing neurons die by stereotyped cascades that can be activated by a range of insults, from neurotoxins to downregulation or overexpression of disease-related genes. In this position article, we present the relevance of both pathogenic and etiologic models, as well as the concept of clinically relevant designs that, we argue, should be utilized in the preclinical development phase of new neuroprotective therapies before embarking on clinical trials.
Robust linear parameter varying induction motor control with polytopic models
Dalila Khamari
2013-01-01
This paper deals with a robust controller for an induction motor represented as a linear parameter varying (LPV) system. To this end, a linear matrix inequality (LMI) based approach and a robust Lyapunov feedback controller are combined. The novelty of the approach lies in the fact that the synthesis of the LPV feedback controller for the inner loop takes the rotor resistance and the mechanical speed into account as varying parameters. An LPV flux observer is also synthesized to estimate the rotor flux, providing the reference for the above regulator. The induction motor is described by a polytopic model because of its affine dependence on speed and rotor resistance, whose values can be estimated online during operation. Simulation results are presented to confirm the effectiveness of the proposed approach, with robust stability and high performance achieved over the entire operating range of the induction motor.
Minimum information modelling of structural systems with uncertain parameters
Hyland, D. C.
1983-01-01
Work is reviewed wherein the design of active structural control is formulated as the mean-square optimal control of a linear mechanical system with stochastic parameters. In practice, a complete probabilistic description of model parameters can never be provided by empirical determinations, and a suitable design approach must accept very limited a priori data on parameter statistics. In consequence, the mean-square optimization problem is formulated using a complete probability assignment which is made to be consistent with available data but maximally unconstrained otherwise through use of a maximum entropy principle. The ramifications of this approach for both robustness and large dimensionality are illustrated by consideration of the full-state feedback regulation problem.
Parameter estimation in a spatial unit root autoregressive model
Baran, Sándor
2011-01-01
Spatial autoregressive model $X_{k,\ell}=\alpha X_{k-1,\ell}+\beta X_{k,\ell-1}+\gamma X_{k-1,\ell-1}+\epsilon_{k,\ell}$ is investigated in the unit root case, that is, when the parameters are on the boundary of the domain of stability, which forms a tetrahedron with vertices $(1,1,-1),\ (1,-1,1),\ (-1,1,1)$ and $(-1,-1,-1)$. It is shown that the limiting distribution of the least squares estimator of the parameters is normal and the rate of convergence is $n$ when the parameters are in the faces or on the edges of the tetrahedron, while on the vertices the rate is $n^{3/2}$.
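A quick simulation illustrates least squares estimation in the unit root case. The parameter choice (1/2, 1/2, 0) lies on the face alpha + beta + gamma = 1 of the stability tetrahedron; the grid size, zero boundary conditions and Gaussian noise are arbitrary choices made here for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Parameters on the face alpha + beta + gamma = 1 (unit root case)
alpha, beta, gamma, n = 0.5, 0.5, 0.0, 200
X = np.zeros((n + 1, n + 1))  # zero initial values on the two axes
eps = rng.normal(size=(n + 1, n + 1))
for k in range(1, n + 1):
    for l in range(1, n + 1):
        X[k, l] = (alpha * X[k - 1, l] + beta * X[k, l - 1]
                   + gamma * X[k - 1, l - 1] + eps[k, l])

# Least squares regression of X[k,l] on its three lagged neighbours
y = X[1:, 1:].ravel()
Z = np.column_stack([X[:-1, 1:].ravel(), X[1:, :-1].ravel(),
                     X[:-1, :-1].ravel()])
est = np.linalg.lstsq(Z, y, rcond=None)[0]
print(est, est.sum())  # estimates near (0.5, 0.5, 0.0), sum near 1
```

Repeating the experiment for several grid sizes n would exhibit the faster-than-root-n convergence that the paper proves for parameters on the faces of the tetrahedron.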
The relevance of non-human primate and rodent malaria models for humans
Riley Eleanor
2011-02-01
At the 2010 Keystone Symposium on "Malaria: new approaches to understanding Host-Parasite interactions", an extra scientific session to discuss animal models in malaria research was convened at the request of participants. This was prompted by the concern of investigators that skepticism in the malaria community about the use and relevance of animal models, particularly rodent models of severe malaria, has impacted on funding decisions and publication of research using animal models. Several speakers took the opportunity to demonstrate the similarities between findings in rodent models and human severe disease, as well as points of difference. The variety of malaria presentations in the different experimental models parallels the wide diversity of human malaria disease and, therefore, might be viewed as a strength. Many of the key features of human malaria can be replicated in a variety of nonhuman primate models, which are very under-utilized. The importance of animal models in the discovery of new anti-malarial drugs was emphasized. The major conclusions of the session were that experimental and human studies should be more closely linked so that they inform each other, and that there should be wider access to relevant clinical material.
Bongers, Mathilda L; de Ruysscher, Dirk; Oberije, Cary; Lambin, Philippe; Uyl-de Groot, Carin A; Coupé, V M H
2016-01-01
With the shift toward individualized treatment, cost-effectiveness models need to incorporate patient and tumor characteristics that may be relevant to treatment planning. In this study, we used multistate statistical modeling to inform a microsimulation model for cost-effectiveness analysis of individualized radiotherapy in lung cancer. The model tracks clinical events over time and takes patient and tumor features into account. Four clinical states were included in the model: alive without progression, local recurrence, metastasis, and death. Individual patients were simulated by repeatedly sampling a patient profile, consisting of patient and tumor characteristics. The transitioning of patients between the health states is governed by personalized time-dependent hazard rates, which were obtained from multistate statistical modeling (MSSM). The model simulations for both the individualized and conventional radiotherapy strategies demonstrated internal and external validity. Therefore, MSSM is a useful technique for obtaining the correlated individualized transition rates that are required for the quantification of a microsimulation model. Moreover, we have used the hazard ratios, their 95% confidence intervals, and their covariance to quantify the parameter uncertainty of the model in a correlated way. The obtained model will be used to evaluate the cost-effectiveness of individualized radiotherapy treatment planning, including the uncertainty of input parameters. We discuss the model-building process and the strengths and weaknesses of using MSSM in a microsimulation model for individualized radiotherapy in lung cancer.
Recursive modular modelling methodology for lumped-parameter dynamic systems.
Orsino, Renato Maia Matarazzo
2017-08-01
This paper proposes a novel approach to the modelling of lumped-parameter dynamic systems, based on representing them by hierarchies of mathematical models of increasing complexity instead of a single (complex) model. Exploiting the multilevel modularity that these systems typically exhibit, a general recursive modelling methodology is proposed in order to reconcile the use of already existing modelling techniques. The general algorithm is based on a fundamental theorem that states the conditions for computing projection operators recursively. Three procedures for these computations are discussed: orthonormalization, the use of orthogonal complements, and the use of generalized inverses. The novel methodology is also applied to the development of a recursive algorithm based on the Udwadia-Kalaba equation, which proves to be identical to that of a Kalman filter estimating the state of a static process from a sequence of noiseless measurements representing the constraints that must be satisfied by the system.
Zhou, Liming; Yang, Yuxing; Yuan, Shiying
2006-02-01
A new algorithm, a coordinate-transform iterative optimization method based on a least squares curve fitting model, is presented. The algorithm is used to extract the bio-impedance model parameters. It is superior to other methods: it converges more quickly, and its calculation precision is higher. The model parameters, such as Ri, Re, Cm and alpha, can thus be extracted rapidly and accurately. With the aims of lowering the power consumption, decreasing the cost and improving the price-to-performance ratio, a practical bio-impedance measurement system with two CPUs was built. The preliminary results show that the intracellular resistance Ri increased markedly with increasing workload during sitting, which reflects the ischemic change of the lower limbs.
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into it as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter-discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
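The combination of stochastic simulation, Monte Carlo property checking and simulated annealing can be sketched on a toy model. The Gillespie-style decay process and the target probability below are hypothetical stand-ins for the glucose-insulin model and experimental facts used in the paper:

```python
import math
import random

random.seed(4)

# Toy stochastic model: first-order decay of n0 molecules with unknown
# rate k, simulated Gillespie-style. The hypothetical "experimental
# fact" to match: P(fewer than 10 molecules remain at time T) = 0.9.
def simulate(k, n0=50, T=1.0):
    n, t = n0, 0.0
    while n > 0:
        t += random.expovariate(k * n)  # waiting time to the next decay
        if t > T:
            break
        n -= 1
    return n

def prop_prob(k, runs=400):
    # statistical model checking by plain Monte Carlo estimation
    return sum(simulate(k) < 10 for _ in range(runs)) / runs

target = 0.9
def cost(k):
    return abs(prop_prob(k) - target)

# Simulated annealing over the unknown rate parameter k
k, c, temp = 1.0, cost(1.0), 1.0
for _ in range(200):
    k_new = max(0.05, k + random.gauss(0, 0.3))
    c_new = cost(k_new)
    if c_new < c or random.random() < math.exp(-(c_new - c) / temp):
        k, c = k_new, c_new
    temp *= 0.98
print(k, c)  # discovered rate and residual mismatch to the target fact
```

Because each cost evaluation is itself a Monte Carlo estimate, the search is noisy; the paper's use of sequential hypothesis testing addresses exactly this, by spending only as many simulation runs as needed to decide whether a candidate parameter satisfies the property.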
Propagation channel characterization, parameter estimation, and modeling for wireless communications
Yin, Xuefeng
2016-01-01
Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...
Determining avalanche modelling input parameters using terrestrial laser scanning technology
2013-01-01
In dynamic avalanche modelling, data about the volumes and areas of the snow released, mobilized and deposited are key input parameters, as well as the fracture height. The fracture height can sometimes be measured in the field, but it is often difficult to access the starting zone due to difficult or dangerous terrain and avalanche hazards. More complex is determining the areas and volumes of snow involved in an avalanche. Such calculations require high-resolution spa...
Numerical model for thermal parameters in optical materials
Sato, Yoichi; Taira, Takunori
2016-04-01
Thermal parameters of optical materials, such as the thermal conductivity, the thermal expansion and the temperature coefficient of the refractive index, play a decisive role in the thermal design of laser cavities. Their numerical values, including temperature dependence, are therefore quite important for developing high-intensity laser oscillators, in which the optical materials generate excessive heat across the mode volumes of both the lasing output and the optical pumping. We have already proposed a novel model of the thermal conductivity of various optical materials. Thermal conductivity is the product of the isovolumic specific heat and the thermal diffusivity, and independent modeling of these two quantities is required for a clarification of the physical meaning. Our numerical model for thermal conductivity requires one material parameter for the specific heat and two parameters for the thermal diffusivity in the calculation for each optical material. In this work we report the thermal conductivities of various optical materials: Y3Al5O12 (YAG), YVO4 (YVO), GdVO4 (GVO), stoichiometric and congruent LiTaO3, synthetic quartz, YAG ceramics and Y2O3 ceramics. The dependence on Nd3+ doping of the laser gain media YAG, YVO and GVO is also studied; this dependence can be described by only three additional parameters. The temperature dependence of the thermal expansion and of the temperature coefficient of the refractive index for YAG, YVO and GVO is also included in this work for convenience. We think our numerical model is quite useful not only for thermal analysis in laser cavities or optical waveguides but also for the evaluation of the physical properties of various transparent materials.
Land Building Models: Uncertainty in and Sensitivity to Input Parameters
2013-08-01
Wamsley, Ty V. PURPOSE: The purpose of this Coastal and Hydraulics Engineering Technical Note (CHETN), ERDC/CHL CHETN-VI-44, is to document a
The oblique S parameter in higgsless electroweak models
Rosell, Ignasi
2012-01-01
We present a one-loop calculation of the oblique S parameter within Higgsless models of electroweak symmetry breaking. We have used a general effective Lagrangian with at most two derivatives, implementing the chiral symmetry breaking SU(2)_L x SU(2)_R -> SU(2)_{L+R} with Goldstones, gauge bosons and one multiplet of vector and axial-vector resonances. The estimation is based on the short-distance constraints and the dispersive approach proposed by Peskin and Takeuchi.
A statistical model of proton with no parameter
Zhang, Y; Zhang, Yongjun; Yang, Li-Ming
2001-01-01
In this paper, the proton is taken as an ensemble of Fock states. Using the detailed-balance principle and the equal-probability principle, the unpolarized parton distribution of the proton is obtained through Monte Carlo simulation without any parameter. A new origin of the light-flavor sea-quark asymmetry is given here, besides known models such as Pauli blocking, the meson cloud, the chiral field, the chiral soliton and instantons.
Model of the Stochastic Vacuum and QCD Parameters
Ferreira, E; Ferreira, Erasmo; Pereira, Flávio
1997-01-01
Accounting for the two independent correlation functions of the QCD vacuum, we improve the simple and consistent description given by the model of the stochastic vacuum to the high-energy pp and pbar-p data, with a new determination of parameters of non-perturbative QCD. The increase of the hadronic radii with the energy accounts for the energy dependence of the observables.
Astudillo, Viviana González; Hernández, Sonia M; Kistler, Whitney M; Boone, Shaun L; Lipp, Erin K; Shrestha, Sudip; Yabsley, Michael J
2013-12-01
, host biology, and vector biology into comprehensive models on parasite ecology. Detailed morphological examination of these parasites is also necessary to determine if closely related haplotypes represent single species or morphologically distinct, but closely related, haplotypes.
Is flow velocity a significant parameter in flood damage modelling?
H. Kreibich
2009-10-01
Flow velocity is generally presumed to influence flood damage. However, this influence is hardly quantified, and virtually no damage models take it into account. Therefore, the influences of flow velocity, water depth and combinations of these two impact parameters on various types of flood damage were investigated in five communities affected by the Elbe catchment flood in Germany in 2002. 2-D hydraulic models with high to medium spatial resolution were used to calculate the impact parameters at the sites where damage occurred. A significant influence of flow velocity on structural damage, particularly on roads, could be shown, in contrast to a minor influence on monetary losses and business interruption. Forecasts of structural damage to road infrastructure should be based on flow velocity alone. The energy head is suggested as a suitable flood-impact parameter for reliable forecasting of structural damage to residential buildings above a critical impact level of 2 m of energy head or water depth. However, general consideration of flow velocity in flood damage modelling, particularly for estimating monetary loss, cannot be recommended.
A robust approach for the determination of Gurson model parameters
R. Sepe
2016-07-01
Among the most promising models introduced in recent years for understanding the physical phenomena involved in the macroscopic mechanism of crack propagation is the one proposed by Gurson and Tvergaard, which links the propagation of a crack to the nucleation, growth and coalescence of micro-voids, and thereby connects the micromechanical characteristics of the component under examination to crack initiation and propagation up to a macroscopic scale. It must be pointed out that, even if the statistical character of some of the many physical parameters involved in the model has been recognized, no serious attempt has been made so far to link the corresponding statistics to experimental and macroscopic results such as crack initiation time, material toughness, and residual strength of the cracked component (R-curve). In this work, such an analysis was carried out in a twofold way: the former part concerned the study of the influence exerted by each of the physical parameters on the material toughness, and the latter concerned the use of the Stochastic Design Improvement (SDI) technique to perform a "robust" numerical calibration of the model, evaluating the nominal values of the physical and correction parameters that fit a particular experimental result even in the presence of their "natural" variability.
Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine
Hang-cheong Wong
2012-01-01
Engine power, brake-specific fuel consumption, and emissions relate closely to the air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential to effective long-term lambda control. This paper utilizes an emerging technique, the relevance vector machine (RVM), to build a reliable time-dependent lambda model which can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on RVM. This study shows that the accuracy, training, and updating time of the RVM model are superior to the latest modelling methods, such as the diagonal recurrent neural network (DRNN) and the decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm has been implemented on a real car for testing. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is also superior to DRNNMPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controller used in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace the conventional PI controller for engine air-ratio control.
Kim, Kyung Yong; Lee, Won-Chan
2017-01-01
This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…
Archer, Daniel E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hornback, Donald Eric [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nicholson, Andrew D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peplow, Douglas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ayaz-Maierhafer, Birsen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-01-01
This report summarizes the findings of a two year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first year effort focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second year has focused on the assessment of detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.
Lambert, C.; Virgili, A.; Pettex, E.; Delavenne, J.; Toison, V.; Blanck, A.; Ridoux, V.
2017-07-01
According to the European Union Habitats and Birds Directives, EU Member States must extend the Natura 2000 network to marine ecosystems, through the designation of Marine Protected Areas (MPAs). However, the initial status of cetacean and seabird communities across European waters is often poorly understood. It is assumed that an MPA is justified where at least 1% of the 'national population' of a species is present during at least part of its biological cycle. The aim of the present work was to use model-based cetacean and seabird distribution to assess the networks of existing Natura 2000 sites and offshore proposed areas of biological interest. The habitat models used here were Generalised Additive Models computed from aerial surveys observational data collected during the winter 2011-2012 and the summer 2012 across the English Channel, Bay of Biscay and north-western Mediterranean Sea. Based on these models, a ratio between species relative abundance predicted within each MPA and the total relative abundance predicted over the French Atlantic or Mediterranean marine regions was computed and compared to the 1% threshold. This assessment was conducted for winter and summer independently, providing information for assessing the relevance of individual MPAs and MPA networks at a seasonal scale. Our results showed that the existing network designed for coastal seabird species was relevant in both marine regions. In contrast, a clear shortfall was identified for offshore seabird species in the Atlantic region and for cetaceans in both regions. Moreover, the size of MPAs appeared to be a crucial feature, with larger MPAs being relevant for more species. Finally, we showed that the proposed large offshore areas of interest would constitute a highly relevant network for all offshore species, with e.g. up to 61% of the Globicephalinae population in the Atlantic French waters being present within these areas.
Can a snow structure model estimate snow characteristics relevant to reindeer husbandry?
Sirpa Rasmus
2014-02-01
Snow affects the foraging conditions of reindeer, e.g. by increasing the energy expenditure of moving and digging or, in contrast, by making arboreal lichen easier to access. Still, few studies have examined the role of snowpack structure in reindeer population dynamics and reindeer management. We aim to find out which snow characteristics are relevant for reindeer in the northern boreal zone according to the experiences of reindeer herders, and whether this relevance is also seen in the reproduction rate of reindeer in this area. We also aim to validate the ability of the snow model SNOWPACK to reliably estimate the relevant snow structure characteristics. We combined meteorological observations, snow structure simulations by the model SNOWPACK and annual reports by reindeer herders during the winters 1972-2010 in the Muonio reindeer herding district, northern Finland. Deep snow cover and late snow melt were the most commonly reported unfavorable conditions. Problematic conditions related to snow structure were icy snow and ground ice or unfrozen ground below the snow, leading to mold growth on ground vegetation. Calf production percentage was negatively correlated with the measured annual snow depth, the length of the snow cover period and the simulated snow density. Winters with icy snow could be distinguished in three out of four reported cases by SNOWPACK simulations, and winters with conditions favorable for mold growth could be detected reliably. Both snow amount and snow quality affect reindeer herding and the reindeer reproduction rate in northern Finland. The model SNOWPACK can estimate the relevant structural properties of snow relatively reliably. Snow structure models could give valuable information about grazing conditions, especially when estimating the possible effects of warming winters on reindeer populations and reindeer husbandry. Similar effects will also be experienced by other arctic and boreal species.
Information Theoretic Tools for Parameter Fitting in Coarse Grained Models
Kalligiannaki, Evangelia
2015-01-07
We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is carried out by proposing parametrized coarse grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non-equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.
Nonlocal order parameters for the 1D Hubbard model.
Montorsi, Arianna; Roncaglia, Marco
2012-12-07
We characterize the Mott-insulator and Luther-Emery phases of the 1D Hubbard model through correlators that measure the parity of spin and charge strings along the chain. These nonlocal quantities order in the corresponding gapped phases and vanish at the critical point U(c)=0, thus configuring as hidden order parameters. The Mott insulator consists of bound doublon-holon pairs, which in the Luther-Emery phase turn into electron pairs with opposite spins, both unbinding at U(c). The behavior of the parity correlators is captured by an effective free spinless fermion model.
Surrogate based approaches to parameter inference in ocean models
Knio, Omar
2016-01-06
This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
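The two-stage idea described above, a cheap non-intrusive surrogate followed by a Bayesian update, can be sketched in a few lines. Everything below is illustrative, not the talk's implementation: a one-parameter stand-in "expensive" model, a cubic polynomial surrogate, and a grid-based posterior in place of MCMC or adjoint optimization.

```python
import numpy as np

def expensive_model(theta):
    """Stand-in for an expensive ocean model evaluation (an assumption of this sketch)."""
    return np.sin(theta)

# 1) Non-intrusive surrogate: fit a cubic polynomial to a few model runs.
train_theta = np.linspace(0.0, 1.5, 9)
coeffs = np.polyfit(train_theta, expensive_model(train_theta), 3)
surrogate = np.poly1d(coeffs)

# 2) Bayesian update on a parameter grid, evaluating only the cheap surrogate.
obs, sigma = np.sin(0.7), 0.05           # synthetic observation at theta = 0.7
grid = np.linspace(0.0, 1.5, 1501)
log_post = -0.5 * ((surrogate(grid) - obs) / sigma) ** 2  # flat prior
post = np.exp(log_post - log_post.max())
post /= post.sum()
theta_mean = float(np.sum(grid * post))   # posterior mean of the parameter
```

The surrogate replaces every likelihood evaluation, which is where the computational saving comes from; the posterior mean recovers the value used to generate the observation.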
Bühlmayer, Lucia; Birrer, Daniel; Röthlin, Philipp; Faude, Oliver; Donath, Lars
2017-06-29
Mindfulness as a present-oriented form of mental training affects cognitive processes and is increasingly considered meaningful for sport psychological training approaches. However, few intervention studies have examined the effects of mindfulness practice on physiological and psychological performance surrogates or on performance outcomes in sports. The aim of the present meta-analytical review was to examine the effects of mindfulness practice or mindfulness-based interventions on physiological and psychological performance surrogates and on performance outcomes in sports in athletes over 15 years of age. A structured literature search was conducted in six electronic databases (CINAHL, EMBASE, ISI Web of Knowledge, PsycINFO, MEDLINE and SPORTDiscus). The following search terms were used with Boolean conjunction: (mindful* OR meditat* OR yoga) AND (sport* OR train* OR exercis* OR intervent* OR perform* OR capacity OR skill*) AND (health* OR adult* OR athlete*). Randomized and non-randomized controlled studies that compared mindfulness practice techniques as an intervention with an inactive control or a control that followed another psychological training program in healthy sportive participants were screened for eligibility. Eligibility and study quality [Physiotherapy Evidence Database (PEDro)] scales were independently assessed by two researchers. A third independent researcher was consulted to achieve final consensus in case of disagreement between both researchers. Standardized mean differences (SMDs) were calculated as weighted Hedges' g and served as the main outcomes in comparing mindfulness practice versus control. Statistical analyses were conducted using a random-effects inverse-variance model. Nine trials of fair study quality (mean PEDro score 5.4, standard deviation 1.1) with 290 healthy sportive participants (athletics, cyclists, dart throwers, hammer throwers, hockey players, hurdlers, judo fighters, rugby players, middle-distance runners, long
Comparison of parameter estimation algorithms in hydrological modelling
Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan
2006-01-01
Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and can be trapped in local regions of attraction. The global SCE procedure is, in general, more effective.
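The failure mode described above, a local gradient method trapped in a region of attraction while a global strategy escapes, can be illustrated on a toy multimodal objective. This is not the SCE or Gauss-Marquardt-Levenberg algorithm, only a minimal multistart-versus-single-start sketch.

```python
import numpy as np

def f(x):
    """Toy multimodal objective: many local minima, global minimum at x = 0."""
    return x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

def grad(x):
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

def local_descent(x, lr=0.002, steps=500):
    """Plain gradient descent: converges to whichever basin it starts in."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# A single local search from a poor starting point gets trapped near x = 3.
x_local = local_descent(3.2)
f_local = f(x_local)

# A simple multistart strategy (a stand-in for global methods such as SCE)
# runs the same local search from many starts and keeps the best result.
starts = np.linspace(-4.5, 4.5, 19)
best_f = min(f(local_descent(x0)) for x0 in starts)
```

The single run ends in a local basin with a large objective value, while the multistart search reaches the global basin, mirroring the PEST-versus-SCE contrast reported in the abstract.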
Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models
Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea
2014-05-01
Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represents a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
Order-parameter model for unstable multilane traffic flow
Lubashevsky; Mahnke
2000-11-01
We discuss a phenomenological approach to the description of unstable vehicle motion on multilane highways that explains in a simple way the observed sequence of the "free flow → synchronized mode → jam" phase transitions as well as the hysteresis in these transitions. We introduce a variable called an order parameter that accounts for possible correlations in the vehicle motion at different lanes. It is principally due to the "many-body" effects in the car interaction, in contrast to such variables as the mean car density and velocity, which are actually the zeroth and first moments of the "one-particle" distribution function. Therefore, we regard the order parameter as an additional independent state variable of traffic flow. We assume that these correlations are due to a small group of "fast" drivers and, by taking into account the general properties of driver behavior, we formulate a governing equation for the order parameter. In this context we analyze the instability of homogeneous traffic flow that manifests itself in the above-mentioned phase transitions and gives rise to the hysteresis in both of them. Besides, the jam is characterized by vehicle flows at different lanes which are independent of one another. We specify a certain simplified model in order to study the general features of car cluster self-formation under the "free flow → synchronized motion" phase transition. In particular, we show that the main local parameters of the developed cluster are determined by the state characteristics of vehicle motion only.
Accelerated gravitational wave parameter estimation with reduced order modeling.
Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel
2015-02-20
Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.
Humbird, Kelli; Peterson, J. Luc; Brandon, Scott; Field, John; Nora, Ryan; Spears, Brian
2016-10-01
Next-generation supercomputer architecture and in-transit data analysis have been used to create a large collection of 2-D ICF capsule implosion simulations. The database includes metrics for approximately 60,000 implosions, with x-ray images and detailed physics parameters available for over 20,000 simulations. To map and explore this large database, surrogate models for numerous quantities of interest are built using supervised machine learning algorithms. Response surfaces constructed using the predictive capabilities of the surrogates allow for continuous exploration of parameter space without requiring additional simulations. High performing regions of the input space are identified to guide the design of future experiments. In particular, a model for the yield built using a random forest regression algorithm has a cross validation score of 94.3% and is consistently conservative for high yield predictions. The model is used to search for robust volumes of parameter space where high yields are expected, even given variations in other input parameters. Surrogates for additional quantities of interest relevant to ignition are used to further characterize the high yield regions. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, Lawrence Livermore National Security, LLC. LLNL-ABS-697277.
Optimal vibration control of curved beams using distributed parameter models
Liu, Fushou; Jin, Dongping; Wen, Hao
2016-12-01
The design of a linear quadratic optimal controller using the spectral factorization method is studied for vibration suppression of curved beam structures modeled as distributed parameter models. The equations of motion for active control of the in-plane vibration of a curved beam are developed first, considering its shear deformation and rotary inertia, and the state space model of the curved beam is then established directly from the partial differential equations of motion. The functional gains for the distributed parameter model of the curved beam are calculated by extending the spectral factorization method. Moreover, the response of the closed-loop control system is derived explicitly in the frequency domain. Finally, the suppression of vibration at the free end of a cantilevered curved beam by a point control moment is studied through numerical case studies, in which the benefit of the presented method is shown by comparison with a constant-gain velocity feedback control law, and the performance of the presented method in avoiding control spillover is demonstrated.
Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics
Wang, Gangsheng [ORNL; Post, Wilfred M [ORNL; Mayes, Melanie [ORNL; Frerichs, Joshua T [ORNL; Jagadamma, Sindhu [ORNL
2012-01-01
While soil enzymes have been explicitly included in soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normally distributed, with no significant difference in Vmax exhibited between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was a significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1-1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1-2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
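The Michaelis-Menten parameters mentioned above (Vmax, Km) can be recovered from rate measurements by a standard linearization. A minimal sketch, not the authors' procedure, using the Hanes-Woolf form S/v = Km/Vmax + S/Vmax on synthetic noiseless data:

```python
import numpy as np

def fit_michaelis_menten(S, v):
    """Estimate Vmax and Km from substrate concentrations S and rates v
    via the Hanes-Woolf linearization: S/v = Km/Vmax + (1/Vmax) * S."""
    S = np.asarray(S, dtype=float)
    v = np.asarray(v, dtype=float)
    slope, intercept = np.polyfit(S, S / v, 1)  # straight-line fit of S/v vs S
    Vmax = 1.0 / slope
    Km = intercept * Vmax
    return Vmax, Km

# Synthetic noiseless data: the fit should recover the parameters exactly.
S = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
true_Vmax, true_Km = 4.0, 2.5
v = true_Vmax * S / (true_Km + S)
Vmax, Km = fit_michaelis_menten(S, v)
```

With noisy laboratory data a direct nonlinear least-squares fit is usually preferred, since linearizations distort the error structure; the linear form is shown here only for transparency.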
2016-06-01
Modeling Relevant to Safe Operations of U.S. Navy Vessels in Arctic Conditions.” The program manager was Dr. Paul Hess in Code 331, Structural...of Ice–Structure Interaction. Engineering Fracture Mechanics 68:1923–60. Jordaan, I. J., M. A. Maes, P. W. Brown, and I. P. Hermans . 1993
Modeling of state parameter and hardening function for granular materials
彭芳乐; 李建中
2004-01-01
A modified plastic strain energy was proposed as a hardening state parameter for dense sand, based on the results from a series of drained plane strain tests on saturated dense Japanese Toyoura sand with precise stress and strain measurements along many stress paths. In addition, a unique hardening function between the plastic strain energy and the instantaneous stress path was also presented, which was independent of stress history. The proposed state parameter and hardening function were directly verified by a simple numerical integration method. It is shown that the proposed hardening function is independent of stress history and stress path and is appropriate for use as the hardening rule in constitutive modeling of dense sand, and that it is also capable of simulating the effects of stress history and stress path on the deformation characteristics of dense sand.
Parameter Estimation of the Extended Vasiček Model
Sanae RUJIVAN
2010-01-01
In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function for discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the approximate log-likelihood function. The convergence of the AMLEs to the true maximum likelihood estimators is obtained by increasing the number of terms in the expansions with a small time step size.
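For the plain constant-parameter Vasiček model, a simpler case than the extended model treated above, the exact transition density is Gaussian and conditional maximum likelihood reduces to an AR(1) regression, so no series expansion is needed. A hedged sketch on simulated data (numpy assumed):

```python
import numpy as np

def simulate_vasicek(kappa, theta, sigma, r0, dt, n, rng):
    """Simulate dr = kappa*(theta - r) dt + sigma dW with its exact
    Gaussian transition: r[t+dt] = theta + (r[t]-theta)*b + noise."""
    b = np.exp(-kappa * dt)
    sd = sigma * np.sqrt((1.0 - b * b) / (2.0 * kappa))
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        r[i + 1] = theta + (r[i] - theta) * b + sd * rng.standard_normal()
    return r

def fit_vasicek(r, dt):
    """Conditional MLE: the exact transition is an AR(1), so ordinary
    least squares on r[t+1] = a + b*r[t] yields the ML estimates."""
    x, y = r[:-1], r[1:]
    b, a = np.polyfit(x, y, 1)
    kappa = -np.log(b) / dt
    theta = a / (1.0 - b)
    resid = y - (a + b * x)
    sigma = np.sqrt(resid.var() * 2.0 * kappa / (1.0 - b * b))
    return kappa, theta, sigma

rng = np.random.default_rng(0)
r = simulate_vasicek(kappa=1.0, theta=0.05, sigma=0.2, r0=0.05,
                     dt=0.01, n=20000, rng=rng)
kappa_hat, theta_hat, sigma_hat = fit_vasicek(r, dt=0.01)
```

The extended model's time-dependent drift is what forces the expansion-based approximate likelihood of the paper; the AR(1) shortcut above applies only when the parameters are constant.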
Ducoste, J.; Brauer, R.
1999-07-01
A computational fluid dynamics (CFD) model of a water treatment plant clearwell was analyzed. Model parameters were examined to determine their influence on the effluent residence time distribution (RTD) function. The study revealed that several model parameters could have a significant impact on the shape of the RTD function and consequently raise the level of uncertainty in accurate predictions of clearwell hydraulics. The study also revealed that although the modeler could select a distribution of values for some of the model parameters, most of these values can be ruled out by requiring the difference between the calculated and theoretical hydraulic retention time to be within 5% of the theoretical value.
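The 5% screening criterion can be sketched directly: integrate the residence time distribution to get its mean and compare with the theoretical retention time V/Q. Illustrative code using the analytical RTD of an ideal stirred tank (an assumption of this example, not the clearwell geometry of the study):

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def mean_residence_time(t, E):
    """Mean of a residence time distribution E(t), normalized so its
    integral over t equals 1."""
    E = np.asarray(E, dtype=float) / trapz(np.asarray(E, dtype=float), t)
    return trapz(t * E, t)

def within_tolerance(t_mean, hrt_theoretical, tol=0.05):
    """Screening check: the calculated mean residence time must lie
    within tol (here 5%) of the theoretical value V/Q."""
    return abs(t_mean - hrt_theoretical) / hrt_theoretical <= tol

# Exponential RTD of an ideal stirred tank with theoretical HRT tau = 30 min.
tau = 30.0
t = np.linspace(0.0, 300.0, 3001)
E = np.exp(-t / tau) / tau
t_mean = mean_residence_time(t, E)
ok = within_tolerance(t_mean, tau)
```

In practice E(t) would come from a CFD tracer simulation or tracer test rather than an analytical form; the check itself is unchanged.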
Strong parameter renormalization from optimum lattice model orbitals
Brosco, Valentina; Ying, Zu-Jian; Lorenzana, José
2017-01-01
Which is the best single-particle basis to express a Hubbard-like lattice model? A rigorous variational answer to this question leads to equations the solution of which depends in a self-consistent manner on the lattice ground state. Contrary to naive expectations, for arbitrary small interactions, the optimized orbitals differ from the noninteracting ones, leading also to substantial changes in the model parameters as shown analytically and in an explicit numerical solution for a simple double-well one-dimensional case. At strong coupling, we obtain the direct exchange interaction with a very large renormalization with important consequences for the explanation of ferromagnetism with model Hamiltonians. Moreover, in the case of two atoms and two fermions we show that the optimization equations are closely related to reduced density-matrix functional theory, thus establishing an unsuspected correspondence between continuum and lattice approaches.
Multi-parameter models of innovation diffusion on complex networks
McCullen, Nicholas J; Bale, Catherine S E; Foxon, Tim J; Gale, William F
2012-01-01
A model, applicable to a range of innovation diffusion applications with a strong peer-to-peer component, is developed and studied, along with methods for its investigation and analysis. A particular application is to individual households deciding whether to install an energy efficiency measure in their home. The model represents these individuals as nodes on a network, each with a variable representing their current state of adoption of the innovation. The motivation to adopt is composed of three terms, representing personal preference, an average of each individual's network neighbours' states and a system average, which is a measure of the current social trend. The adoption state of a node changes if a weighted linear combination of these factors exceeds some threshold. Numerical simulations have been carried out, computing the average uptake after a sufficient number of time-steps over many realisations at a range of model parameter values, on various network topologies, including random (Erdős-Rényi), s...
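The adoption rule described above (personal preference, neighbour average, system average, threshold) can be sketched directly. A minimal deterministic example on a ring network; the weight values and the irreversible-adoption assumption are choices of this sketch, not necessarily the paper's:

```python
import numpy as np

def step(state, pref, adj, w, threshold):
    """One synchronous update: a node adopts when a weighted combination of
    personal preference, its neighbours' mean state, and the system-wide
    mean state exceeds the threshold. Adoption is irreversible here."""
    w1, w2, w3 = w
    deg = adj.sum(axis=1)
    neigh_mean = np.where(deg > 0, adj @ state / np.maximum(deg, 1.0), 0.0)
    motivation = w1 * pref + w2 * neigh_mean + w3 * state.mean()
    return np.maximum(state, (motivation > threshold).astype(float))

# Ring network of 6 households; one early adopter with a high preference.
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i - 1) % n] = adj[i, (i + 1) % n] = 1.0
pref = np.array([0.9, 0.2, 0.2, 0.2, 0.2, 0.2])
state = np.zeros(n)
w, threshold = (0.4, 0.4, 0.2), 0.3
for _ in range(10):
    state = step(state, pref, adj, w, threshold)
```

With these parameters adoption spreads from the single high-preference node around the ring until all six households adopt, illustrating the peer-to-peer cascade the model is built to study.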
Reconstructing parameters of spreading models from partial observations
Lokhov, Andrey Y
2016-01-01
Spreading processes are often modelled as stochastic dynamics occurring on top of a given network, with edge weights corresponding to the transmission probabilities. Knowledge of veracious transmission probabilities is essential for prediction, optimization, and control of diffusion dynamics. Unfortunately, in most cases the transmission rates are unknown and need to be reconstructed from the spreading data. Moreover, in realistic settings it is impossible to monitor the state of each node at every time, and thus the data is highly incomplete. We introduce an efficient dynamic message-passing algorithm, which is able to reconstruct parameters of the spreading model given only partial information on the activation times of nodes in the network. The method is generalizable to a large class of dynamic models, as well as to the case of temporal graphs.
Dynamic systems models new methods of parameter and state estimation
2016-01-01
This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...
Connecting Global to Local Parameters in Barred Galaxy Models
N. D. Caranicolas
2002-09-01
We present connections between global and local parameters in a realistic dynamical model, describing motion in a barred galaxy. Expanding the global model in the vicinity of a stable Lagrange point, we find the potential of a two-dimensional perturbed harmonic oscillator, which describes local motion near the centre of the global model. The frequencies of oscillations and the coefficients of the perturbing terms are not arbitrary but are connected to the mass, the angular rotation velocity, the scale length and the strength of the galactic bar. The local energy is also connected to the global energy. A comparison of the properties of orbits in the global and local potential is also made.
Mattern, Jann Paul; Edwards, Christopher A.
2017-01-01
Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
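A generic version of such a multi-data-type cost function, with each data type normalized by its own uncertainty and given a weight so that quantities with different units are comparable, can be written as follows. This is an illustrative formulation with made-up numbers, not necessarily the paper's exact cost:

```python
import numpy as np

def misfit_cost(model, obs, sigma, weights):
    """Weighted model-observation misfit combining several data types.
    Each type k contributes its normalized mean squared error, so data
    with different units and magnitudes enter on a common footing."""
    total = 0.0
    for k in model:
        err = (np.asarray(model[k]) - np.asarray(obs[k])) / sigma[k]
        total += weights[k] * np.mean(err ** 2)
    return total

# Hypothetical values for two biogeochemical data types.
model = {"chlorophyll": [1.0, 2.0, 3.0], "nitrate": [10.0, 12.0]}
obs   = {"chlorophyll": [1.1, 1.9, 3.2], "nitrate": [11.0, 12.0]}
sigma = {"chlorophyll": 0.5, "nitrate": 2.0}
weights = {"chlorophyll": 1.0, "nitrate": 1.0}
cost = misfit_cost(model, obs, sigma, weights)
```

Any of the parameter estimation techniques in the abstract then simply minimizes this scalar over the parameter vector; the choice of sigma and weights is where the "complex cost function structure" noted by the authors enters.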
Parameter sensitivity in satellite-gravity-constrained geothermal modelling
Pastorutti, Alberto; Braitenberg, Carla
2017-04-01
The use of satellite gravity data in thermal structure estimates requires identifying the factors that affect the gravity field and are related to the thermal characteristics of the lithosphere. We propose a set of forward-modelled synthetics, investigating the model response in terms of heat flow, temperature, and gravity effect at satellite altitude. The sensitivity analysis concerns the parameters involved, such as heat production, thermal conductivity, density, and their temperature dependence. We discuss the effect of the horizontal smoothing due to heat conduction, the superposition of the bulk thermal effect of near-surface processes (e.g. advection in ground water and permeable faults, paleoclimatic effects, blanketing by sediments), and the out-of-equilibrium conditions due to tectonic transients. All of these have the potential to distort the gravity-derived estimates. We find that the temperature-conductivity relationship has a small effect, with respect to other parameter uncertainties, on the modelled temperature depth variation, surface heat flow, and thermal lithosphere thickness. We conclude that global gravity is useful for geothermal studies.
Optimization of Experimental Model Parameter Identification for Energy Storage Systems
Rosario Morello
2013-09-01
The smart grid approach is envisioned to take advantage of all available modern technologies in transforming the current power system to provide benefits to all stakeholders in the fields of efficient energy utilisation and of wide integration of renewable sources. Energy storage systems could help solve some issues stemming from renewable energy usage, such as stabilizing intermittent energy production, improving power quality, and mitigating power peaks. With the integration of energy storage systems into the smart grids, their accurate modeling becomes a necessity in order to gain robust real-time control over the network, in terms of stability and energy supply forecasting. In this framework, this paper proposes a procedure to identify the values of the battery model parameters in order to best fit experimental data and to integrate the model, along with models of energy sources and electrical loads, in a complete framework which represents a real-time smart grid management system. The proposed method is based on a hybrid optimisation technique, which makes combined use of a stochastic and a deterministic algorithm, has a low computational burden, and can therefore be repeated over time in order to account for parameter variations due to the battery's age and usage.
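The hybrid stochastic-plus-deterministic identification strategy can be illustrated on a deliberately minimal equivalent-circuit battery model: a random global exploration stage followed by a deterministic coordinate-descent refinement. The model form, parameter ranges, and both search stages below are illustrative assumptions; the paper's actual model and algorithms are richer:

```python
import random

def v_model(ocv, r, current):
    """Minimal equivalent-circuit battery model (illustrative form):
    terminal voltage = open-circuit voltage minus ohmic drop."""
    return ocv - r * current

def misfit(params, data):
    """Sum-of-squares mismatch between model and (current, voltage) data."""
    ocv, r = params
    return sum((v_model(ocv, r, i) - v) ** 2 for i, v in data)

# Synthetic "experimental" data from known parameters (OCV=3.7 V, R=0.05 ohm)
data = [(i, v_model(3.7, 0.05, i)) for i in (0.5, 1.0, 2.0, 4.0)]

# Stage 1: stochastic global exploration over plausible parameter ranges
rng = random.Random(42)
best = min(([rng.uniform(3.0, 4.2), rng.uniform(0.0, 0.2)] for _ in range(200)),
           key=lambda p: misfit(p, data))

# Stage 2: deterministic local refinement (coordinate descent with
# step halving whenever no coordinate move improves the misfit)
step = 0.01
for _ in range(500):
    improved = False
    for k in range(2):
        for d in (step, -step):
            cand = best[:]
            cand[k] += d
            if misfit(cand, data) < misfit(best, data):
                best, improved = cand, True
    if not improved:
        step /= 2.0
print(best, misfit(best, data))
```

The cheap stochastic stage avoids local traps; the deterministic stage sharpens the estimate, keeping the total evaluation count low enough to re-run as the battery ages.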
Multiobjective Automatic Parameter Calibration of a Hydrological Model
Donghwi Jung
2017-03-01
This study proposes variable balancing approaches for the exploration (diversification) and exploitation (intensification) of the non-dominated sorting genetic algorithm-II (NSGA-II) with simulated binary crossover (SBX) and polynomial mutation (PM) in the multiobjective automatic parameter calibration of a lumped hydrological model, the HYMOD model. Two objectives are considered in the calibration of the six parameters of the model: minimizing the percent bias and minimizing three peak flow differences. The proposed balancing approaches, which migrate the focus between exploration and exploitation over generations by varying the crossover and mutation distribution indices of SBX and PM, respectively, are compared with traditional static balancing approaches (the two indices are fixed during optimization) in a benchmark hydrological calibration problem for the Leaf River (1950 km²) near Collins, Mississippi. Three performance metrics (solution quality, spacing, and convergence) are used to quantify and compare the quality of the Pareto solutions obtained by the two different balancing approaches. The variable balancing approaches that migrate the focus of exploration and exploitation differently for SBX and PM outperformed the other methods.
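The role of the SBX distribution index, and the idea of migrating it across generations, can be sketched as follows. The linear schedule and its endpoint values are illustrative assumptions, not the schedule used in the study:

```python
import random

def sbx_pair(p1, p2, eta, rng):
    """Simulated binary crossover (SBX) for one real-valued gene.
    A small distribution index eta spreads children far from the
    parents (exploration); a large eta keeps them close (exploitation)."""
    u = rng.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def eta_schedule(gen, max_gen, eta_start=2.0, eta_end=20.0):
    """Variable balancing: migrate the distribution index linearly from
    exploration early in the run to exploitation late in the run."""
    return eta_start + (gen / max_gen) * (eta_end - eta_start)

# Average child spread for the scheduled eta at three points in the run
rng = random.Random(0)
for gen in (0, 50, 100):
    eta = eta_schedule(gen, 100)
    spread = sum(abs(c1 - c2) for c1, c2 in
                 (sbx_pair(0.2, 0.8, eta, rng) for _ in range(2000))) / 2000
    print(gen, eta, round(spread, 3))
```

SBX preserves the parents' mean (c1 + c2 = p1 + p2), so varying eta changes only the spread of offspring around the parents, which is exactly the exploration/exploitation dial the abstract describes.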
Constraints on the parameters of the Left Right Mirror Model
Cerón, V E; Díaz-Cruz, J L; Maya, M; Ceron, Victoria E.; Cotti, Umberto; Maya, Mario
1998-01-01
We study some phenomenological constraints on the parameters of a left-right model with mirror fermions (LRMM) that solves the strong CP problem. In particular, we evaluate the contribution of mirror neutrinos to the invisible Z decay width (\\Gamma_Z^{inv}), and we find that the present experimental value of \\Gamma_Z^{inv} can be used to place an upper bound on the Z-Z' mixing angle that is consistent with limits obtained previously from other low-energy observables. In this model the charged fermions that correspond to the standard model (SM) mix with their mirror counterparts. This mixing, simultaneously with the Z-Z' one, leads to modifications of the \\Gamma(Z --> f \\bar{f}) decay width. By comparing with LEP data, we obtain bounds on the standard-mirror lepton mixing angles. We also find that the bottom quark mixing parameters can be chosen to fit the experimental values of R_b, and the resulting values for the Z-Z' mixing angle do not agree with previous bounds. However, this disagreement disappears if on...
Application of a free parameter model to plastic scintillation samples
Tarancon Sanz, Alex, E-mail: alex.tarancon@ub.edu [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Kossert, Karsten, E-mail: Karsten.Kossert@ptb.de [Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig (Germany)
2011-08-21
In liquid scintillation (LS) counting, the CIEMAT/NIST efficiency tracing method and the triple-to-double coincidence ratio (TDCR) method have proved their worth for reliable activity measurements of a number of radionuclides. In this paper, an extended approach to apply a free-parameter model to samples containing a mixture of solid plastic scintillation microspheres and radioactive aqueous solutions is presented. Several beta-emitting radionuclides were measured in a TDCR system at PTB. For the application of the free parameter model, the energy loss in the aqueous phase must be taken into account, since this portion of the particle energy does not contribute to the creation of scintillation light. The energy deposit in the aqueous phase is determined by means of Monte Carlo calculations applying the PENELOPE software package. To this end, great efforts were made to model the geometry of the samples. Finally, a new geometry parameter was defined, which was determined by means of a tracer radionuclide with known activity. This makes the analysis of experimental TDCR data of other radionuclides possible. The deviations between the determined activity concentrations and reference values were found to be lower than 3%. The outcome of this research work is also important for a better understanding of liquid scintillation counting. In particular, the influence of (inverse) micelles, i.e. the aqueous spaces embedded in the organic scintillation cocktail, can be investigated. The new approach makes clear that it is important to take the energy loss in the aqueous phase into account. In particular, for radionuclides emitting low-energy electrons (e.g. M-Auger electrons from ¹²⁵I), this effect can be very important.
Model parameters for representative wetland plant functional groups
Williams, Amber S.; Kiniry, James R.; Mushet, David M.; Smith, Loren M.; McMurry, Scott T.; Attebury, Kelly; Lang, Megan; McCarty, Gregory W.; Shaffer, Jill A.; Effland, William R.; Johnson, Mari-Vaughn V.
2017-01-01
Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and realistic simulation of the upland and wetland plant growth cycles. Objectives of this study were to quantify leaf area index (LAI), light extinction coefficient (k), and plant nitrogen (N), phosphorus (P), and potassium (K) concentrations in natural stands of representative plant species for some major plant functional groups in the United States. Functional groups in this study were based on these parameters and plant growth types to enable process-based modeling. We collected data at four locations representing some of the main wetland regions of the United States. At each site, we collected on-the-ground measurements of fraction of light intercepted, LAI, and dry matter within the 2013–2015 growing seasons. Maximum LAI and k variables showed noticeable variations among sites and years, while overall averages and functional group averages give useful estimates for multisite simulation modeling. Variation within each species gives an indication of what can be expected in such natural ecosystems. For P and K, the concentrations from highest to lowest were spikerush (Eleocharis macrostachya), reed canary grass (Phalaris arundinacea), smartweed (Polygonum spp.), cattail (Typha spp.), and hardstem bulrush (Schoenoplectus acutus). Spikerush had the highest N concentration, followed by smartweed, bulrush, reed canary grass, and then cattail. These parameters will be useful for the actual wetland species measured and for the wetland plant functional groups they represent. These parameters and the associated process-based models offer promise as valuable tools for evaluating environmental benefits of wetlands and for evaluating impacts of various agronomic practices in
Lee, Robert Mkw; Dickhout, Jeffrey G; Sandow, Shaun L
2017-04-01
Essential hypertension is a complex multifactorial disease process that involves the interaction of multiple genes at various loci throughout the genome, and the influence of environmental factors such as diet and lifestyle, to ultimately determine long-term arterial pressure. These factors converge with physiological signaling pathways to regulate the set-point of long-term blood pressure. In hypertension, structural changes in arteries occur and show differences within and between vascular beds, between species, models and sexes. Such changes can also reflect the development of hypertension, and the levels of circulating humoral and vasoactive compounds. The role of perivascular adipose tissue in the modulation of vascular structure under various disease states such as hypertension, obesity and metabolic syndrome is an emerging area of research, and is likely to contribute to the heterogeneity described in this review. Diversity in structure and related function is the norm, with morphological changes being causative in some beds and states, and in others, a consequence of hypertension. Specific animal models of hypertension have advantages and limitations, each with factors influencing the relevance of the model to the human hypertensive state/s. However, understanding the fundamental properties of artery function and how these relate to signalling mechanisms in real (intact) tissues is key for translating isolated cell and model data to have an impact and relevance in human disease etiology. Indeed, the ultimate aim of developing new treatments to correct vascular dysfunction requires understanding and recognition of the limitations of the methodologies used.
Walz, Yvonne; Wegmann, Martin; Leutner, Benjamin; Dech, Stefan; Vounatsou, Penelope; N'Goran, Eliézer K; Raso, Giovanna; Utzinger, Jürg
2015-01-01
Schistosomiasis is a widespread water-based disease that puts close to 800 million people at risk of infection, with more than 250 million infected, mainly in sub-Saharan Africa. Transmission is governed by the spatial distribution of specific freshwater snails that act as intermediate hosts and by the frequency, duration, and extent of human contact with infested water sources. Remote sensing data have been utilized for spatially explicit risk profiling of schistosomiasis. However, risk profiling based on remote sensing inherits a conceptual drawback when school-based disease prevalence data are related directly to measurements extracted at the location of the school, because transmission usually does not occur at the school itself. We therefore took the local environment around the schools into account by explicitly linking ecologically relevant environmental information on potential disease transmission sites to survey measurements of disease prevalence. Our models were validated at two sites with different landscapes in Côte d'Ivoire using high- and moderate-resolution remote sensing data based on random forest and partial least squares regression. We found that the ecologically relevant modelling approach explained up to 70% of the variation in Schistosoma infection prevalence and performed better than a purely pixel-based modelling approach. Furthermore, our study showed that model performance increased as a function of enlarging the school catchment area, confirming the hypothesis that suitable environments for schistosomiasis transmission rarely occur at the location of survey measurements.
Hygge, S; Ohman, A
1978-03-01
Fear-relevant (snakes, spiders, and rats) and fear-irrelevant (flowers, mushrooms, and berries) pictures were compared as conditioned and instigating stimuli in a vicarious classical conditioning paradigm with skin conductance responses as the dependent variable. A female confederate model and the subject watched the pictures side by side. After a few stimulus presentations, the experimenter interrupted to investigate alleged overreactions of the model to one of the stimulus classes. The model then vividly described a phobia for this object, which was to serve as a vicarious instigating stimulus. The experiment continued for a few conditioning trials, and then the experimenter announced that the disturbing stimulus would be omitted before the second part of the experiment. There was no effect of stimulus content on vicariously instigated responses, although significant overall instigation was observed. However, the responses to the stimulus that was paired with the model's phobic stimulus, that is, the vicariously conditioned responses, failed to extinguish during the second part of the experiment when it was fear-relevant but extinguished immediately when it was fear-irrelevant.
Parameter optimization in differential geometry based solvation models.
Wang, Bao; Wei, G W
2015-10-01
Differential geometry (DG) based solvation models are a new class of variational implicit-solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and to dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG-based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG-based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of the strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG-based solvation models. An interesting feature of the present DG-based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG-based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules.
Parameter Estimation in Stochastic Grey-Box Models
Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay
2004-01-01
An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Itô stochastic differential equations with measurement noise is presented, along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool...
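The core of such a scheme, evaluating the likelihood of the parameters through the Kalman filter's prediction-error decomposition, can be sketched for a scalar linear model, for which the extended Kalman filter reduces to the ordinary one. The model, noise levels, and grid search below are illustrative assumptions:

```python
import math
import random

def kf_neg_log_lik(theta, ys, x0=0.0, p0=1.0, q=0.1, r=0.1):
    """Negative log-likelihood of observations under the scalar grey-box model
        x[k+1] = theta * x[k] + w,   y[k] = x[k] + v,
    with process noise variance q and measurement noise variance r,
    accumulated recursively from the filter innovations."""
    x, p, nll = x0, p0, 0.0
    for y in ys:
        e = y - x                                  # innovation
        s = p + r                                  # innovation variance
        nll += 0.5 * (math.log(2.0 * math.pi * s) + e * e / s)
        k = p / s                                  # Kalman gain
        x, p = x + k * e, (1.0 - k) * p            # measurement update
        x, p = theta * x, theta * theta * p + q    # time update
    return nll

# Simulate data from theta = 0.8, then recover it by maximum likelihood.
rng = random.Random(0)
x, ys = 0.0, []
for _ in range(200):
    x = 0.8 * x + rng.gauss(0.0, math.sqrt(0.1))
    ys.append(x + rng.gauss(0.0, math.sqrt(0.1)))

grid = [i / 100.0 for i in range(-99, 100)]
theta_hat = min(grid, key=lambda t: kf_neg_log_lik(t, ys))
print(theta_hat)
```

A real implementation would replace the grid with a numerical optimizer and handle vector states, irregular sampling, and missing observations, as the abstract describes.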
Allowed Parameter Regions for a Tree-Level Inflation Model
MENG Xin-He
2001-01-01
Early-universe inflation is well known as a promising theory to explain the origin of the large-scale structure of the universe and to solve pressing problems of the early universe. For a reasonable inflation model, the potential during inflation must be very flat, at least in the direction of the inflaton. All known relevant astrophysical observations should be included when constructing the inflaton potential. For a general tree-level hybrid inflation potential, which has not been discussed fully so far, we show how its parameters can be constrained by the observed astrophysical data and obtained to the expected accuracy, consistent with cosmological requirements.
Modeling systems relevant to the biodiesel production using the CPA equation of state
Tsivintzelis, Ioannis; Ali, Shahid; Kontogeorgis, Georgios
2016-01-01
estimated by adjusting model predictions to recent DIPPR correlations and carefully selected literature data. Then, the performance of CPA was evaluated in correlating the vapor – liquid and liquid – liquid equilibrium of binary systems containing fatty acids and their esters, glycerides, water, alcohols...... and/or glycerol. Satisfactory correlation results were obtained using one (water-acids, alcohols/water - glycerol) or two (systems containing fatty acid esters with water, alcohols or glycerol and mixtures containing glycerides and alcohols) interaction parameters. Moreover, the interaction parameters...
Empirically modelled Pc3 activity based on solar wind parameters
T. Raita
2010-09-01
It is known that under certain solar wind (SW)/interplanetary magnetic field (IMF) conditions (e.g. high SW speed, low cone angle) the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models, with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density-related parameters, such as the dynamic pressure of the SW or the standoff distance of the magnetopause, work equally well in the model. The disappearance of Pc3s during low-density events can have at least four explanations according to existing upstream wave theory: 1. pausing of the ion-cyclotron resonance that generates the upstream ultra-low-frequency waves in the absence of protons; 2. weakening of the bow shock, which implies less efficient reflection; 3. the SW becoming sub-Alfvénic and hence unable to sweep back the waves propagating upstream at the Alfvén speed; and 4. the increase of the standoff distance of the magnetopause (and of the bow shock). Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid-latitude Pc3 activity predominantly through
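A minimal version of the multiple-linear-regression part of such modelling can be sketched as follows. The data here are synthetic, with made-up coefficients for SW speed, density, and cone angle rather than real OMNI/MM100 measurements:

```python
import random

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(b)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(m[r][i]))
        m[i], m[piv] = m[piv], m[i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n + 1):
                m[r][c] -= f * m[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][c] * x[c] for c in range(i + 1, n))) / m[i][i]
    return x

def fit_linear(rows, ys):
    """Ordinary least squares via the normal equations X^T X b = X^T y,
    with an intercept column prepended to each predictor row."""
    xs = [[1.0] + r for r in rows]
    k = len(xs[0])
    xtx = [[sum(x[i] * x[j] for x in xs) for j in range(k)] for i in range(k)]
    xty = [sum(x[i] * y for x, y in zip(xs, ys)) for i in range(k)]
    return solve(xtx, xty)

# Synthetic example: Pc3 activity rising with SW speed and density,
# falling with IMF cone angle (all coefficients invented for illustration).
rng = random.Random(3)
rows, ys = [], []
for _ in range(300):
    speed = rng.uniform(300.0, 700.0)   # km/s
    dens = rng.uniform(0.5, 10.0)       # cm^-3
    cone = rng.uniform(0.0, 90.0)       # degrees
    rows.append([speed, dens, cone])
    ys.append(0.01 * speed + 0.3 * dens - 0.05 * cone + rng.gauss(0.0, 0.5))
b = fit_linear(rows, ys)
print(b)
```

Dropping the density column from `rows` and refitting shows the kind of degradation the abstract reports when SW density is excluded from the parameter set.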
Modelling of bio-optical parameters of open ocean waters
Vadim N. Pelevin
2001-12-01
An original method for estimating the concentration of chlorophyll pigments, absorption of yellow substance, and absorption of suspended matter without pigments and yellow substance in detritus, using spectral diffuse attenuation coefficient for downwelling irradiance and irradiance reflectance data, has been applied to sea waters of different types in the open ocean (case 1). Using the effective numerical single-parameter classification with the water type optical index m as a parameter over the whole range of open ocean waters, the calculations have been carried out and the light absorption spectra of sea waters tabulated. These spectra are used to optimize the absorption models and thus to estimate the concentrations of the main admixtures in sea water. The value of m can be determined from direct measurements of the downward irradiance attenuation coefficient at 500 nm or calculated from remote sensing data using the regressions given in the article. The sea water composition can then be readily estimated from the tables given for any open ocean area if that one parameter m characterizing the basin is known.
Shen, Jiajian; Tryggestad, Erik; Younkin, James E; Keole, Sameer R; Furutani, Keith M; Kang, Yixiu; Herman, Michael G; Bues, Martin
2017-08-04
To accurately model the beam delivery time (BDT) for a synchrotron-based proton spot scanning system using experimentally determined beam parameters. A model to simulate the proton spot delivery sequences was constructed, and BDT was calculated by summing times for layer switch, spot switch, and spot delivery. Test plans were designed to isolate and quantify the relevant beam parameters in the operation cycle of the proton beam therapy delivery system. These parameters included the layer switch time, magnet preparation and verification time, average beam scanning speeds in x- and y-directions, proton spill rate, and maximum charge and maximum extraction time for each spill. The experimentally determined parameters, as well as the nominal values initially provided by the vendor, served as inputs to the model to predict BDTs for 602 clinical proton beam deliveries. The calculated BDTs (T_BDT) were compared with the BDTs recorded in the treatment delivery log files (T_Log): ∆t = T_Log − T_BDT. The experimentally determined average layer switch time for all 97 energies was 1.91 s (ranging from 1.9 to 2.0 s for beam energies from 71.3 to 228.8 MeV), average magnet preparation and verification time was 1.93 ms, the average scanning speeds were 5.9 m/s in x-direction and 19.3 m/s in y-direction, the proton spill rate was 8.7 MU/s, and the maximum proton charge available for one acceleration is 2.0 ± 0.4 nC. Some of the measured parameters differed from the nominal values provided by the vendor. The calculated BDTs using experimentally determined parameters matched the recorded BDTs of 602 beam deliveries (∆t = -0.49 ± 1.44 s), which were significantly more accurate than BDTs calculated using nominal timing parameters (∆t = -7.48 ± 6.97 s). An accurate model for BDT prediction was achieved by using the experimentally determined proton beam therapy delivery parameters, which may be useful in modeling the interplay effect and patient throughput. The model may provide
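The structure of such a BDT model, a sum of layer-switch, per-spot, and MU-delivery terms, can be sketched with the measured averages quoted in the abstract. The spot-coordinate units and the scan-travel term are simplifying assumptions, and spill charge/extraction limits are ignored:

```python
def beam_delivery_time(layers, layer_switch=1.91, spot_switch=0.00193,
                       spill_rate=8.7, vx=5.9, vy=19.3):
    """Estimate BDT (seconds) for a list of energy layers, each a list of
    (x, y, mu) spots with positions in metres. Sums: energy-layer switches,
    per-spot magnet preparation/verification, scan travel between spots
    (limited by the slower axis), and MU delivery at the spill rate.
    Default values follow the measured averages quoted in the abstract."""
    t = 0.0
    for i, layer in enumerate(layers):
        if i > 0:
            t += layer_switch                  # energy-layer switch
        prev = None
        for (x, y, mu) in layer:
            if prev is not None:
                dx, dy = abs(x - prev[0]), abs(y - prev[1])
                t += max(dx / vx, dy / vy)     # scanning travel
            t += spot_switch                   # magnet prep + verification
            t += mu / spill_rate               # MU delivery
            prev = (x, y)
    return t

t1 = beam_delivery_time([[(0.0, 0.0, 8.7)]])
t2 = beam_delivery_time([[(0.0, 0.0, 8.7)], [(0.0, 0.0, 8.7)]])
print(t1, t2)
```

With one 8.7 MU spot, the estimate is one second of spill plus one magnet cycle; adding a second layer adds the 1.91 s layer-switch overhead, which the abstract identifies as a dominant term.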
Blümel, Marcus; Hooper, Scott L; Guschlbauer, Christoph; White, William E; Büschges, Ansgar
2012-11-01
Characterizing muscle requires measuring such properties as force-length, force-activation, and force-velocity curves. These characterizations require large numbers of data points because both what type of function (e.g., linear, exponential, hyperbolic) best represents each property, and the values of the parameters in the relevant equations, need to be determined. Only a few properties are therefore generally measured in experiments on any one muscle, and complete characterizations are obtained by averaging data across a large number of muscles. Such averaging approaches can work well for muscles that are similar across individuals. However, considerable evidence indicates that large inter-individual variation exists, at least for some muscles. This variation poses difficulties for across-animal averaging approaches. Methods to describe all of a muscle's characteristics in experiments on individual muscles would therefore be useful. Prior work on stick insect extensor muscle has identified what functions describe each of this muscle's properties and shown that these equations apply across animals. Characterizing these muscles on an individual-by-individual basis therefore requires determining only the values of the parameters in these equations, not the form of the equations. We present here techniques that allow determining all these parameter values in experiments on single muscles. This technique will allow us to compare parameter variation across individuals and to model muscles individually. Similar experiments can likely be performed on single muscles in other systems. This approach may thus provide a widely applicable method for characterizing and modeling muscles from single experiments.
Gabere MN
2016-06-01
Musa Nur Gabere,1 Mohamed Aly Hussein,1 Mohammad Azhar Aziz2; 1Department of Bioinformatics, King Abdullah International Medical Research Center/King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; 2Colorectal Cancer Research Program, Department of Medical Genomics, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia. Purpose: There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods: In this study, we built a model that uses a support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy-maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four different kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results: The best model, which used 30 genes and the RBF kernel, outperformed other combinations; it had an accuracy of 84% for both tenfold and leave-one-out cross-validations in discriminating the cancer samples from the normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, and they achieved a prediction accuracy of 95.27% and 91.99%, respectively, compared to other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1
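The greedy mRMR selection step can be sketched in simplified form. Absolute Pearson correlation is used below as a stand-in for the mutual-information terms of the original criterion, and the toy feature/label data are invented for illustration:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5 if va and vb else 0.0

def mrmr(features, labels, k):
    """Greedy mRMR: at each step pick the feature maximizing
    relevance(feature, labels) minus its mean redundancy with the
    features already selected (difference form of the criterion)."""
    selected, remaining = [], list(range(len(features)))
    while len(selected) < k and remaining:
        def score(j):
            rel = abs(pearson(features[j], labels))
            red = (sum(abs(pearson(features[j], features[s])) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

labels = [0, 0, 0, 0, 1, 1, 1, 1]                 # toy class labels
f0 = [0.1, 0.0, 0.2, 0.1, 0.9, 1.0, 0.8, 0.9]     # informative feature
f1 = f0[:]                                        # redundant duplicate of f0
f2 = [0.5, 0.6, 0.4, 0.5, 0.6, 0.5, 0.5, 0.4]     # uninformative noise
print(mrmr([f0, f1, f2], labels, 1))
```

The first feature selected is the most label-relevant one (here index 0); the ranked subset would then be handed to a classifier such as the SVM described in the abstract.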
Breakdown parameter for kinetic modeling of multiscale gas flows.
Meng, Jianping; Dongari, Nishanth; Reese, Jason M; Zhang, Yonghao
2014-06-01
Multiscale methods built purely on the kinetic theory of gases provide information about the molecular velocity distribution function. It is therefore both important and feasible to establish new breakdown parameters for assessing the appropriateness of a fluid description at the continuum level by utilizing kinetic information rather than macroscopic flow quantities alone. We propose a new kinetic criterion to indirectly assess the errors introduced by a continuum-level description of the gas flow. The analysis, which includes numerical demonstrations, focuses on the validity of the Navier-Stokes-Fourier equations and corresponding kinetic models and reveals that the new criterion can consistently indicate the validity of continuum-level modeling in both low-speed and high-speed flows at different Knudsen numbers.
Structural Breaks, Parameter Stability and Energy Demand Modeling in Nigeria
Olusegun A. Omisakin
2012-08-01
This paper extends previous studies in modeling and estimating energy demand functions for both gasoline and kerosene petroleum products in Nigeria from 1977 to 2008. In contrast to earlier studies on Nigeria and other developing countries, this study specifically tests for the possibility of structural breaks/regime shifts and parameter instability in the energy demand functions using more recent and robust techniques. In addition, the study considers an alternative model specification which primarily captures the price-income interaction effects on both gasoline and kerosene demand functions. While the conventional residual-based cointegration tests employed fail to identify any meaningful long-run relationship in either function, the Gregory-Hansen structural break cointegration approach confirms the cointegration relationships despite the breakpoints. Both functions are also found to be stable over the period studied. The elasticity estimates also follow a priori expectations, being inelastic in both the long and short run for the two functions.
Borgonovo, Emanuele
2010-03-01
In risk analysis problems, the decision-making process is supported by the utilization of quantitative models. Assessing the relevance of interactions is essential information in the interpretation of model results. With such knowledge, analysts and decision-makers are able to understand whether risk is apportioned by individual factor contributions or by their joint action. However, models are oftentimes large, requiring a high number of input parameters, and complex, with individual model runs being time consuming. Computational complexity leads analysts to utilize one-parameter-at-a-time sensitivity methods, which prevent one from assessing interactions. In this work, we illustrate a methodology to quantify interactions in probabilistic safety assessment (PSA) models by varying one parameter at a time. The method is based on a property of the functional ANOVA decomposition of a finite change that allows one to determine exactly the relevance of factors when considered individually or together with their interactions with all other factors. A set of test cases illustrates the technique. We apply the methodology to the analysis of the core damage frequency of the large loss of coolant accident of a nuclear reactor. Numerical results reveal the nonadditive model structure, allow the relevance of interactions to be quantified, and identify the direction of change (increase or decrease in risk) implied by individual factor variations and by their cooperation.
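The one-parameter-at-a-time finite-change idea can be illustrated with a toy non-additive model. This sketch shows only the split between first-order effects and the aggregate interaction residual, not the paper's full functional-ANOVA decomposition:

```python
def finite_change_decomposition(f, x0, x1):
    """Decompose the finite change f(x1) - f(x0) into one-at-a-time
    first-order effects (only x_i moved to its new value) plus a
    residual that aggregates all interaction contributions."""
    base = f(x0)
    total = f(x1) - base
    individual = []
    for i in range(len(x0)):
        x = list(x0)
        x[i] = x1[i]                    # move only factor i
        individual.append(f(x) - base)
    interaction = total - sum(individual)
    return total, individual, interaction

# Example: a multiplicative (hence non-additive) toy risk model
f = lambda x: x[0] * x[1] + x[2]
total, ind, inter = finite_change_decomposition(f, [1.0, 1.0, 1.0],
                                                [2.0, 3.0, 1.5])
print(total, ind, inter)
```

For this model the interaction residual equals the product of the first two factor changes (1.0 × 2.0 = 2.0), so a nonzero residual immediately flags the nonadditive structure, which is the diagnostic the abstract describes.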
Stem cell therapy for joint problems using the horse as a clinically relevant animal model
Koch, Thomas Gadegaard; Betts, Dean H.
2007-01-01
Research into articular cartilage is a surprisingly recent endeavour and much remains to be learned about the normal development of the synovial joint and its components that interplay in osteoarthritis and focal cartilage defects. Stem cell research is likely to contribute to the understanding of the developmental biology of synovial joints and their pathologies. Before human clinical trials are undertaken, stem cell-based therapies for non-life-threatening disorders should be evaluated for their safety and efficacy using animal models of spontaneous disease and not solely by the existing laboratory models of experimentally induced lesions. The horse lends itself as a good animal model of spontaneous joint disorders that are clinically relevant to similar human disorders. Equine stem cell and tissue engineering studies may be financially feasible to principal investigators and small biotechnology companies...
A clinically relevant mouse model of canine osteosarcoma with spontaneous metastasis.
Chaffee, Beth K; Allen, Matthew J
2013-01-01
Many patients with osteosarcoma (OS) will succumb to distant metastasis, often involving the lungs. Effective therapies for treating lung metastases depend on the availability of a clinically relevant pre-clinical model. Mice were surgically implanted with OS tumor fragments. The time course of primary tumor growth and subsequent spread to the lung were determined. Following development of a lytic and proliferative primary bone lesion, tumor metastasized to the lung in the majority of mice. There was no evidence of tumor at three weeks, but 10 out of 11 mice ultimately developed secondary OS in the lung within 12 weeks. Implantation of OS tumor fragments leads to the development of primary bone tumors and secondary lung metastases, recapitulating the clinical behavior of OS. This model offers an advantage over cell suspension injection models by precluding initial seeding of the lung with tumor cells.
Tosini, Gianluca; Owino, Sharon; Guillaume, Jean-Luc; Jockers, Ralf
2014-08-01
Melatonin, the neuro-hormone synthesized during the night, has recently seen an unexpected extension of its functional implications toward type 2 diabetes development, visual functions, sleep disturbances, and depression. Transgenic mouse models were instrumental for the establishment of the link between melatonin and these major human diseases. Most of the actions of melatonin are mediated by two types of G protein-coupled receptors, named MT1 and MT2, which are expressed in many different organs and tissues. Understanding the pharmacology and function of mouse MT1 and MT2 receptors, including MT1/MT2 heteromers, will be of crucial importance to evaluate the relevance of these mouse models for future therapeutic developments. This review will critically discuss these aspects, and give some perspectives including the generation of new mouse models.
A novel criterion for determination of material model parameters
Andrade-Campos, A.; de-Carvalho, R.; Valente, R. A. F.
2011-05-01
Parameter identification problems have emerged due to the increasing demand for precision in the numerical results obtained by Finite Element Method (FEM) software. High result precision can only be obtained with confident input data and robust numerical techniques. The determination of parameters should always be performed by confronting numerical and experimental results, leading to the minimum difference between them. However, the success of this task depends on the specification of the cost/objective function, defined as the difference between the experimental and the numerical results. Recently, various objective functions have been formulated to assess the errors between the experimental and computed data (Lin et al., 2002; Cao and Lin, 2008; among others). The objective functions should be able to efficiently lead the optimisation process. An ideal objective function should have the following properties: (i) all the experimental data points on the curve and all experimental curves should have equal opportunity to be optimised; and (ii) different units and/or the number of curves in each sub-objective should not affect the overall performance of the fitting. These two criteria should be achieved without manually choosing the weighting factors. However, for some non-analytical specific problems, this is very difficult in practice. Null experimental or numerical values also make the task difficult. In this work, a novel objective function for constitutive model parameter identification is presented. It is a generalization of the work of Cao and Lin, and it is suitable for all kinds of constitutive models and mechanical tests, including cyclic tests and Bauschinger tests with null values.
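A minimal sketch of an objective function with properties (i) and (ii): each curve is normalized by the scale of its own experimental data and by its number of points, so curves with different units or lengths contribute equally without hand-picked weighting factors. This only illustrates the two desired properties; it is not the specific Cao-Lin generalization proposed in the paper.

```python
import numpy as np

def objective(curve_pairs):
    """Dimensionless misfit over several (experimental, numerical) curves.

    Each curve pair is normalized by the scale of its experimental data
    and by its number of points, so curves with different units or
    lengths contribute equally without manual weighting factors.
    """
    sub_objectives = []
    for exp, num in curve_pairs:
        exp, num = np.asarray(exp, float), np.asarray(num, float)
        scale = np.max(np.abs(exp))
        if scale == 0.0:                  # guard against all-null data
            scale = 1.0
        sub_objectives.append(np.mean(((num - exp) / scale) ** 2))
    return float(np.mean(sub_objectives))

# The same relative error expressed in two different units scores
# identically, so no manual weighting is needed.
mse_v = objective([([1.0, 2.0], [1.1, 2.2])])
mse_mv = objective([([1000.0, 2000.0], [1100.0, 2200.0])])
```

Averaging the per-curve sub-objectives (rather than summing raw residuals) is what keeps a long, large-magnitude curve from dominating the fit.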
Relevant Criteria for Testing the Quality of Models for Turbulent Wind Speed Fluctuations
Frandsen, Sten Tronæs; Ejsing Jørgensen, Hans; Sørensen, John Dalsgaard
2008-01-01
Seeking relevant criteria for testing the quality of turbulence models, the scale of turbulence and the gust factor have been estimated from data and compared with predictions from first-order models of these two quantities. It is found that the mean of the measured length scales is approximately 10% smaller than the IEC model for wind turbine hub height levels. The mean is only marginally dependent on trends in time series. It is also found that the coefficient of variation of the measured length scales is about 50%. 3 s and 10 s preaveraging of wind speed data are relevant for megawatt-size wind turbines when seeking wind characteristics that correspond to one blade and the entire rotor, respectively. For heights exceeding 50-60 m, the gust factor increases with wind speed. For heights larger than 60-80 m, present assumptions on the value of the gust factor are significantly...
Standard model parameters and the search for new physics
Marciano, W.J.
1988-04-01
In these lectures, my aim is to present an up-to-date status report on the standard model and some key tests of electroweak unification. Within that context, I also discuss how and where hints of new physics may emerge. To accomplish those goals, I have organized my presentation as follows: I discuss the standard model parameters with particular emphasis on the gauge coupling constants and vector boson masses. Examples of new physics appendages are also briefly commented on. In addition, because these lectures are intended for students and thus somewhat pedagogical, I have included an appendix on dimensional regularization and a simple computational example that employs that technique. Next, I focus on weak charged current phenomenology. Precision tests of the standard model are described and up-to-date values for the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix parameters are presented. Constraints implied by those tests for a 4th generation, supersymmetry, extra Z′ bosons, and compositeness are also discussed. I discuss weak neutral current phenomenology and the extraction of sin²θ_W from experiment. The results presented there are based on a recently completed global analysis of all existing data. I have chosen to concentrate that discussion on radiative corrections, the effect of a heavy top quark mass, and implications for grand unified theories (GUTs). The potential for further experimental progress is also commented on. I depart from the narrowest version of the standard model and discuss effects of neutrino masses and mixings. I have chosen to concentrate on oscillations, the Mikheyev-Smirnov-Wolfenstein (MSW) effect, and electromagnetic properties of neutrinos. On the latter topic, I will describe some recent work on resonant spin-flavor precession. Finally, I conclude with a prospectus on hopes for the future. 76 refs.
Optimization routine for identification of model parameters in soil plasticity
Mattsson, Hans; Klisinski, Marek; Axelsson, Kennet
2001-04-01
The paper presents an optimization routine especially developed for the identification of model parameters in soil plasticity on the basis of different soil tests. The main focus is on the mathematical aspects and the experience gained from applying this optimization routine. Mathematically, the optimization requires an objective function and a search strategy. Some alternative expressions for the objective function are formulated. They capture the overall soil behaviour and can be used in a simultaneous optimization against several laboratory tests. Two different search strategies, Rosenbrock's method and the Simplex method, both belonging to the category of direct search methods, are utilized in the routine. Direct search methods have generally proved to be reliable, and their relative simplicity makes them quite easy to program into workable codes. The Rosenbrock and Simplex methods are modified to make the search strategies as efficient and user-friendly as possible for the type of optimization problem addressed here. Since these search strategies are of a heuristic nature, which makes it difficult (or even impossible) to analyse their performance in a theoretical way, representative optimization examples against both simulated experimental results and performed triaxial tests are presented to show the efficiency of the optimization routine. From these examples, it has been concluded that the optimization routine is able to locate a minimum with good accuracy, fast enough to be a very useful tool for the identification of model parameters in soil plasticity.
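As a sketch of the direct-search approach, the snippet below fits a two-parameter hyperbolic stress-strain curve (a hypothetical stand-in for a soil plasticity model, not the routine of the paper) to synthetic test data with SciPy's Nelder-Mead simplex method; no gradients are required, which is the appeal of this class of methods.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-parameter hyperbolic stress-strain law (a Duncan-
# Chang-like shape, for illustration only): q(eps) = eps / (a + b*eps).
def model(params, eps):
    a, b = params
    return eps / (a + b * eps)

eps = np.linspace(0.001, 0.05, 20)
q_obs = model((0.002, 10.0), eps)        # synthetic "triaxial test" data

def cost(params):
    """Mean squared misfit; penalize non-physical parameter values."""
    if params[0] <= 0.0 or params[1] <= 0.0:
        return 1e9
    return float(np.mean((model(params, eps) - q_obs) ** 2))

# Nelder-Mead is a derivative-free direct search, the same family as
# the Rosenbrock and Simplex strategies used in the routine.
res = minimize(cost, x0=[0.01, 5.0], method="Nelder-Mead",
               options={"xatol": 1e-12, "fatol": 1e-16, "maxiter": 5000})
```

In practice the objective would combine several laboratory curves, as the abstract describes, but the search mechanics are unchanged.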
Monsivais, Diane B
2011-06-01
This article reviews the culture of biomedicine and current practices in pain management education, which often merge to create a hostile environment for effective chronic pain care. Areas of cultural tensions in chronic pain frequently involve the struggle to achieve credibility regarding one's complaints of pain (or being believed that the pain is real) and complying with pain medication protocols. The clinically relevant continuum model is presented as a framework allowing providers to approach care from an evidence-based, culturally appropriate (patient centered) perspective that takes into account the highest level of evidence available, provider expertise, and patient preferences and values. Copyright © 2011 Elsevier Inc. All rights reserved.
Tsivintzelis, Ioannis; Kontogeorgis, Georgios M.
2016-01-01
, or that it is a self-associating fluid with two, three or four association sites) and different possibilities for modelling mixtures of CO2 with other hydrogen bonding fluids (only use of one interaction parameter kij or assuming cross association interactions and obtaining the relevant parameters either via...
Rodent models of ischemic stroke lack translational relevance... are baboon models the answer?
Kwiecien, Timothy D; Sy, Christopher; Ding, Yuchuan
2014-05-01
Rodent models of ischemic stroke are associated with many issues and limitations, which greatly diminish the translational potential of these studies. Recent studies demonstrate that significant differences exist between rodent and human ischemic stroke. These differences include the physical characteristics of the stroke, as well as changes in the subsequent inflammatory and molecular pathways following the acute ischemic insult. Non-human primate (NHP) models of ischemic stroke, however, are much more similar to humans. In addition to evident anatomical similarities, the physiological responses that NHPs experience during ischemic stroke are much more applicable to the human condition and thus make it an attractive model for future research. The baboon ischemic stroke model, in particular, has been studied extensively in comparison to other NHP models. Here we discuss the major shortcomings associated with rodent ischemic stroke models and provide a comparative overview of baboon ischemic stroke models. Studies have shown that baboons, although more difficult to obtain and handle, are more representative of ischemic events in humans and may have greater translational potential that can offset these deficiencies. There remain critical issues within these baboon stroke studies that need to be addressed in future investigations. The most critical issue revolves around the size and the variability of baboon ischemic stroke. Compared to rodent models, however, issues such as these can be addressed in future studies. Importantly, baboon models avoid many drawbacks associated with rodent models including vascular variability and inconsistent inflammatory responses - issues that are inherent to the species and cannot be avoided.
A flexible, interactive software tool for fitting the parameters of neuronal models
Péter eFriedrich
2014-07-01
The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problem of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential) integrate-and-fire neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting...
Ragettli, S.; Pellicciotti, F.
2012-03-01
In the Dry Andes of central Chile, summer water resources originate mostly from snowmelt and ice melt. We use the physically based, spatially distributed hydrological model TOPKAPI to study the exchange between glaciers and climate in the upper Aconcagua River Basin during the summer season and identify the model parameters that are robust and transferable and those that are more dependent on calibration. TOPKAPI has recently been adapted to incorporate an enhanced temperature index approach for snow and ice melting. We suggest a calibration procedure that allows calibration of parameters in three steps by separating parameters governing distinct processes. We evaluate the parameters' transferability in time and in space by applying the model at two spatial scales. TOPKAPI's ability to simulate the relevant processes is tested against meteorological, ablation, and glacier runoff data measured on Juncal Norte Glacier during two glacier ablation seasons. The model was applied successfully to the climatic setting of the Dry Andes once its parameters were recalibrated. We found a clear distinction between parameters that are stable in time and those that need recalibration. The parameters of the melt model are transferable from one season to the other, while the parameters governing the extrapolation of meteorological input data and the routing of glacier meltwater need recalibration from one season to the other. Sensitivity analysis revealed that the model is most sensitive to the temperature lapse rate governing the extrapolation of air temperature from point measurements to the glacier scale and to the melt parameter that multiplies the shortwave radiation balance.
Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.
2010-01-01
these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters-and the predictions that depend on them-arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.
Variational methods to estimate terrestrial ecosystem model parameters
Delahaies, Sylvain; Roulstone, Ian
2016-04-01
Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Then soil chemistry and a non-negligible amount of time transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4D-Var) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
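The variational idea can be illustrated with a drastically simplified one-pool carbon box model (a made-up stand-in for DALEC, with invented numbers): a turnover rate and an initial stock are recovered by minimizing the misfit between the model trajectory and noisy observations, with a positivity constraint standing in for "ecological common sense".

```python
import numpy as np
from scipy.optimize import minimize

# One-pool toy carbon model (a drastic simplification of DALEC, for
# illustration only): dC/dt = u - k * C, forward-Euler discretized.
def run(k, c0, u, n, dt=1.0):
    c = np.empty(n)
    c[0] = c0
    for t in range(1, n):
        c[t] = c[t - 1] + dt * (u - k * c[t - 1])
    return c

rng = np.random.default_rng(3)
n, u = 120, 2.0
obs = run(0.05, 10.0, u, n) + rng.normal(0.0, 0.5, n)  # synthetic data

def cost(p):
    """Variational (4D-Var-like) misfit between trajectory and data."""
    k, c0 = p
    if k <= 0.0:
        return 1e12          # "ecological common sense" as a constraint
    return float(np.sum((run(k, c0, u, n) - obs) ** 2))

# Recover the turnover rate k and initial stock c0 from the data.
res = minimize(cost, x0=[0.1, 5.0], method="Nelder-Mead")
```

A full 4D-Var system would compute the gradient of this cost with an adjoint model instead of using a derivative-free search, but the cost function being minimized has the same structure.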
Greenwood-Van Meerveld, Beverley; Prusator, Dawn K; Johnson, Anthony C
2015-06-01
Visceral pain describes pain emanating from the thoracic, pelvic, or abdominal organs. In contrast to somatic pain, visceral pain is generally vague, poorly localized, and characterized by hypersensitivity to a stimulus such as organ distension. Animal models have played a pivotal role in our understanding of the mechanisms underlying the pathophysiology of visceral pain. This review focuses on animal models of visceral pain and their translational relevance. In addition, the challenges of using animal models to develop novel therapeutic approaches to treat visceral pain will be discussed. Copyright © 2015 the American Physiological Society.
A parameter model for dredge plume sediment source terms
Decrop, Boudewijn; De Mulder, Tom; Toorman, Erik; Sas, Marc
2017-01-01
, which is not available in all situations. For example, to allow correct representation of overflow plume dispersion in a real-time forecasting model, a fast assessment of the near-field behaviour is needed. For this reason, a semi-analytical parameter model has been developed that reproduces the near-field sediment dispersion obtained with the CFD model in a relatively accurate way. In this paper, this so-called grey-box model is presented.
Chertok, I M; Belov, A V; Abunin, A A
2012-01-01
This study aims at the early diagnostics of the geoeffectiveness of coronal mass ejections (CMEs) from quantitative parameters of the accompanying EUV dimming and arcade events. We study events of the 23rd solar cycle, in which major non-recurrent geomagnetic storms (GMS) with Dst < -100 nT are sufficiently reliably identified with their solar sources in the central part of the disk. Using the SOHO/EIT 195 Å images and MDI magnetograms, we select significant dimming and arcade areas and calculate summarized unsigned magnetic fluxes in these regions at the photospheric level. The high relevance of this eruption parameter is displayed by its pronounced correlation with the Forbush decrease (FD) magnitude, which, unlike GMSs, does not depend on the sign of the Bz component but is determined by global characteristics of ICMEs. Correlations with the same magnetic flux in the solar source region are found for the GMS intensity (at the first step, without taking into account factors determining the Bz component near t...
Pressure pulsation in roller pumps: a validated lumped parameter model.
Moscato, Francesco; Colacino, Francesco M; Arabia, Maurizio; Danieli, Guido A
2008-11-01
During open-heart surgery, roller pumps are often used to maintain the circulation of blood through the patient's body. They present numerous key features, but they suffer from several limitations: (a) they normally deliver uncontrolled pulsatile inlet and outlet pressure; (b) blood damage appears to be greater than that encountered with centrifugal pumps. A lumped parameter mathematical model of a roller pump (Sarns 7000, Terumo CVS, Ann Arbor, MI, USA) was developed to dynamically simulate pressures at the pump inlet and outlet in order to clarify the uncontrolled pulsation mechanism. Inlet and outlet pressures obtained by the mathematical model have been compared with those measured in various operating conditions: different rollers' rotating speeds, different tube occlusion rates, and different clamping degrees at the pump inlet and outlet. Model results agree with measured pressure waveforms, whose oscillations are generated by the tube compression/release mechanism during the rollers' engaging and disengaging phases. The Average Euclidean Error (AEE) was 20 mmHg and 33 mmHg for inlet and outlet pressure estimates, respectively. The normalized AEE never exceeded 0.16. The developed model can be exploited for designing roller pumps with improved performance aimed at reducing the undesired pressure pulsation.
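For reference, the AEE metric quoted above can be sketched as the mean pointwise distance between measured and simulated waveforms; the normalization by the measured pressure range shown here is an assumption, since the abstract quotes a normalized AEE without defining it.

```python
import numpy as np

def average_euclidean_error(measured, simulated):
    """AEE between two sampled waveforms, plus a normalized variant.

    The normalization by the measured range is an assumption; the
    abstract quotes AEE and a normalized AEE without defining them.
    """
    measured = np.asarray(measured, float)
    simulated = np.asarray(simulated, float)
    aee = float(np.mean(np.abs(simulated - measured)))
    naee = aee / float(np.max(measured) - np.min(measured))
    return aee, naee

# A simulated trace offset from the measurement by 1 mmHg everywhere
# gives AEE = 1 mmHg; over a 2 mmHg measured range, normalized AEE = 0.5.
aee, naee = average_euclidean_error([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])
```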
Okawada, Manabu; Wilson, Michael W; Larsen, Scott D; Lipka, Elke; Hillfinger, John; Teitelbaum, Daniel H
2016-12-01
Blockade of the renin-angiotensin system (RAS) has been shown to alleviate inflammatory processes in the gastrointestinal tract. The aim of this study was to determine if blockade of the RAS would be effective in an immunologically relevant colitis model, and to compare the outcome with an acute colitis model. A losartan analog, CCG-203025 (C23H26ClN3O5S), containing a highly polar sulfonic acid moiety that we expected would allow localized mucosal antagonism with minimal systemic absorption, was selected as an angiotensin II type 1a receptor antagonist (AT1aR-A). Two colitis models were studied: (1) acute colitis was induced in 8- to 10-week-old C57BL/6J mice by 2.5% dextran sodium sulfate (DSS, in drinking water) for 7 days; (2) IL-10-/- colitis: piroxicam (200 ppm) was administered orally in feed to 5-week-old IL-10-/- mice (C57BL/6J background) for 14 days, followed by enalaprilat (ACE-I), CCG-203025, or PBS administered transanally for 14 days. In the DSS model, weight loss and histologic score for CCG-203025 were better than with placebo. In the IL-10-/- model, ACE-I suppressed histologic damage better than CCG-203025. Both ACE-I and CCG-203025 reduced pro-inflammatory cytokines and chemokines. This study demonstrated the therapeutic efficacy of both ACE-I and AT1aR-A for preventing the development of both acute and immunologically relevant colitis.
Budhwani, Karim Ismail
The tremendous quality-of-life impact notwithstanding, cardiovascular diseases and cancer add up to over US$700bn each year in financial costs alone. Aging and population growth are expected to further expand the problem space, while drug research and development remain expensive. However, preclinical costs can be substantially mitigated by substituting animal models with in vitro devices that accurately model human cardiovascular transport. Here we present a novel physiologically relevant lab-on-a-brane that simulates in vivo pressure, flow, strain, and shear waveforms associated with normal and pathological conditions in large and small blood vessels for studying molecular transport across the endothelial monolayer. The device builds upon a previously demonstrated integrated microfluidic loop design by: (a) introducing nanoscale pores in the substrate membrane to enable transmembrane molecular transport; (b) transforming the substrate membrane into a nanofibrous matrix for 3D smooth muscle cell (SMC) tissue culture; (c) integrating electrospinning fabrication methods; (d) engineering an invertible sandwich cell culture device architecture; and (e) devising a healthy co-culture mechanism for a human arterial endothelial cell (HAEC) monolayer and multiple layers of human smooth muscle cells (HSMC) to accurately mimic arterial anatomy. Structural and mechanical characterization was conducted using confocal microscopy, SEM, stress/strain analysis, and infrared spectroscopy. Transport was characterized using a FITC-dextran hydraulic permeability protocol. Structure and transport characterization successfully demonstrate device viability as a physiologically relevant arterial mimic for testing transendothelial transport. Thus, our lab-on-a-brane provides a highly effective and efficient, yet considerably inexpensive, physiologically relevant alternative for pharmacokinetic evaluation; possibly reducing animals used in pre-clinical testing, clinical trials cost from false...
Xiao-meng SONG
2013-01-01
Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long run times and high computational cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
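A minimal, self-contained sketch of the first (Morris screening) step, implemented directly on a toy response rather than the hydrological model of the paper: elementary effects are averaged in absolute value (mu*) to rank parameter importance, while their spread (sigma) flags nonlinearity or interactions.

```python
import numpy as np

rng = np.random.default_rng(0)

def morris_screening(f, k, trajectories=50, levels=4):
    """Elementary-effects (Morris) screening on the unit cube (sketch).

    mu_star: mean absolute elementary effect, ranks overall importance.
    sigma:   spread of the effects, flags nonlinearity/interactions.
    """
    delta = levels / (2.0 * (levels - 1))
    ee = np.zeros((trajectories, k))
    for t in range(trajectories):
        # Base point chosen on the grid so that x + delta stays in [0, 1].
        x = rng.integers(0, levels // 2, size=k) / (levels - 1)
        for i in rng.permutation(k):       # one-at-a-time perturbations
            x_new = x.copy()
            x_new[i] = x[i] + delta
            ee[t, i] = (f(x_new) - f(x)) / delta
            x = x_new
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy response: x0 strong and nonlinear, x1 weakly linear, x2 inert.
f = lambda x: np.sin(np.pi * x[0]) + 0.5 * x[1]
mu_star, sigma = morris_screening(f, k=3)
# mu_star ranks x0 first and flags x2 as negligible; sigma is ~0 for
# the purely linear x1 but large for x0, exposing its nonlinearity.
```

In the paper's framework, the parameters surviving this cheap screen would then be passed to the variance-based (RSMSobol) step for quantitative indices.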
Platelet gel preparation methods and relevant parameters
温天杨; 王爱红; 许樟荣
2013-01-01
BACKGROUND: The preparation methods of platelet gel are various, but there is no uniform standard. OBJECTIVE: To summarize the methods of platelet gel preparation and to explore the relevant parameters. METHODS: The first author searched the PubMed and Wanfang databases for relevant articles published from 1990 to 2011 using the keywords of "platelet gel, classification, parameters" in English and Chinese, respectively. RESULTS AND CONCLUSION: There are two main parameters by which to classify the different preparation methods: the yield and composition of the gel, and the fibrin network of the gel. According to these two parameters, the preparation methods of platelet gel can be classified into four categories, namely, pure platelet-rich plasma, leukocyte- and platelet-rich plasma, pure platelet-rich fibrin, and leukocyte- and platelet-rich fibrin. According to the preparation process, each preparation method of PRP gel can also be divided into a manual protocol and an automatic protocol. There are inadequacies in all the preparation methods.
Lascaux, Franck; Fini, Luca
2015-01-01
This article aims at demonstrating the feasibility of forecasting the most relevant classical atmospheric parameters for astronomical applications (wind speed and direction, temperature) above the ESO ground-based site of Cerro Paranal with a mesoscale atmospheric model called Meso-NH. In a previous paper we preliminarily treated the model performance obtained in reconstructing some key atmospheric parameters in the surface layer 0-30 m, studying the bias and the RMSE on a statistical sample of 20 nights. Results were very encouraging, and it therefore appeared mandatory to confirm such a good result on a much richer statistical sample. In this paper, the study was extended to a total sample of 129 nights between 2007 and 2011 distributed in different parts of the solar year. This large sample made our analysis more robust and definitive in terms of the model performance and permitted us to confirm the excellent performance of the model. Besides, we present an independent analysis of the model p...
Uniqueness, scale, and resolution issues in groundwater model parameter identification
Tian-chyi J. Yeh
2015-07-01
This paper first visits uniqueness, scale, and resolution issues in groundwater flow forward modeling problems. It then makes the point that non-unique solutions to groundwater flow inverse problems arise from a lack of information necessary to make the problems well defined. Subsequently, it presents the necessary conditions for a well-defined inverse problem. They are full specifications of (1) flux boundaries and sources/sinks, and (2) heads everywhere in the domain at at least three times (one of which is t = 0), with the head change everywhere at those times being nonzero for transient flow. Numerical experiments are presented to corroborate the fact that, once the necessary conditions are met, the inverse problem has a unique solution. We also demonstrate that measurement noise, instability, and sensitivity are issues related to solution techniques rather than the inverse problems themselves. In addition, we show that a mathematically well-defined inverse problem, based on an equivalent homogeneous or a layered conceptual model, may yield physically incorrect and scenario-dependent parameter values. These issues are attributed to inconsistency between the scale of the head observed and that implied by these models. Such issues can be reduced only if a sufficiently large number of observation wells is used in the equivalent homogeneous domain or each layer. With a large number of wells, we then show that an increase in parameterization can lead to a higher-resolution depiction of heterogeneity if an appropriate inverse methodology is used. Furthermore, we illustrate that, using the same number of wells, a highly parameterized model in conjunction with hydraulic tomography can yield better characterization of the aquifer and minimize the scale- and scenario-dependent problems. Lastly, the benefits of the highly parameterized model and hydraulic tomography are tested according to their ability to improve predictions of aquifer responses induced by...
Finding model parameters: Genetic algorithms and the numerical modelling of quartz luminescence
Adamiec, Grzegorz [Department of Radioisotopes, Institute of Physics, Silesian University of Technology, ul. Krzywoustego 2, 44-100 Gliwice (Poland)]. E-mail: grzegorz.adamiec@polsl.pl; Bluszcz, Andrzej [Department of Radioisotopes, Institute of Physics, Silesian University of Technology, ul. Krzywoustego 2, 44-100 Gliwice (Poland); Bailey, Richard [Department of Geography, Royal Holloway, University of London, Egham, Surrey, TW20 0EX (United Kingdom); Garcia-Talavera, Marta [LIBRA, Centro I-D, Campus Miguel Delibes, 47011 Valladolid (Spain)
2006-08-15
The paper presents an application of genetic algorithms (GAs) to the problem of finding appropriate parameter values for the numerical simulation of quartz thermoluminescence (TL). We show that with the use of GAs it is possible to achieve a very good match between simulated and experimentally measured characteristics of quartz, for example the thermal activation characteristics of fired quartz. The rate equations of charge transport in the numerical model of luminescence in quartz contain a large number of parameters (trap depths, frequency factors, populations, charge capture probabilities, optical detrapping probabilities, and recombination probabilities). Given that comprehensive models consist of over 10 traps, finding model parameters proves a very difficult task. Manual parameter changes are very time consuming and allow only a limited degree of accuracy. GAs provide a semi-automatic way of finding appropriate parameters.
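The fitting loop this abstract describes can be sketched in miniature. The toy model below (a two-parameter exponential decay standing in for the quartz charge-transport rate equations) and every constant in it are illustrative assumptions, not the authors' setup:

```python
import math
import random

random.seed(0)

# Toy "measured" curve generated from known parameters (a = 2.0, b = 0.5);
# in the paper, the analogue would be measured TL characteristics of quartz.
xs = [0.1 * i for i in range(20)]
target = [2.0 * math.exp(-0.5 * x) for x in xs]

def fitness(ind):
    """Negative sum-of-squares misfit between simulated and 'measured' curves."""
    a, b = ind
    return -sum((a * math.exp(-b * x) - t) ** 2 for x, t in zip(xs, target))

def evolve(pop_size=40, generations=60):
    # Random initial population within assumed physical bounds.
    pop = [(random.uniform(0, 5), random.uniform(0, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            w = random.random()  # blend crossover between two parents
            child = tuple(w * g1 + (1 - w) * g2 for g1, g2 in zip(p1, p2))
            if random.random() < 0.2:  # occasional Gaussian mutation
                child = tuple(max(0.0, g + random.gauss(0, 0.1)) for g in child)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

A real application would replace `fitness` with a full solve of the rate equations for all traps, which is where the computational cost (and the appeal of a semi-automatic search) lies.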
Y. Sun
2013-04-01
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, deterministic least-squares fitting and stochastic Markov-Chain Monte-Carlo (MCMC) Bayesian inversion, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by least-squares fitting provides little improvement in the model simulations, but the sampling-based stochastic inversion approaches are consistent – as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified as significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has a larger impact on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty
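The sampling-based calibration contrasted here can be illustrated with a one-parameter random-walk Metropolis sampler. The logistic forward model, observation noise, and uniform prior below are invented stand-ins, not CLM4:

```python
import math
import random

random.seed(1)

# Hypothetical forward model: runoff fraction as a logistic function of one
# soil parameter theta; "observations" are generated with theta_true = 0.7.
def forward(theta):
    return 1.0 / (1.0 + math.exp(-5.0 * (theta - 0.5)))

obs = forward(0.7)
sigma = 0.05  # assumed observation noise

def log_post(theta):
    if not 0.0 <= theta <= 1.0:  # uniform prior on [0, 1]
        return -math.inf
    return -((forward(theta) - obs) ** 2) / (2 * sigma ** 2)

# Random-walk Metropolis: propose, then accept with probability
# min(1, posterior ratio); discard the burn-in portion of the chain.
theta, samples = 0.5, []
for i in range(20000):
    prop = theta + random.gauss(0, 0.05)
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    if i > 5000:
        samples.append(theta)

posterior_mean = sum(samples) / len(samples)
```

The narrowing predictive intervals the abstract mentions correspond to the spread of `samples` shrinking as more observations enter `log_post`.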
Impact relevance and usability of high resolution climate modeling and data
Arnott, James C. [Aspen Global Change Inst., Basalt, CO (United States)
2016-10-30
The Aspen Global Change Institute hosted a technical science workshop entitled, “Impact Relevance and Usability of High-Resolution Climate Modeling and Datasets,” on August 2-7, 2015 in Aspen, CO. Kate Calvin (Pacific Northwest National Laboratory), Andrew Jones (Lawrence Berkeley National Laboratory) and Jean-François Lamarque (NCAR) served as co-chairs for the workshop. The meeting included the participation of 29 scientists for a total of 145 participant days. Following the workshop, workshop co-chairs authored a meeting report published in Eos on April 27, 2016. Insights from the workshop directly contributed to the formation of a new DOE-supported project co-led by workshop co-chair Andy Jones. A subset of meeting participants continue to work on a publication on institutional innovations that can support the usability of high resolution modeling, among other sources of climate information.
Modeling soil detachment capacity by rill flow using hydraulic parameters
Wang, Dongdong; Wang, Zhanli; Shen, Nan; Chen, Hao
2016-04-01
The relationship between soil detachment capacity (Dc) by rill flow and hydraulic parameters (e.g., flow velocity, shear stress, unit stream power, stream power, and unit energy) at low flow rates is investigated to establish an accurate experimental model. Experiments are conducted using a 4 × 0.1 m rill hydraulic flume with a constant artificial roughness on the flume bed. The flow rates range from 0.22 × 10⁻³ m² s⁻¹ to 0.67 × 10⁻³ m² s⁻¹, and the slope gradients vary from 15.8% to 38.4%. Regression analysis indicates that the Dc by rill flow can be predicted using the linear equations of flow velocity, stream power, unit stream power, and unit energy. Dc by rill flow that is fitted to shear stress can be predicted with a power function equation. Predictions based on flow velocity, unit energy, and stream power are powerful, but those based on shear stress, especially on unit stream power, are relatively poor. The prediction based on flow velocity provides the best estimates of Dc by rill flow because of the simplicity and availability of its measurements. Owing to error in measuring flow velocity at low flow rates, the predictive abilities of Dc by rill flow using all hydraulic parameters are relatively lower in this study compared with the results of previous research. The measuring accuracy of experiments for flow velocity should be improved in future research.
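The linear Dc–velocity relation reported here amounts to an ordinary least-squares fit. A minimal sketch, using invented (velocity, Dc) pairs rather than the paper's measurements:

```python
# Least-squares fit of Dc = a + b * v from hypothetical (velocity, Dc) pairs;
# the numbers are illustrative only, not the flume data.
data = [(0.30, 0.012), (0.40, 0.021), (0.50, 0.029), (0.60, 0.040), (0.70, 0.048)]

n = len(data)
sv = sum(v for v, _ in data)          # sum of velocities
sd = sum(d for _, d in data)          # sum of detachment capacities
svv = sum(v * v for v, _ in data)
svd = sum(v * d for v, d in data)

# Closed-form simple-regression coefficients.
b = (n * svd - sv * sd) / (n * svv - sv * sv)  # slope
a = (sd - b * sv) / n                           # intercept

def predict(v):
    return a + b * v
```

The same closed-form slope/intercept formulas apply to the stream-power and unit-energy predictors; only the regressor column changes (the shear-stress case would instead be fit as a power law, e.g. on log-transformed data).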
Parameter estimation and hypothesis testing in linear models
Koch, Karl-Rudolf
1999-01-01
The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...
Parameters-related uncertainty in modeling sugar cane yield with an agro-Land Surface Model
Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Ruget, F.; Gabrielle, B.
2012-12-01
Agro-Land Surface Models (agro-LSM) have been developed from the coupling of specific crop models and large-scale generic vegetation models. They aim at accounting for the spatial distribution and variability of energy, water and carbon fluxes within soil-vegetation-atmosphere continuum with a particular emphasis on how crop phenology and agricultural management practice influence the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty in these models is related to the many parameters included in the models' equations. In this study, we quantify the parameter-based uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS on a multi-regional approach with data from sites in Australia, La Reunion and Brazil. First, the main source of uncertainty for the output variables NPP, GPP, and sensible heat flux (SH) is determined through a screening of the main parameters of the model on a multi-site basis leading to the selection of a subset of most sensitive parameters causing most of the uncertainty. In a second step, a sensitivity analysis is carried out on the parameters selected from the screening analysis at a regional scale. For this, a Monte-Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used. First, we quantify the sensitivity of the output variables to individual input parameters on a regional scale for two regions of intensive sugar cane cultivation in Australia and Brazil. Then, we quantify the overall uncertainty in the simulation's outputs propagated from the uncertainty in the input parameters. Seven parameters are identified by the screening procedure as driving most of the uncertainty in the agro-LSM ORCHIDEE-STICS model output at all sites. These parameters control photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), root
Adaptive Unified Biased Estimators of Parameters in Linear Model
Hu Yang; Li-xing Zhu
2004-01-01
To tackle multicollinearity or ill-conditioned design matrices in linear models, adaptive biased estimators such as the time-honored Stein estimator, the ridge estimator and the principal component estimator have been studied intensively. To establish when a biased estimator uniformly outperforms the least squares estimator, some sufficient conditions have been proposed in the literature. In this paper, we propose a unified framework to formulate a class of adaptive biased estimators. This class includes all existing biased estimators and some new ones. A sufficient condition for outperforming the least squares estimator is proposed. By selecting the parameters in this condition, we can recover all double-type conditions in the literature.
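A minimal numerical sketch of why such biased estimators help under multicollinearity: on a nearly collinear two-column design, the ridge estimator (X'X + kI)^(-1) X'y shrinks the wild least-squares coefficients. The design matrix, response, and ridge constant below are invented for illustration:

```python
# Nearly collinear design: the second column barely varies around the first.
X = [[1.0, 1.000], [1.0, 1.001], [1.0, 0.999], [1.0, 1.002]]
y = [2.0, 2.1, 1.9, 2.2]

def ridge_solve(X, y, k):
    """Solve the 2x2 normal equations (X'X + k I) beta = X'y directly."""
    a = sum(r[0] * r[0] for r in X) + k
    b = sum(r[0] * r[1] for r in X)
    c = sum(r[1] * r[1] for r in X) + k
    p = sum(r[0] * yi for r, yi in zip(X, y))
    q = sum(r[1] * yi for r, yi in zip(X, y))
    det = a * c - b * b
    return [(c * p - b * q) / det, (a * q - b * p) / det]

beta_ols = ridge_solve(X, y, 0.0)    # least squares: huge, unstable coefficients
beta_ridge = ridge_solve(X, y, 0.1)  # ridge (k = 0.1): shrunken, stable coefficients
```

Here `beta_ols` blows up to roughly (-98, 100) because X'X is nearly singular, while the ridge solution has a far smaller norm; this shrinkage is the price (bias) paid for the variance reduction that the sufficient conditions in the paper quantify.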
Evaluation of the perceptual grouping parameter in the CTVA model
Manuel Cortijo
2005-01-01
The CODE Theory of Visual Attention (CTVA) is a mathematical model explaining the effects of grouping by proximity and distance upon reaction times and accuracy of response with regard to elements in the visual display. The predictions of the theory agree quite acceptably in one and two dimensions (CTVA-2D) with the experimental results (reaction times and accuracy of response). The difference between reaction times for the compatible and incompatible responses, known as the response-compatibility effect, is also acceptably predicted, except at small distances and high numbers of distractors. Further results using the same paradigm at even smaller distances have now been obtained, showing greater discrepancies. We have therefore introduced a method to evaluate the strength of sensory evidence (the eta parameter), which takes grouping by similarity into account and minimizes these discrepancies.
Passegger, Vera Maria; Reiners, Ansgar
2016-01-01
M-dwarf stars are the most numerous stars in the Universe; they span a wide range in mass and are a focus of ongoing and planned exoplanet surveys. To investigate and understand their physical nature, detailed spectral information and accurate stellar models are needed. We use a new synthetic atmosphere model generation and compare model spectra to observations. To test the model accuracy, we compared the models to four benchmark stars with atmospheric parameters for which independent information from interferometric radius measurements is available. We used $\chi^2$-based methods to determine parameters from high-resolution spectroscopic observations. Our synthetic spectra are based on the new PHOENIX grid that uses the ACES description for the equation of state. This is a model generation expected to be especially suitable for low-temperature atmospheres. We identified suitable spectral tracers of atmospheric parameters and determined the uncertainties in $T_{\rm eff}$, $\log{g}$, and [Fe/H] resul...
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate these uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification
Lezhnin Sergey
2017-01-01
A two-temperature model of the outflow from a vessel with initially supercritical parameters of the medium has been implemented. The model uses a thermodynamically non-equilibrium relaxation approach to describe phase transitions. Based on a new asymptotic model for computing the relaxation time, the outflow of water with supercritical initial pressure and super- and subcritical temperatures has been calculated.
Coupled 1D-2D hydrodynamic inundation model for sewer overflow: Influence of modeling parameters
Adeniyi Ganiyu Adeogun
2015-10-01
This paper presents the outcome of our investigation of the influence of modeling parameters on a 1D-2D hydrodynamic inundation model for sewer overflow, developed through coupling of an existing 1D sewer network model (SWMM) and a 2D inundation model (BREZO). The 1D-2D hydrodynamic model was developed for the purpose of examining flood incidence due to surcharged water on the overland surface. The investigation was carried out by performing sensitivity analysis on the developed model. For the sensitivity analysis, modeling parameters such as mesh resolution, Digital Elevation Model (DEM) resolution, and roughness were considered. The outcome of the study shows the model is sensitive to changes in these parameters. The performance of the model is significantly influenced by the Manning friction value, the DEM resolution and the area of the triangular mesh. Also, changes in the aforementioned modeling parameters influence flood characteristics, such as the inundation extent, the flow depth and the velocity across the model domain.
Statistical Models of Fracture Relevant to Nuclear-Grade Graphite: Review and Recommendations
Nemeth, Noel N.; Bratton, Robert L.
2011-01-01
The nuclear-grade (low-impurity) graphite needed for the fuel element and moderator material for next-generation (Gen IV) reactors displays large scatter in strength and a nonlinear stress-strain response from damage accumulation. This response can be characterized as quasi-brittle. In this expanded review, relevant statistical failure models for various brittle and quasi-brittle material systems are discussed with regard to strength distribution, size effect, multiaxial strength, and damage accumulation. This includes descriptions of the Weibull, Batdorf, and Burchell models as well as models that describe the strength response of composite materials, which involves distributed damage. Results from lattice simulations are included for a physics-based description of material breakdown. Consideration is given to the predicted transition between brittle and quasi-brittle damage behavior versus the density of damage (level of disorder) within the material system. The literature indicates that weakest-link-based failure modeling approaches appear to be reasonably robust in that they can be applied to materials that display distributed damage, provided that the level of disorder in the material is not too large. The Weibull distribution is argued to be the most appropriate statistical distribution to model the stochastic-strength response of graphite.
Guo, Jing; McLeod, Poppy Lauretta
2014-01-01
Drawing upon the Search for Ideas in Associative Memory (SIAM) model as the theoretical framework, the impact of heterogeneity and topic relevance of visual stimuli on ideation performance was examined. Results from a laboratory experiment showed that visual stimuli increased productivity and diversity of idea generation, that relevance to the…
Wan, H.; Rasch, P. J.; Zhang, K.; Qian, Y.; Yan, H.; Zhao, C.
2014-04-01
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivities studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
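The ensemble-versus-serial idea can be checked on a toy fast process. The AR(1) surrogate and all constants below are assumptions for illustration, not CAM5:

```python
import random

random.seed(3)

# Toy "fast process": an AR(1) whose stationary mean we want to estimate.
# x_{t+1} = 0.5 x_t + N(1.0, 0.2), so the stationary mean is 1 / (1 - 0.5) = 2.
def simulate(steps, x0=0.0):
    x, vals = x0, []
    for _ in range(steps):
        x = 0.5 * x + random.gauss(1.0, 0.2)
        vals.append(x)
    return vals

# Traditional approach: one long serial integration.
long_mean = sum(simulate(5000)) / 5000

# Ensemble approach: 100 short independent runs, discarding each run's spin-up;
# these could be integrated simultaneously, cutting turnaround time.
ensemble = [simulate(50)[-25:] for _ in range(100)]
ens_mean = sum(sum(r) for r in ensemble) / sum(len(r) for r in ensemble)
```

Both estimates recover the same stationary statistic; the ensemble's advantage is that its members are embarrassingly parallel, which is the efficiency argument the abstract makes.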
From research excellence to brand relevance: A model for higher education reputation building
Nina Overton-de Klerk
2016-05-01
In this article we propose a novel approach to reputation development at higher education institutions. Global reputation development at higher education institutions is largely driven by research excellence, is predominantly measured by research output, and is predominantly reflected in hierarchical university rankings. The ranking becomes equated with brand equity. We argue that the current approach to reputation development in higher education institutions is modernist and linear. This is strangely out-of-kilter with the complexities of a transforming society in flux, the demands of a diversity of stakeholders, and the drive towards transdisciplinarity, laterality, reflexivity and relevance in science. Good research clearly remains an important ingredient of a university's brand value. However, a case can be made for brand relevance, co-created in collaboration with stakeholders, as an alternative and non-linear way of differentiation. This approach is appropriate in light of challenges in strategic science globally as well as trends and shifts in the emerging paradigm of strategic communication. In applying strategic communication principles to current trends and issues in strategic science and the communication thereof, an alternative model for strategic reputation building at higher education institutions is developed.
FEM numerical model study of electrosurgical dispersive electrode design parameters.
Pearce, John A
2015-01-01
Electrosurgical dispersive electrodes must safely carry the surgical current in monopolar procedures, such as those used in cutting, coagulation and radio frequency ablation (RFA). Of these, RFA represents the most stringent design constraint since ablation currents are often 1 to 2 A rms or more (continuous) for several minutes, depending on the size of the lesion desired and local heat transfer conditions at the applicator electrode. This stands in contrast to standard surgical activations, which are intermittent and usually less than 1 A rms, but for several seconds at a time. Dispersive electrode temperature rise is also critically determined by the sub-surface skin anatomy, thicknesses of the subcutaneous and supra-muscular fat, etc. Currently, we lack fundamental engineering design criteria that provide an estimating framework for preliminary designs of these electrodes. The lack of a fundamental design framework means that a large number of experiments must be conducted in order to establish a reasonable design. Previously, an attempt to correlate maximum temperatures in experimental work with the average current density-time product failed to yield a good match. This paper develops and applies a new measure of an electrode stress parameter that correlates well both with the previous experimental data and with numerical models of other electrode shapes. The finite element method (FEM) model work was calibrated against experimental RF lesions in porcine skin to establish the fundamental principle underlying dispersive electrode performance. The results can be used in preliminary electrode design calculations, experiment series design and performance evaluation.
Modeling and parameter estimation for hydraulic system of excavator's arm
HE Qing-hua; HAO Peng; ZHANG Da-qing
2008-01-01
A retrofitted electro-hydraulic proportional system for a hydraulic excavator is introduced first. According to the principle and characteristics of the load-independent flow distribution (LUDV) system, taking the boom hydraulic system as an example and ignoring the leakage of the hydraulic cylinder and the mass of oil in it, a force equilibrium equation and a continuity equation of the hydraulic cylinder were set up. Based on the flow equation of the electro-hydraulic proportional valve, the pressure across the valve and the pressure difference were tested and analyzed. The results show that the pressure difference does not change with load, and it approximates 2.0 MPa. Then, assuming the flow across the valve is directly proportional to spool displacement and is not influenced by load, a simplified model of the electro-hydraulic system was put forward. At the same time, by analyzing the structure and load-bearing of the boom instrument, and combining the moment equivalent equation of the manipulator with the law of rotation, estimation methods and equations for such parameters as the equivalent mass and bearing force of the hydraulic cylinder were set up. Finally, the step response of the flow of the boom cylinder was tested when the electro-hydraulic proportional valve was driven by a step current. Based on the experimental curve, the flow gain coefficient of the valve is identified as 2.825×10⁻⁴ m³/(s·A) and the model is verified.
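Under the stated assumption that flow is proportional to spool displacement and independent of load, the flow gain follows from a single steady-state reading of the step response. The step current and steady flow below are hypothetical values chosen only to be consistent with the reported gain:

```python
# Flow gain identification from a step response, assuming q_ss = K_q * i_step
# (flow proportional to spool displacement, displacement proportional to current).
# Both input values are illustrative, not the measured data.
i_step = 0.8       # step current applied to the proportional valve, A
q_steady = 2.26e-4 # steady-state flow read from the response curve, m^3/s

K_q = q_steady / i_step  # flow gain coefficient, m^3/(s*A)
```

With these assumed readings the quotient reproduces a gain of 2.825×10⁻⁴ m³/(s·A), matching the order of the value identified in the paper.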
Relevance of the Lin's and Host hydropedological models to predict grape yield and wine quality
Costantini, E. A. C.; Pellegrini, S.; Bucelli, P.; Storchi, P.; Vignozzi, N.; Barbetti, R.; Campagnolo, S.
2009-09-01
The adoption of precision agriculture in viticulture could be greatly enhanced by the diffusion of straightforward, easy-to-apply hydropedological models able to predict the spatial variability of available soil water. The Lin's and Host hydropedological models were applied to standard soil series descriptions and hillslope position to predict the distribution of hydrological functional units in two vineyards and their relevance for grape yield and wine quality. A three-year trial was carried out in Chianti (Central Italy) on Sangiovese. The soils of the vineyards differed in structure, porosity and related hydropedological characteristics, as well as in salinity. Soil spatial variability was deeply affected by earth movement carried out before vine plantation. Six plots were selected in the different hydrological functional units of the two vineyards, that is, at summit, backslope and footslope morphological positions, to monitor soil hydrology, grape production and wine quality. Plot selection was based upon a cluster analysis of local slope, topographic wetness index (TWI), and cumulative moisture up to the root-limiting layer, assessed by means of a detailed combined geophysical survey. Water content, redox processes and temperature were monitored, as well as yield, phenological phases, and chemical analysis of grapes. The isotopic ratio δ13C was measured in the wine ethanol upon harvesting to evaluate the degree of stress suffered by vines. The grapes in each plot were collected for wine making in small barrels. The wines obtained were analysed and submitted to a blind organoleptic testing. The results demonstrated that the combined application of the two hydropedological models can be used for the prediction of the moisture status of soils cultivated with grape during summertime in a Mediterranean climate. As correctly foreseen by the models, the amount of mean daily transpirable soil water (TSW) during the growing season differed
The big seven model of personality and its relevance to personality pathology.
Simms, Leonard J
2007-02-01
Proponents of the Big Seven model of personality have suggested that Positive Valence (PV) and Negative Valence (NV) are independent of the Big Five personality dimensions and may be particularly relevant to personality disorder. These hypotheses were tested with 403 undergraduates who completed a Big Seven measure and markers of the Big Five and personality pathology. Results revealed that PV and NV incrementally predicted personality pathology dimensions beyond those predicted by multiple markers of the Big Five. However, factor analyses suggested that PV and NV might be best understood as specific, maladaptive aspects of positive emotionality and low agreeableness, respectively, as opposed to independent factors of personality. Implications for the description of normal and abnormal personality are discussed.
Hense, Inga; Stemmler, Irene; Sonntag, Sebastian
2017-01-01
The current generation of marine biogeochemical modules in Earth system models (ESMs) considers mainly the effect of marine biota on the carbon cycle. We propose to also implement other biologically driven mechanisms in ESMs so that more climate-relevant feedbacks are captured. We classify these mechanisms into three categories according to their functional role in the Earth system: (1) biogeochemical pumps, which affect carbon cycling; (2) biological gas and particle shuttles, which affect atmospheric composition; and (3) biogeophysical mechanisms, which affect the thermal, optical, and mechanical properties of the ocean. To resolve mechanisms from all three classes, we find it sufficient to include five functional groups: bulk phyto- and zooplankton, calcifiers, and coastal gas and surface mat producers. We strongly suggest accounting for a greater diversity of mechanisms in future ESMs to improve the quality of climate projections.
Assigning probability distributions to input parameters of performance assessment models
Mishra, Srikanta [INTERA Inc., Austin, TX (United States)
2002-02-01
This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness-of-fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
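For approach (a), fitting a continuous distribution by maximum likelihood has a closed form in the normal case. The sample below is synthetic, standing in for performance-assessment input data:

```python
import math
import random

random.seed(2)

# Synthetic sample standing in for an input-parameter dataset
# (drawn here from a normal with mu = 10, sigma = 2).
data = [random.gauss(10.0, 2.0) for _ in range(500)]

# Maximum-likelihood estimates for a normal model (closed form):
# mu_hat is the sample mean; sigma_hat uses the n (not n - 1) divisor.
n = len(data)
mu_hat = sum(data) / n
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in data) / n)
```

For the normal distribution, the method of moments yields the same estimates; the two approaches diverge for the skewed families (e.g. lognormal, Weibull) more typical of performance-assessment inputs, which is where numerical likelihood maximization or probability plotting earns its keep.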
Grandvuinet, Anne Sophie; Vestergaard, Henrik Tang; Rapin, Nicolas; Steffansen, Bente
2012-11-01
This review provides an overview of intestinal human transporters for organic anions and stresses the need for standardization of the various in-vitro methods presently employed in drug-drug interaction (DDI) investigations. Current knowledge on the intestinal expression of the apical sodium-dependent bile acid transporter (ASBT), the breast cancer resistance protein (BCRP), the monocarboxylate transporters (MCT) 1, MCT3-5, the multidrug resistance associated proteins (MRP) 1-6, the organic anion transporting polypeptides (OATP) 2B1, 1A2, 3A1 and 4A1, and the organic solute transporter α/β (OSTα/β) has been covered along with an overview of their substrates and inhibitors. Furthermore, the many challenges in predicting clinically relevant DDIs from in-vitro studies have been discussed with focus on intestinal transporters and the various methods for deriving in-vitro parameters for transporters (K(m)/K(i)/IC50, efflux ratio). The applicability of using a cut-off value (estimated based on the intestinal drug concentration divided by the K(i) or IC50) has also been considered. A re-evaluation of the current approaches for the prediction of DDIs is necessary when considering the involvement of other transporters than P-glycoprotein. Moreover, the interplay between various processes that a drug is subject to in-vivo, such as translocation by several transporters and dissolution, should be considered. © 2012 The Authors. JPP © 2012 Royal Pharmaceutical Society.
Wan, H.; Rasch, P. J.; Zhang, K.; Qian, Y.; Yan, H.; Zhao, C.
2014-09-01
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of high
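The statistical logic behind the short-ensemble strategy is that the spread of an ensemble-mean estimate shrinks roughly as 1/sqrt(N) with ensemble size, so many cheap independent members can match one long serial run. The toy below is not the CAM5 workflow; it uses a hypothetical noisy daily diagnostic purely to show that scaling.

```python
import math
import random

random.seed(42)

def short_run_mean(days=3, noise=1.0, signal=5.0):
    """Hypothetical stand-in for one short simulation: run-mean of a daily
    diagnostic equal to a true signal plus independent weather noise."""
    return signal + sum(random.gauss(0.0, noise) for _ in range(days)) / days

def ensemble_mean(n_members):
    # Members are independent, so they could run simultaneously in practice.
    return sum(short_run_mean() for _ in range(n_members)) / n_members

def spread(n_members, trials=400):
    """Empirical standard deviation of the ensemble-mean estimator."""
    vals = [ensemble_mean(n_members) for _ in range(trials)]
    m = sum(vals) / len(vals)
    return math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))

s5, s50 = spread(5), spread(50)
print(round(s5, 3), round(s50, 3))  # second value roughly sqrt(10)x smaller
```

Detecting a parameter sensitivity then reduces to asking whether the signal exceeds this ensemble-mean spread, which is how 20-50 members over 3 days can stand in for a multi-year integration.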
Abu Husain, Nurulakmar; Haddad Khodaparast, Hamed; Ouyang, Huajiang
2012-10-01
Parameterisation in stochastic problems is a major issue in real applications. In addition, complexity of test structures (for example, those assembled through laser spot welds) is another challenge. The objective of this paper is two-fold: (1) stochastic uncertainty in two sets of different structures (i.e., simple flat plates, and more complicated formed structures) is investigated to observe how updating can be adequately performed using the perturbation method, and (2) stochastic uncertainty in a set of welded structures is studied by using two parameter weighting matrix approaches. Different combinations of parameters are explored in the first part; it is found that geometrical features alone cannot bring the predicted outputs into convergence with their measured counterparts, hence material properties must be included in the updating process. In the second part, statistical properties of experimental data are considered and updating parameters are treated as random variables. Two weighting approaches are compared; results from one of the approaches are in very good agreement with the experimental data and excellent correlation between the predicted and measured covariances of the outputs is achieved. It is concluded that proper selection of parameters in solving stochastic updating problems is crucial. Furthermore, appropriate weighting must be used in order to obtain excellent convergence between the predicted mean natural frequencies and their measured data.
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu [Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States)
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
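In the VHS/VSS family, the temperature exponent ω also governs the model's viscosity-temperature law (viscosity varying roughly as T^ω), so one common calibration route is a least-squares fit of ω against reference transport data. The sketch below uses synthetic power-law "reference" viscosities standing in for the ab initio collision-integral results; the numerical values are illustrative, not the paper's recommended parameters.

```python
import math

# Synthetic reference viscosity data following mu = mu_ref * (T/T_ref)**omega,
# the temperature dependence assumed by the VHS transport closure.
T_ref, mu_ref, omega_true = 273.0, 1.7e-5, 0.74
temps = [1000.0 + 1000.0 * i for i in range(20)]        # 1000-20000 K
mus = [mu_ref * (T / T_ref) ** omega_true for T in temps]

# Calibrate omega by least squares in log-log space:
#   log(mu/mu_ref) = omega * log(T/T_ref)
xs = [math.log(T / T_ref) for T in temps]
ys = [math.log(m / mu_ref) for m in mus]
omega_fit = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(round(omega_fit, 3))
```

With real collision-integral data the residuals would be nonzero, and the fit quality over the 1000-20 000 K window is exactly what the reported 6% agreement quantifies.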
House thermal model parameter estimation method for Model Predictive Control applications
van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria
2015-01-01
In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results
DeAyala, R. J.; Koch, William R.
A nominal response model-based computerized adaptive testing procedure (nominal CAT) was implemented using simulated data. Ability estimates from the nominal CAT were compared to those from a CAT based upon the three-parameter logistic model (3PL CAT). Furthermore, estimates from both CAT procedures were compared with the known true abilities used…
Polesel, Fabio; Lehnberg, Kai; Dott, Wolfgang
In a previous study [1], a daily systematic reduction of ciprofloxacin removal in a full-scale WWTP (Bekkelaget, Norway) was associated with deteriorated sorption. Therefore, in this study we further investigated the sorption of ciprofloxacin onto activated sludge at laboratory and full scale. Targeted batch...... of ciprofloxacin in a full-scale activated sludge system. Sorption was described by linear kinetics and, in an extended version of ASM-X, using a Freundlich-based submodel. In the latter case, Freundlich parameter values estimated from the batch experiments were used for model calibration. The prediction accuracy...... was statistically evaluated in the two cases by comparing the model output with measured data. Batch experiments showed that maximum sorption capacity occurred at pH=7.4, corresponding to the isoelectric point of ciprofloxacin. A pH increase resulted in a significant reduction of sorption capacity as compared......
Depre, Christophe E-mail: cdepre@heart.med.uth.tmc.edu
1998-11-01
Isolated heart preparations are used to study physiological and metabolic parameters of the heart independently of its environment. Several preparations of isolated perfused heart are currently used, mainly the retrograde perfusion system and the working heart model. Both models allow investigations of the metabolic regulation of the heart in various physiological conditions (changes in workload, hormonal influences, substrate competition). These systems may also reproduce different pathological conditions, such as ischemia, reperfusion and hypoxia. Quantitation of metabolic activity can be performed with specific radioactive tracers. Finally, the effects of various drugs on cardiac performance and resistance to ischemia can be studied as well. Heart perfusion has also provided efficient methods to determine the tracer/tracee relation for radioisotopic analogues used with Positron Emission Tomography.
Rock thermal conductivity as key parameter for geothermal numerical models
Di Sipio, Eloisa; Chiesa, Sergio; Destro, Elisa; Galgaro, Antonio; Giaretta, Aurelio; Gola, Gianluca; Manzella, Adele
2013-04-01
Geothermal energy applications are undergoing a rapid development. However, there are still several challenges in the successful exploitation of geothermal energy resources. In particular, a special effort is required to characterize the thermal properties of the ground along with the implementation of efficient thermal energy transfer technologies. This paper focuses on understanding the quantitative contribution that geosciences can receive from the characterization of rock thermal conductivity. The thermal conductivity of materials is one of the main input parameters in geothermal modeling since it directly controls the steady state temperature field. An evaluation of this thermal property is required in several fields, such as Thermo-Hydro-Mechanical multiphysics analysis of frozen soils, designing ground-source heat pump plants, modeling the structure of deep geothermal reservoirs, and assessing the geothermal potential of subsoil. The aim of this study is to provide original rock thermal conductivity values useful for the evaluation of both low- and high-enthalpy resources at regional or local scale. To overcome the existing lack of thermal conductivity data of sedimentary, igneous and metamorphic rocks, a series of laboratory measurements has been performed on several samples, collected in outcrop, representative of the main lithologies of the regions included in the VIGOR Project (southern Italy). Thermal property tests were carried out both in dry and wet conditions, using a C-Therm TCi device operating according to the Modified Transient Plane Source method. Measurements were made at standard laboratory conditions on samples both water saturated and dehydrated with a fan-forced drying oven at 70 °C for 24 h, to preserve the mineral assemblage and prevent changes in effective porosity. Subsequently, the samples were stored in an air-conditioned room while bulk density, solid volume and porosity were determined. The measured thermal conductivity
Geomagnetically induced currents in Uruguay: Sensitivity to modelling parameters
Caraballo, R.
2016-11-01
According to traditional wisdom, geomagnetically induced currents (GIC) should occur rarely at mid-to-low latitudes, but in the last decades a growing number of reports have addressed their effects on high-voltage (HV) power grids at mid-to-low latitudes. The growing trend to interconnect national power grids to meet regional integration objectives may lead to an increase in the size of the present energy transmission networks, forming a sort of super-grid at continental scale. Such a broad and heterogeneous super-grid can be exposed to the effects of large GIC if appropriate mitigation actions are not taken. In the present study, we report GIC estimates for the Uruguayan HV power grid during severe magnetic storm conditions. The GIC intensities are strongly dependent on the rate of variation of the geomagnetic field, the conductivity of the ground, and the power grid's resistances and configuration. Calculated GIC are analysed as functions of these parameters. The results show reasonable agreement with data measured in Brazil and Argentina, confirming the reliability of the model. The expansion of the grid leads to a strong increase in GIC intensities in almost all substations. The power grid's response to changes in ground conductivity and resistances shows similar results, though to a lesser extent. This leads us to consider GIC a non-negligible phenomenon in South America. Consequently, GIC must be taken into account in mid-to-low latitude power grids as well.
Inducible mouse models illuminate parameters influencing epigenetic inheritance.
Wan, Mimi; Gu, Honggang; Wang, Jingxue; Huang, Haichang; Zhao, Jiugang; Kaundal, Ravinder K; Yu, Ming; Kushwaha, Ritu; Chaiyachati, Barbara H; Deerhake, Elizabeth; Chi, Tian
2013-02-01
Environmental factors can stably perturb the epigenome of exposed individuals and even that of their offspring, but the pleiotropic effects of these factors have posed a challenge for understanding the determinants of mitotic or transgenerational inheritance of the epigenetic perturbation. To tackle this problem, we manipulated the epigenetic states of various target genes using a tetracycline-dependent transcription factor. Remarkably, transient manipulation at appropriate times during embryogenesis led to aberrant epigenetic modifications in the ensuing adults regardless of the modification patterns, target gene sequences or locations, and despite lineage-specific epigenetic programming that could reverse the epigenetic perturbation, thus revealing extraordinary malleability of the fetal epigenome, which has implications for 'metastable epialleles'. However, strong transgenerational inheritance of these perturbations was observed only at transgenes integrated at the Col1a1 locus, where both activating and repressive chromatin modifications were heritable for multiple generations; such a locus is unprecedented. Thus, in our inducible animal models, mitotic inheritance of epigenetic perturbation seems critically dependent on the timing of the perturbation, whereas transgenerational inheritance additionally depends on the location of the perturbation. In contrast, other parameters examined, particularly the chromatin modification pattern and DNA sequence, appear irrelevant.
Eroboghene H Otete
INTRODUCTION: Mathematical modelling of Clostridium difficile infection dynamics could contribute to the optimisation of strategies for its prevention and control. The objective of this systematic review was to summarise the available literature specifically identifying the quantitative parameters required for a compartmental mathematical model of Clostridium difficile transmission. METHODS: Six electronic healthcare databases were searched and all screening, data extraction and study quality assessments were undertaken in duplicate. Results were synthesised using a narrative approach. RESULTS: Fifty-four studies met the inclusion criteria. Reproduction numbers for hospital based epidemics were described in two studies with a range from 0.55 to 7. Two studies provided consistent data on incubation periods. For 62% of cases, symptoms occurred in less than 4 weeks (3-28 days) after infection. Evidence on contact patterns was identified in four studies but with limited data reported for populating a mathematical model. Two studies, including one without clinically apparent donor-recipient pairs, provided information on serial intervals for household or ward contacts, showing transmission intervals of <1 week in ward-based contacts compared to up to 2 months for household contacts. Eight studies reported recovery rates of between 75%-100% for patients who had been treated with either metronidazole or vancomycin. Forty-nine studies gave recurrence rates of between 3% and 49% but were limited by varying definitions of recurrence. No study was found which specifically reported force of infection or net reproduction numbers. CONCLUSIONS: There is currently scant literature overtly citing estimates of the parameters required to inform the quantitative modelling of Clostridium difficile transmission. Further high quality studies to investigate transmission parameters are required, including through review of published epidemiological studies where these
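To see how the reviewed parameters (reproduction number, recovery rate) would feed a compartmental model, consider a minimal SIR-type sketch. The review does not fix a model structure; the compartments, population size, and transmission term below are illustrative assumptions, with the transmission coefficient chosen so that R0 = beta/gamma matches values from the review's reported range.

```python
# Minimal compartmental sketch (hypothetical SIR-type structure; the 14-day
# mean infectious period and population size are illustrative assumptions).
def simulate(r0, gamma=1.0 / 14.0, days=120, n=1000, i0=5):
    beta = r0 * gamma                 # R0 = beta / gamma
    s, i, r = float(n - i0), float(i0), 0.0
    peak = i
    for _ in range(days):             # daily Euler steps
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, r

peak_hi, _ = simulate(r0=2.0)         # mid-range hospital estimate
peak_lo, _ = simulate(r0=0.55)        # lower bound reported in the review
print(round(peak_hi), round(peak_lo))
```

The qualitative contrast is the point: with R0 = 0.55 the outbreak decays from the start, while R0 = 2 produces a substantial epidemic peak, which is why the review's wide reproduction-number range (0.55-7) matters so much for model predictions.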
Parameter and state estimator for state space models.
Ding, Ruifeng; Zhuang, Linfan
2014-01-01
This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
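The eliminate-the-state-then-least-squares idea can be sketched on a first-order system. This is not the paper's algorithm or example, just a toy instance under assumed values: with x[t+1] = a*x[t] + b*u[t] and y[t] = x[t] + noise, eliminating the state gives y[t+1] ≈ a*y[t] + b*u[t], a linear regression in (a, b), after which the state is reconstructed from the estimates and the input data.

```python
import random

random.seed(7)

# First-order canonical state-space system (assumed toy values):
#   x[t+1] = a*x[t] + b*u[t],   y[t] = x[t] + measurement noise
a_true, b_true, N = 0.8, 0.5, 500
u = [random.uniform(-1, 1) for _ in range(N)]
x, y = 0.0, []
for t in range(N):
    y.append(x + random.gauss(0.0, 0.01))
    x = a_true * x + b_true * u[t]

# State eliminated: y[t+1] ~= a*y[t] + b*u[t].  Solve the 2x2 normal
# equations of the least-squares problem for (a, b).
syy = sum(y[t] * y[t] for t in range(N - 1))
syu = sum(y[t] * u[t] for t in range(N - 1))
suu = sum(u[t] * u[t] for t in range(N - 1))
ry = sum(y[t + 1] * y[t] for t in range(N - 1))
ru = sum(y[t + 1] * u[t] for t in range(N - 1))
det = syy * suu - syu * syu
a_hat = (ry * suu - ru * syu) / det
b_hat = (syy * ru - syu * ry) / det

# Recompute the state trajectory from the estimates and the inputs.
x_hat = [0.0]
for t in range(N - 1):
    x_hat.append(a_hat * x_hat[-1] + b_hat * u[t])
print(round(a_hat, 2), round(b_hat, 2))
```

For higher-order canonical forms the same elimination yields a regression on several lagged inputs and outputs, and the paper's martingale argument is what guarantees these estimates converge despite the noise entering the regressors.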
Parameter Identification on Lumped Parameters of the Hydraulic Engine Mount Model
Li Qian
2016-01-01
Hydraulic Engine Mounts (HEMs) are important vibration isolation components with a compound structure in the vehicle powertrain mounting system. They exhibit large damping and high dynamic stiffness in the high-frequency region, and small damping and low dynamic stiffness in the low-frequency region, which meets the requirements of the vehicle powertrain mounting system well. Identifying the lumped parameters of the HEM is not only necessary for analyzing and calculating its dynamic performance, but also provides a theoretical basis for future performance and structure optimization of the product. A parameter identification method based on coupled fluid-structure interaction (FSI) and finite element analysis (FEA) was established in this study to identify the equivalent piston area of the rubber spring, the volume stiffness of the upper chamber, and the inertia and damping coefficients of the liquid through the inertia track. The simulated dynamic characteristic curves of the HEM with the identified parameters agree well with the measured dynamic characteristic curves.
Carcinogenically relevant split dose repair increased with age in rat skin model.
Burns, Fredric; Tang, Moon-Shong Eric; Wu, Feng; Uddin, Ahmed
2012-07-01
These experiments utilize cancer induction to evaluate cancer-relevant repair during the interval between dose fractions. Low-LET electron radiation (LET ~ 0.34 keV/μm) was utilized in experiments that involved exposing rat dorsal skin to 2 equal 8 Gy dose fractions separated by various intervals from 0.25 h to 24 h. Cancer onset was established for 80 weeks after the exposures and only histologically verified cancers were included in the analysis. This experiment involved a total of 540 rats and 880 induced cancers. In the youngest rats (irradiated at 28 days of age) the cancer yield declined with a halftime of approximately 3.5 hrs. In 113-day-old rats the cancer yield halftime was shortened to 1.3 hrs. In the oldest rats (182 days of age), the halftime could not be established quantitatively, because it was less than the shortest interval (15 min) utilized in the protocol (best estimate ~5 min). In the oldest rats the cancer yields for all fractionated exposures dropped essentially to the expected level of 2 single fractions, below which theoretically no further reduction is possible. The follow-up times for obtaining cancer yields were the same for all exposure groups in spite of the differing ages at exposure. These results indicate that repair of carcinogenically-relevant damage accelerates with age of the rat. No information is available on the possible mechanistic basis for this finding, although the model might be useful for delineating which of the many postulated split dose repair pathways is the correct one. The finding indicates that older rats should be less susceptible to the carcinogenic action of single doses of low-LET radiation in comparison to younger rats, which has been verified in separate studies.
Immune Memory and Exhaustion: Clinically Relevant Lessons from the LCMV Model.
Zehn, D; Wherry, E J
2015-01-01
The development of dysfunctional or exhausted T cells is characteristic of immune responses to chronic viral infections and cancer. Exhausted T cells are defined by reduced effector function, sustained upregulation of multiple inhibitory receptors, an altered transcriptional program and perturbations of normal memory development and homeostasis. This review focuses on (a) illustrating milestone discoveries that led to our present understanding of T cell exhaustion, (b) summarizing recent developments in the field, and (c) identifying new challenges for translational research. Exhausted T cells are now recognized as key therapeutic targets in human infections and cancer. Much of our knowledge of the clinically relevant process of exhaustion derives from studies in the mouse model of Lymphocytic choriomeningitis virus (LCMV) infection. Studies using this model have formed the foundation for our understanding of human T cell memory and exhaustion. We will use this example to discuss recent advances in our understanding of T cell exhaustion and illustrate the value of integrated mouse and human studies and will emphasize the benefits of bi-directional mouse-to-human and human-to-mouse research approaches.
Cheryl M. McCormick
2017-02-01
Elevations in glucocorticoids that result from environmental stressors can have programming effects on brain structure and function when the exposure occurs during sensitive periods that involve heightened neural development. In recent years, adolescence has gained increasing attention as another sensitive period of development, a period in which pubertal transitions may increase the vulnerability to stressors. There are similarities in physical and behavioural development between humans and rats, and rats have been used effectively as an animal model of adolescence and the unique plasticity of this period of ontogeny. This review focuses on benefits and challenges of rats as a model for translational research on hypothalamic-pituitary-adrenal (HPA) function and stressors in adolescence, highlighting important parallels and contrasts between adolescent rats and humans, and we review the main stress procedures that are used in investigating HPA stress responses and their consequences in adolescence in rats. We conclude that a greater focus on timing of puberty as a factor in research in adolescent rats may increase the translational relevance of the findings.
Is tail vein injection a relevant breast cancer lung metastasis model?
Rashid, Omar M.; Nagahashi, Masayuki; Ramachandran, Suburamaniam; Dumur, Catherine I.; Schaum, Julia C.; Yamada, Akimitsu; Aoyagi, Tomoyoshi; Milstien, Sheldon; Spiegel, Sarah
2013-01-01
Background The two most commonly used animal models for studying breast cancer lung metastasis are: lung metastasis after orthotopic implantation of cells into the mammary gland, and lung implantations produced after tail vein (TV) injection of cells. Tail vein injection can produce lung lesions faster, but little has been studied regarding the differences between these tumors; thus, we examined their morphology and gene expression profiles. Methods Syngeneic murine mammary adenocarcinoma 4T1-luc2 cells were implanted either subcutaneously (Sq), orthotopically (OS), or injected via TV in Balb/c mice. Genome-wide microarray analyses of cultured 4T1 cells, Sq tumor, OS tumor, lung metastases after OS (LMet), and lung tumors after TV (TVt) were performed 10 days after implantation. Results Bioluminescence analysis demonstrated different morphology of metastases between LMet and TVt, confirmed by histology. Gene expression profiles of cells were significantly different from tumors, OS, Sq, TVt or LMet (10,350 probe sets; FDR ≤ 1%; >1.5-fold change; P < 0.01), with no significant difference between TVt and LMet. Conclusions There were significant differences between the gene profiles of cells in culture and OS versus LMet, but there were no differences between LMet versus TVt. Therefore, the lung tumor generated by TVt can be considered genetically similar to those produced after OS, and thus TVt is a relevant model for breast cancer lung metastasis. PMID:23991292
A clinically relevant model of perinatal global ischemic brain damage in rats.
Yang, Ting; Zhuang, Lei; Terrando, Niccolò; Wu, Xinmin; Jonhson, Mark R; Maze, Mervyn; Ma, Daqing
2011-04-06
We have designed a clinically relevant model of perinatal asphyxia providing intrapartum hypoxia in rats. On gestation day 22 SD rats were anesthetized and the uterine horns were exteriorized and placed in a water bath at 37 °C for up to 20 min. After this, pups were delivered from the uterus and manually stimulated to initiate breathing in an incubator at 37 °C for 1 h in air. Brains were harvested and stained with cresyl violet, caspase-3, and TUNEL to detect morphological and apoptotic changes on postnatal days (PND) 1, 3, and 7. Separate cohorts were maintained until PND 50 and tested for learning and memory using the Morris water maze (WM). Survival rate was decreased with longer hypoxic time, and 100% mortality was noted when hypoxia time was beyond 18 min. Apoptosis was increased with the duration of hypoxia with neuronal loss and cell shrinkage in the CA1 of hippocampus. The time taken for the juveniles to locate the hidden platform during WM was increased in animals subjected to hypoxia. These data demonstrate that perinatal ischemic injury leads to neuronal death in the hippocampus and long-lasting cognitive dysfunction. This model mimics hypoxic ischemic encephalopathy in humans and may be appropriate for investigating therapeutic interventions. Copyright © 2011 Elsevier B.V. All rights reserved.
Quantifying the relevance of adaptive thermal comfort models in moderate thermal climate zones
Hoof, Joost van; Hensen, Jan L.M. [Faculty of Architecture, Building and Planning, Technische Universiteit Eindhoven, Vertigo 6.18, P.O. Box 513, 5600 MB Eindhoven (Netherlands)
2007-01-15
Standards governing thermal comfort evaluation are on a constant cycle of revision and public review. One of the main topics discussed in the latest round was the introduction of an adaptive thermal comfort model, which now forms an optional part of ASHRAE Standard 55. Adaptive thermal comfort guidelines are also emerging at the national level, for instance in the Netherlands. This paper discusses two implementations of the adaptive comfort model in terms of usability and energy use for moderate maritime climate zones, by means of a literature study, a case study comprising temperature measurements, and building performance simulation. It is concluded that for moderate climate zones the adaptive model is applicable only during summer months and can reduce energy use in naturally conditioned buildings; overall, however, the adaptive thermal comfort model has very limited application potential for such climates. Additionally, we suggest a temperature parameter with a gradual course to replace the mean monthly outdoor air temperature, to avoid step changes in optimum comfort temperatures. (author)
Estimation of Model and Parameter Uncertainty For A Distributed Rainfall-runoff Model
Engeland, K.
The distributed rainfall-runoff model Ecomag is applied as a regional model for nine catchments in the NOPEX area in Sweden. Ecomag calculates streamflow at a daily time resolution. The posterior distribution of the model parameters is conditioned on the observed streamflow in all nine catchments and calculated using Bayesian statistics. The distribution is estimated by Markov chain Monte Carlo (MCMC). The Bayesian method requires a definition of the likelihood of the parameters, and two alternative formulations are used. The first is a subjectively chosen objective function describing the goodness of fit between simulated and observed streamflow, as used in the GLUE framework. The second is a more statistically rigorous likelihood function describing the simulation errors, where the simulation error is defined as the difference between log-transformed observed and simulated streamflows. A statistical model for the simulation errors is constructed; some parameters depend on the catchment, while others depend on climate. The statistical and hydrological parameters are estimated simultaneously. Confidence intervals for the simulated streamflow, due to the uncertainty of the Ecomag parameters, are compared for the two likelihood functions, and confidence intervals based on the statistical model for the simulation errors are also calculated. The results indicate that the parameter uncertainty depends on the formulation of the likelihood function: the subjectively chosen likelihood function gives relatively wide confidence intervals, whereas the 'statistical' likelihood function gives narrower ones. The statistical model for the simulation errors indicates that the structural errors of the model are at least as important as the parameter uncertainty.
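As a toy illustration of the Bayesian conditioning described above (this is not the Ecomag model or the NOPEX data; the one-parameter "model", its forcing, and the noise level are invented for the sketch), a random-walk Metropolis sampler with a Gaussian likelihood on log-transformed flows might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hydrological model": simulated flow as a function of one parameter k
# (a stand-in for Ecomag; real distributed models have many parameters).
def simulate(k, forcing):
    return k * forcing

forcing = rng.uniform(1.0, 5.0, size=200)
true_k, sigma = 0.8, 0.1
# Synthetic observations with multiplicative (log-normal) error
observed = simulate(true_k, forcing) * np.exp(rng.normal(0.0, sigma, size=200))

def log_likelihood(k):
    if k <= 0:
        return -np.inf
    # Simulation error defined on log-transformed flows, assumed Gaussian
    resid = np.log(observed) - np.log(simulate(k, forcing))
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis sampler for the posterior of k (flat prior on k > 0)
def metropolis(n_steps=5000, step=0.05):
    k, ll = 1.0, log_likelihood(1.0)
    samples = []
    for _ in range(n_steps):
        prop = k + rng.normal(0.0, step)
        ll_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            k, ll = prop, ll_prop
        samples.append(k)
    return np.array(samples)

posterior = metropolis()
# Discard burn-in; the retained samples approximate the posterior of k
posterior_mean = posterior[1000:].mean()
```

Credible intervals for simulated streamflow then follow by running the model over the retained parameter samples, which is the source of the parameter-uncertainty intervals compared in the abstract.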
None
2003-01-01
Based on phase diagrams and the mass action law, in combination with the coexistence theory of metallic melt structure, a model for calculating the mass action concentration in Mg-Al, Sr-Al, and Ba-Al melts was built, and the corresponding thermodynamic parameters were determined. The agreement between calculated and measured results shows that the model and the determined thermodynamic parameters reflect the structural characteristics of the relevant melts. The fact that thermodynamic parameters taken from the literature do not yield values that agree with the measured results, however, may be due to these parameters not conforming to the real chemical reactions in metallic melts.
A Note on the Item Information Function of the Four-Parameter Logistic Model
Magis, David
2013-01-01
This article focuses on the four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…
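The 4PL response function and the form of its item information commonly given in the IRT literature can be sketched as follows (the information formula and the illustrative parameter values are assumptions here, not taken from the article, and the maximizing ability is located numerically rather than by Lord's closed-form argument):

```python
import numpy as np

def p4pl(theta, a, b, c, d):
    """4PL response probability: discrimination a, difficulty b,
    lower asymptote c (guessing), upper asymptote d (possibly < 1)."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

def info4pl(theta, a, b, c, d):
    """Item information for the 4PL model (commonly cited form;
    reduces to the 3PL information when d = 1)."""
    p = p4pl(theta, a, b, c, d)
    return a**2 * (p - c)**2 * (d - p)**2 / ((d - c)**2 * p * (1.0 - p))

# Locate the ability level maximizing information by a dense grid search
theta = np.linspace(-4.0, 4.0, 8001)
a, b, c, d = 1.2, 0.5, 0.2, 0.95
theta_max = theta[np.argmax(info4pl(theta, a, b, c, d))]
```

With c = 0 and d = 1 the expression collapses to the 2PL information a²P(1-P), whose maximum sits exactly at θ = b, which is a quick sanity check on the implementation.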
Sharma, Sanjay
2017-01-01
This book provides a detailed overview of various parameters/factors involved in inventory analysis. It especially focuses on the assessment and modeling of basic inventory parameters, namely demand, procurement cost, cycle time, ordering cost, inventory carrying cost, inventory stock, stock out level, and stock out cost. In the context of economic lot size, it provides equations related to the optimum values. It also discusses why the optimum lot size and optimum total relevant cost are considered to be key decision variables, and uses numerous examples to explain each of these inventory parameters separately. Lastly, it provides detailed information on parameter estimation for different sectors/products. Written in a simple and lucid style, it offers a valuable resource for a broad readership, especially Master of Business Administration (MBA) students.
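The optimum lot size mentioned above is the classic economic order quantity; a minimal sketch of the Wilson formula and the resulting total relevant cost (the numeric values are illustrative, not from the book) is:

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Economic order quantity (Wilson formula): Q* = sqrt(2DS/H),
    with D = annual demand, S = cost per order, H = carrying cost/unit/year."""
    return math.sqrt(2.0 * demand * order_cost / holding_cost)

def total_relevant_cost(demand, order_cost, holding_cost):
    """At Q*, ordering and carrying costs are equal, so TRC = sqrt(2DSH)."""
    return math.sqrt(2.0 * demand * order_cost * holding_cost)

# Example: annual demand 1200 units, $50 per order, $6/unit/year carrying cost
q_star = eoq(1200, 50.0, 6.0)               # optimum lot size
trc = total_relevant_cost(1200, 50.0, 6.0)  # optimum total relevant cost
```

The square-root form explains why Q* and TRC are treated as the key decision variables: both respond only weakly to estimation errors in demand or cost parameters.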
Multi-Variable Model-Based Parameter Estimation Model for Antenna Radiation Pattern Prediction
Deshpande, Manohar D.; Cravey, Robin L.
2002-01-01
A new procedure is presented for developing a multi-variable model-based parameter estimation (MBPE) model to predict the far-field intensity of an antenna. By performing the MBPE model development procedure on one variable at a time, the present method requires the solution of smaller matrices. Its utility is demonstrated by determining the far-field intensity due to a dipole antenna over a frequency range of 100-1000 MHz and an elevation angle range of 0-90 degrees.
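MBPE typically fits a rational function to sparse samples of a response so that dense sweeps come from the fitted model rather than repeated full-wave solves. A single-variable sketch of that idea (a generic linearized rational least-squares fit; the degrees, sample points, and stand-in response are assumptions, not the paper's antenna data) is:

```python
import numpy as np

def fit_rational(x, y, num_deg=2, den_deg=2):
    """Least-squares fit of y ~ P(x)/Q(x) with Q normalized so q0 = 1.
    Linearized form: P(x_i) - y_i*(Q(x_i) - 1) = y_i, solved for the
    numerator coefficients p0..p_num and denominator q1..q_den."""
    cols = [x**k for k in range(num_deg + 1)]            # p0..p_num
    cols += [-y * x**k for k in range(1, den_deg + 1)]   # q1..q_den
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    p = coef[:num_deg + 1]
    q = np.concatenate(([1.0], coef[num_deg + 1:]))
    return p, q

def eval_rational(p, q, x):
    # np.polyval wants highest-degree coefficient first, hence the reversal
    return np.polyval(p[::-1], x) / np.polyval(q[::-1], x)

# Sample a smooth stand-in frequency response at a few points, then the
# fitted rational model interpolates it anywhere in the band.
freq = np.linspace(0.1, 1.0, 9)
resp = (1.0 + 2.0 * freq) / (1.0 + 0.5 * freq**2)
p, q = fit_rational(freq, resp)
```

Handling one variable at a time, as the abstract describes, keeps each such system small; a full two-variable (frequency × elevation) fit would otherwise couple all coefficients in one large matrix.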
Alexandra L Whittaker
Chemotherapy-induced intestinal mucositis is characterized by pain and a pro-inflammatory tissue response. Rat models are frequently used in mucositis investigations, yet little is known about the presence of pain in these animals, the ability of analgesics to ameliorate the condition, or the effect that analgesic administration may have on study outcomes. This study investigated different classes of analgesics with the aim of determining their analgesic effects and their impact on research outcomes of interest in a rat model of mucositis. Female DA rats were allocated to 8 groups, including saline and chemotherapy controls (n = 8). Analgesics included opioid derivatives (buprenorphine, 0.05 mg/kg, and tramadol, 12.5 mg/kg) and an NSAID (carprofen, 15 mg/kg), in combination with either saline or 5-fluorouracil (5-FU; 150 mg/kg). Research outcome measures included daily clinical parameters, pain score, and gut histology. A myeloperoxidase assay was performed to determine gut inflammation. At the dosages employed, all agents had an analgesic effect based on behavioural pain scores. Jejunal myeloperoxidase activity was significantly reduced by buprenorphine and tramadol in comparison to 5-FU control animals (53%, p = 0.0004, and 58%, p = 0.0001, respectively). Carprofen had no ameliorating effect on myeloperoxidase levels. None of the agents reduced the histological damage caused by 5-FU administration, although tramadol tended to increase villus length even when administered to healthy animals. These data provide evidence that carprofen offers potential as an analgesic in this animal model, owing to its pain-relieving efficacy and minimal effect on measured parameters. This study also supports further investigation into the mechanism and utility of opioid agents in the treatment of chemotherapy-induced mucositis.
Kurutz, U.; Friedl, R.; Fantz, U.
2017-07-01
Caesium (Cs) is applied in high-power negative hydrogen ion sources to reduce the work function of a converter surface and thus enable efficient negative ion surface formation. Inherent drawbacks of using this reactive alkali metal motivate the search for Cs-free alternative materials for neutral beam injection systems in fusion research. In view of a future DEMOnstration power plant, a suitable material should provide a high negative ion formation efficiency and comply with the RAMI issues of the system: reliability, availability, maintainability, inspectability. Promising candidates, such as low work function materials (molybdenum doped with lanthanum (MoLa) and LaB6) as well as different non-doped and boron-doped diamond samples, were investigated in this context at identical, ion source relevant parameters at the laboratory experiment HOMER. Negative ion densities were measured above the samples by means of laser photodetachment and compared with two reference cases: pure negative ion volume formation, with negative ion densities of about 1×10^15 m^-3, and H- surface production using an in situ caesiated stainless steel sample, which yields 2.5 times higher densities. Compared to pure volume production, none of the diamond samples exhibited a measurable increase in H- density, while showing clear indications of plasma-induced erosion. In contrast, both MoLa and LaB6 produced systematically higher densities (MoLa: ×1.60; LaB6: ×1.43). The difference from caesiation can be attributed to the higher work functions of MoLa and LaB6, which are expected to be about 3 eV for both, compared to 2.1 eV for a caesiated surface.
Meyer, Jerrold S; Hamel, Amanda F
2014-01-01
Stressful life events have been linked to the onset of severe psychopathology and endocrine dysfunction in many patients. Moreover, vulnerability to the later development of such disorders can be increased by stress or adversity during development (e.g., childhood neglect, abuse, or trauma). This review discusses the methodological features and results of various models of stress in nonhuman primates in the context of their potential relevance for human psychopathology and endocrine dysfunction, particularly mood disorders and dysregulation of the hypothalamic-pituitary-adrenocortical (HPA) system. Such models have typically examined the effects of stress on the animals' behavior, endocrine function (primarily the HPA and hypothalamic-pituitary-gonadal systems), and, in some cases, immune status. Manipulations such as relocation and/or removal of an animal from its current social group or, alternatively, formation of a new social group can have adverse effects on all of these outcome measures that may be either transient or more persistent depending on the species, sex, and other experimental conditions. Social primates may also experience significant stress associated with their rank in the group's dominance hierarchy. Finally, stress during prenatal development or during the early postnatal period may have long-lasting neurobiological and endocrine effects that manifest in an altered ability to cope behaviorally and physiologically with later challenges. Whereas early exposure to severe stress usually results in deficient coping abilities, certain kinds of milder stressors can promote subsequent resilience in the animal. We conclude that studies of stress in nonhuman primates can model many features of stress exposure in human populations and that such studies can play a valuable role in helping to elucidate the mechanisms underlying the role of stress in human psychopathology and endocrine dysfunction.
Éric Fanchon
2012-08-01
This paper presents a novel framework for the modeling of biological networks. It makes use of recent tools for analyzing the robust satisfaction of properties of (hybrid) dynamical systems. The main challenge of this approach as applied to biological systems is to gain access to the relevant parameter sets despite gaps in the available knowledge. An initial estimate of useful parameters was sought by formalizing the known behavior of the biological network in signal temporal logic (STL) using the tool Breach. Then, once a set of parameter values consistent with known biological properties was found, we tried to locally expand it into the largest possible valid region. We applied this methodology in an effort to model and better understand the complex network regulating iron homeostasis in mammalian cells. This system plays an important role in many biological functions, including erythropoiesis, resistance against infections, and the proliferation of cancer cells.
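The quantitative robustness that tools like Breach compute for STL formulas can be illustrated for the simplest operators over a sampled trace (this sketch follows the standard robust semantics and is not Breach's actual API; the trace and thresholds are invented):

```python
import numpy as np

# Quantitative robustness of simple STL fragments over a sampled trace x[t].
# Positive robustness means the property holds with that margin; negative
# means it is violated by that margin.

def rob_gt(x, c):
    """Atomic predicate x > c: pointwise robustness is the margin x - c."""
    return x - c

def rob_always(rho):
    """G(phi): the worst-case (minimum) robustness over the trace."""
    return np.min(rho)

def rob_eventually(rho):
    """F(phi): the best-case (maximum) robustness over the trace."""
    return np.max(rho)

# Example: a signal must always stay above 0.2 and eventually exceed 0.8
trace = np.array([0.5, 0.6, 0.9, 0.4, 0.3])
always_margin = rob_always(rob_gt(trace, 0.2))          # ~0.1: satisfied
eventually_margin = rob_eventually(rob_gt(trace, 0.8))  # ~0.1: satisfied
```

A positive margin over a whole region of parameter space is what makes it possible to expand a consistent parameter set outward, as the paper describes: parameters can move until some formula's robustness crosses zero.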