Calibration of a stochastic health evolution model using NHIS data
Gupta, Aparna; Li, Zhisheng
2011-10-01
This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) are used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology whose output can serve as a crucial component of decision support for strategic planning of health-related financing and risk management.
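A minimal sketch of one piece of this kind of calibration: maximum-likelihood estimation of a discrete health-state transition matrix from simulated panel data. The four-state chain, its transition probabilities, and the cohort size are all illustrative assumptions, not the paper's HSI/RFS construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 discrete health states with an assumed true
# transition matrix (stand-in for the finite-outcome health process).
P_true = np.array([
    [0.90, 0.07, 0.02, 0.01],
    [0.10, 0.80, 0.07, 0.03],
    [0.02, 0.10, 0.80, 0.08],
    [0.00, 0.02, 0.08, 0.90],
])

def simulate(P, n_people=2000, n_years=10):
    # Yearly health trajectories for a cohort (stand-in for survey panels).
    states = np.zeros((n_people, n_years), dtype=int)
    for t in range(1, n_years):
        for i in range(n_people):
            states[i, t] = rng.choice(4, p=P[states[i, t - 1]])
    return states

data = simulate(P_true)

# MLE for a Markov chain: transition probabilities are normalized counts.
counts = np.zeros((4, 4))
for i, j in zip(data[:, :-1].ravel(), data[:, 1:].ravel()):
    counts[i, j] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)
```

For a multinomial transition model the MLE has this closed form; the paper's mixed-integer formulation arises from the additional index and risk-group structure, which is omitted here.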
Stochastic isotropic hyperelastic materials: constitutive calibration and model selection
Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain
2018-03-01
Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.
AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)
Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...
Stochastic calibration and learning in nonstationary hydroeconomic models
Maneta, M. P.; Howitt, R.
2014-05-01
Concern about water scarcity and adverse climate events over agricultural regions has motivated a number of efforts to develop operational integrated hydroeconomic models to guide adaptation and optimal use of water. Once calibrated, these models are used for water management and analysis assuming they remain valid under future conditions. In this paper, we present and demonstrate a methodology that permits the recursive calibration of economic models of agricultural production from noisy but frequently available data. We use a standard economic calibration approach, namely positive mathematical programming (PMP), integrated in a data assimilation algorithm based on the ensemble Kalman filter (EnKF) equations to identify the economic model parameters. A moving average kernel ensures that new and past information on agricultural activity is blended during the calibration process, avoiding loss of information and overcalibration for the conditions of a single year. A regularization constraint akin to standard Tikhonov regularization is included in the filter to ensure its stability even in the presence of parameters with low sensitivity to observations. The results show that the implementation of the PMP methodology within a data assimilation framework based on the EnKF equations is an effective method to calibrate models of agricultural production even with noisy information. The recursive nature of the method incorporates new information as an added value to the known previous observations of agricultural activity without the need to store historical information. The robustness of the method opens the door to the use of new remote sensing algorithms for operational water management.
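The core EnKF parameter update can be sketched in a few lines. This is a generic stochastic (perturbed-observation) EnKF step on a toy linear "production model", not the paper's PMP integration; the observation operator `H`, noise level, and prior spread are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: observed outputs are a linear function of two hidden
# economic parameters theta (stand-in for the calibrated model).
H = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.7]])
theta_true = np.array([2.0, -1.0])
obs_std = 0.1
y_obs = H @ theta_true + rng.normal(0, obs_std, 3)

# Prior parameter ensemble.
n_ens = 500
Theta = rng.normal(0, 2.0, (n_ens, 2))

# EnKF update: Kalman gain built from ensemble covariances, applied
# with perturbed observations (stochastic EnKF).
Y = Theta @ H.T                               # predicted observations
C_ty = np.cov(Theta.T, Y.T)[:2, 2:]           # parameter-output cross-cov
C_yy = np.cov(Y.T) + obs_std**2 * np.eye(3)   # innovation covariance
K = C_ty @ np.linalg.inv(C_yy)
perturbed = y_obs + rng.normal(0, obs_std, (n_ens, 3))
Theta_post = Theta + (perturbed - Y) @ K.T

theta_hat = Theta_post.mean(axis=0)
```

The paper's recursion additionally blends past observations with a moving-average kernel and adds a Tikhonov-like constraint; both would enter as modifications of the gain computation above.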
Energy Technology Data Exchange (ETDEWEB)
Sun, Kaiyu; Yan, Da; Hong, Tianzhen; Guo, Siyue
2014-02-28
Overtime is a common phenomenon around the world. Overtime drives both internal heat gains from occupants, lighting and plug-loads, and HVAC operation during overtime periods. Overtime leads to longer occupancy hours and extended operation of building services systems beyond normal working hours, thus overtime impacts total building energy use. Current literature lacks methods to model overtime occupancy because overtime is stochastic in nature and varies by individual occupants and by time. To address this gap in the literature, this study aims to develop a new stochastic model based on the statistical analysis of measured overtime occupancy data from an office building. A binomial distribution is used to represent the total number of occupants working overtime, while an exponential distribution is used to represent the duration of overtime periods. The overtime model is used to generate overtime occupancy schedules as an input to the energy model of a second office building. The measured and simulated cooling energy use during the overtime period is compared in order to validate the overtime model. A hybrid approach to energy model calibration is proposed and tested, which combines ASHRAE Guideline 14 for the calibration of the energy model during normal working hours, and a proposed KS test for the calibration of the energy model during overtime. The developed stochastic overtime model and the hybrid calibration approach can be used in building energy simulations to improve the accuracy of results, and better understand the characteristics of overtime in office buildings.
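The two-distribution structure described above is easy to sketch: a binomial draw for how many occupants work overtime on a given day, and exponential draws for how long each stays. The parameter values below are illustrative assumptions, not the measured values from the study's office building.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed parameters (illustrative, not from the paper's measured data):
n_occupants = 40     # occupants present at the end of normal hours
p_overtime = 0.15    # probability an individual works overtime
mean_duration = 1.5  # mean overtime duration in hours

def sample_overtime_day():
    # Binomial: how many occupants stay late today.
    n_stay = rng.binomial(n_occupants, p_overtime)
    # Exponential: how long each of them stays.
    durations = rng.exponential(mean_duration, n_stay)
    return durations

# Generate a year of overtime schedules as an energy-model input.
year = [sample_overtime_day() for _ in range(260)]  # working days
avg_person_hours = np.mean([d.sum() for d in year])
```

Each day's array of durations would be converted into an hourly occupancy schedule before being passed to the building energy model.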
Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-05-01
A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on the k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minimum. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.
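The clustering step can be sketched as plain k-means applied to an ensemble of parameter vectors, with each resulting cluster treated as a sub-ensemble. The two-mode ensemble and the deterministic initialization below are illustrative assumptions; the actual ISEM update applied inside each sub-ensemble is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ensemble of 2-D parameter vectors drawn around two distinct modes
# (stand-in for an ISEM ensemble in a multi-modal inverse problem).
ensemble = np.vstack([
    rng.normal([-3.0, 0.0], 0.4, (50, 2)),
    rng.normal([+3.0, 1.0], 0.4, (50, 2)),
])

def kmeans(X, k, iters=50):
    # Plain k-means; each resulting cluster becomes a sub-ensemble.
    centers = X[[0, len(X) - 1]].copy()   # simple deterministic init
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(ensemble, k=2)
sub_ensembles = [ensemble[labels == j] for j in range(2)]
```

In the full algorithm this clustering is re-run at regular intervals so that sub-ensembles converging toward the same local minimum can be merged.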
Haberlandt, U.; Radtke, I.
2014-01-01
Derived flood frequency analysis allows the estimation of design floods with hydrological modeling for poorly observed basins considering change and taking into account flood protection measures. There are several possible choices regarding precipitation input, discharge output and consequently the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets and to propose the most suitable approach. Event based and continuous, observed hourly rainfall data as well as disaggregated daily rainfall and stochastically generated hourly rainfall data are used as input for the model. As output, short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the obtained different model parameter sets for continuous simulation of discharge in an independent validation period and by comparing the model-derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS (Hydrologic Engineering Center's Hydrologic Modeling System). The results show that (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series with moderate length but not vice versa, and (III) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests calibrating a hydrological model directly on probability distributions of observed peak flows using stochastic rainfall as input if its purpose is the
Elsheikh, Ahmed H.
2013-06-01
We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the most correlated basis function with the residual from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
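The greedy structure is easiest to see in the linear analogue of NOMP, i.e. ordinary orthogonal matching pursuit with a Tikhonov-regularized solve on the current support. The random dictionary below is a stand-in for a K-SVD-trained one, and the forward operator is linear rather than a flow simulator.

```python
import numpy as np

rng = np.random.default_rng(4)

# Overcomplete dictionary (random stand-in for a K-SVD dictionary).
n, m = 100, 200
D = rng.normal(size=(n, m))
D /= np.linalg.norm(D, axis=0)

# Sparse ground truth: 3 active basis functions.
x_true = np.zeros(m)
x_true[[10, 50, 120]] = [2.0, -1.5, 1.0]
y = D @ x_true + rng.normal(0, 0.01, n)

def omp(D, y, n_iters=5, tik=1e-6):
    support, r = [], y.copy()
    for _ in range(n_iters):
        # Greedy step: basis function most correlated with the residual.
        k = int(np.argmax(np.abs(D.T @ r)))
        if k not in support:
            support.append(k)
        # Solve on the current support with Tikhonov regularization.
        A = D[:, support]
        w = np.linalg.solve(A.T @ A + tik * np.eye(len(support)), A.T @ y)
        r = y - A @ w
    return support, w, r

support, w, r = omp(D, y)
```

In NOMP proper, the correlation step uses a gradient that is stochastically approximated with ISEM because the forward model is nonlinear.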
Boosting iterative stochastic ensemble method for nonlinear calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
A novel parameter estimation algorithm is proposed. The inverse problem is formulated as a sequential data integration problem in which Gaussian process regression (GPR) is used to integrate the prior knowledge (static data). The search space is further parameterized using Karhunen-Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative stochastic ensemble method (ISEM). ISEM employs directional derivatives within a Gauss-Newton iteration for efficient gradient estimation. The resulting update equation relies on the inverse of the output covariance matrix, which is rank-deficient. In the proposed algorithm we use an iterative regularization based on the ℓ2 Boosting algorithm. ℓ2 Boosting iteratively fits the residual, and the amount of regularization is controlled by the number of iterations. A termination criterion based on the Akaike information criterion (AIC) is utilized. This regularization method is very attractive in terms of performance and simplicity of implementation. The proposed algorithm combining ISEM and ℓ2 Boosting is evaluated on several nonlinear subsurface flow parameter estimation problems. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and by the error convergence rates. © 2013 Elsevier B.V.
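Componentwise ℓ2 Boosting with an AIC stopping rule can be sketched on a linear test problem standing in for the linearized inverse step. Using the iteration count as the degrees-of-freedom term in AIC is a crude proxy assumed here for brevity; the shrinkage factor `nu` and problem sizes are also illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear test problem standing in for the linearized inverse step.
n, p = 80, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + rng.normal(0, 0.5, n)

def l2boost(X, y, nu=0.5, max_iters=200):
    # Componentwise L2 boosting: repeatedly fit the residual with the
    # single best predictor, shrunk by nu; keep the iterate with the
    # smallest AIC, using the iteration count as a crude df proxy.
    n = len(y)
    f = np.zeros(n)
    coef = np.zeros(X.shape[1])
    best_aic, best_coef = np.inf, coef.copy()
    for it in range(1, max_iters + 1):
        r = y - f
        scores = X.T @ r
        j = int(np.argmax(np.abs(scores)))
        b = scores[j] / (X[:, j] @ X[:, j])
        coef[j] += nu * b
        f += nu * b * X[:, j]
        aic = n * np.log(np.mean((y - f) ** 2)) + 2 * it
        if aic < best_aic:
            best_aic, best_coef = aic, coef.copy()
    return best_coef

beta_hat = l2boost(X, y)
```

The number of boosting iterations plays the role of the regularization parameter: stopping early corresponds to stronger regularization of the rank-deficient update.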
Lanchier, Nicolas
2017-01-01
Three coherent parts form the material covered in this text, portions of which have not been widely covered in traditional textbooks. The reader is quickly introduced to several different topics, enriched with 175 exercises that focus on real-world problems. Exercises range from the classics of probability theory to more exotic research-oriented problems based on numerical simulations. Intended for graduate students in mathematics and applied sciences, the text provides the tools and training needed to write and use programs for research purposes. The first part of the text begins with a brief review of measure theory and revisits the main concepts of probability theory, from random variables to the standard limit theorems. The second part covers traditional material on stochastic processes, including martingales, discrete-time Markov chains, Poisson processes, and continuous-time Markov chains. The theory developed is illustrated by a variety of examples surrounding applications such as the ...
Stochastic forward and inverse groundwater flow and solute transport modeling
Janssen, G.M.C.M.
2008-01-01
Keywords: calibration, inverse modeling, stochastic modeling, nonlinear biodegradation, stochastic-convective, advective-dispersive, travel time, network design, non-Gaussian distribution, multimodal distribution, representers
This thesis offers three new approaches that contribute
Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.
2013-12-01
Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to lacking information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching to pressure time series from a pilot storage site operated in Europe, maintained during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches for calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore handle directly the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. Next, we combine the aPC with bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. The usually high computational costs of
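The two-step pattern (cheap surrogate, then brute-force Bayesian weighting) can be sketched on a scalar toy problem. The logistic "forward model", noise level, and plain monomial basis are all illustrative assumptions; aPC would construct the polynomial basis from the prior's sample moments rather than use monomials.

```python
import numpy as np

rng = np.random.default_rng(6)

# Expensive forward model stand-in: pressure response to one scalar
# log-permeability zone multiplier k.
def forward(k):
    return 5.0 / (1.0 + np.exp(-k)) + 1.0

k_true = 0.8
y_obs = forward(k_true) + rng.normal(0, 0.05)

# Step 1: surrogate -- fit a cubic polynomial to a handful of forward
# runs (the expansion-based reduced model; plain monomials for brevity).
k_nodes = np.linspace(-2, 2, 7)
coeffs = np.polyfit(k_nodes, [forward(k) for k in k_nodes], deg=3)
surrogate = lambda k: np.polyval(coeffs, k)

# Step 2: bootstrap filter -- weight prior samples by the surrogate
# likelihood and resample.
prior = rng.normal(0.0, 1.0, 5000)
loglik = -0.5 * ((y_obs - surrogate(prior)) / 0.05) ** 2
w = np.exp(loglik - loglik.max())
w /= w.sum()
posterior = rng.choice(prior, size=5000, p=w, replace=True)
k_hat = posterior.mean()
```

The key economy is that the 5000 likelihood evaluations hit the polynomial surrogate, not the 12-day simulator; only the handful of node runs require the full model.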
Greenwood, Priscilla E
2016-01-01
This book describes a large number of open problems in the theory of stochastic neural systems, with the aim of enticing probabilists to work on them. This includes problems arising from stochastic models of individual neurons as well as those arising from stochastic models of the activities of small and large networks of interconnected neurons. The necessary neuroscience background to these problems is outlined within the text, so readers can grasp the context in which they arise. This book will be useful for graduate students and instructors providing material and references for applying probability to stochastic neuron modeling. Methods and results are presented, but the emphasis is on questions where additional stochastic analysis may contribute neuroscience insight. An extensive bibliography is included. Dr. Priscilla E. Greenwood is a Professor Emerita in the Department of Mathematics at the University of British Columbia. Dr. Lawrence M. Ward is a Professor in the Department of Psychology and the Brain...
Campo, M. A.; Lopez, J. J.; Rebole, J. P.
2012-04-01
This work was carried out in the north of Spain. The San Sebastian meteorological station, where precipitation records are available every ten minutes, was selected. The precipitation data cover October 1927 to September 1997. Pulse models describe the temporal process of rainfall as a succession of rainy cells: a main storm process, whose origins are distributed in time according to a Poisson process, and a secondary process that generates a random number of rain cells within each storm. Among the different pulse models, the Bartlett-Lewis model was used. On the other hand, alternative renewal processes and Markov chains describe the way in which the process will evolve in the future depending only on the current state; they are therefore not dependent on past events. Two basic processes are considered when describing the occurrence of rain: the alternation of wet and dry periods, and the temporal distribution of rainfall within each rain event, which determines the rainwater collected in each of the intervals that make up the rain. This allows the introduction of alternative renewal processes and Markov chains of three states, where interstorm time is given by either of the two dry states, short or long. Thus, the stochastic Markov chain model tries to reproduce the basis of pulse models, the succession of storms each composed of a series of rain cells separated by short intervals, without the theoretical complexity of pulse models. In a first step, we analyzed all variables involved in the sequential process of the rain: rain event duration, duration of non-rain events, average rainfall intensity in rain events, and finally, temporal distribution of rainfall within the rain event. Additionally, for calibration of the Bartlett-Lewis pulse model, the main descriptive statistics were calculated for each month, considering the seasonality of rainfall. In a second step, both models were calibrated. Finally, synthetic series were simulated with the calibration parameters; series
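The three-state Markov chain for rainfall occurrence can be sketched directly. The states and the transition matrix below are illustrative assumptions, not the calibrated San Sebastian values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Three states: 0 = rain, 1 = short dry (within-storm gap),
# 2 = long dry (interstorm). Probabilities are illustrative.
P = np.array([
    [0.60, 0.30, 0.10],   # from rain
    [0.70, 0.25, 0.05],   # from short dry
    [0.15, 0.05, 0.80],   # from long dry
])

def simulate_chain(n_steps, start=2):
    states = [start]
    for _ in range(n_steps - 1):
        states.append(rng.choice(3, p=P[states[-1]]))
    return np.array(states)

chain = simulate_chain(10_000)
wet_fraction = np.mean(chain == 0)
```

A full generator would additionally draw a rainfall depth for each wet step from a fitted intensity distribution, which is what makes the synthetic series usable for design studies.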
Calibration and simulation of Heston model
Directory of Open Access Journals (Sweden)
Mrázek Milan
2017-05-01
We calibrate the Heston stochastic volatility model to real market data using several optimization techniques. We compare both global and local optimizers for different weights, showing remarkable differences even for data (DAX options) from two consecutive days. We provide a novel calibration procedure that incorporates the usage of an approximation formula and significantly outperforms other existing calibration methods.
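The calibration machinery itself (weighted least squares over option quotes) can be sketched independently of the pricer. Here a toy parametric implied-volatility smile stands in for the Heston pricer; the model, its parameters, and the ATM-heavy weighting scheme are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(8)

# Toy implied-volatility smile standing in for the Heston pricer: the
# calibration loop (weighted least squares over quotes) is the same.
def model_iv(params, strikes):
    level, skew, curv = params
    m = np.log(strikes / 100.0)          # log-moneyness, spot = 100
    return level + skew * m + curv * m ** 2

strikes = np.linspace(80, 120, 15)
true_params = np.array([0.20, -0.10, 0.30])
quotes = model_iv(true_params, strikes) + rng.normal(0, 0.001, strikes.size)

# ATM-heavy weights, one common choice among those compared in practice.
weights = 1.0 / np.maximum(np.abs(np.log(strikes / 100.0)), 0.05)

def residuals(params):
    return weights * (model_iv(params, strikes) - quotes)

fit = least_squares(residuals, x0=np.array([0.3, 0.0, 0.0]))
```

Swapping `model_iv` for a genuine Heston pricer (and trying several starting points or a global optimizer first) reproduces the comparison the abstract describes: the choice of weights and optimizer can change the fitted parameters noticeably.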
Model Calibration in Option Pricing
Directory of Open Access Journals (Sweden)
Andre Loerx
2012-04-01
We consider calibration problems for models of pricing derivatives that occur in mathematical finance. We discuss various approaches, such as using stochastic differential equations or partial differential equations for the modeling process. We discuss the development in the past literature and give an outlook into modern approaches of modelling. Furthermore, we address important numerical issues in the valuation of options and likewise the calibration of these models. This leads to interesting problems in optimization, where, e.g., the use of adjoint equations or the choice of the parametrization for the model parameters plays an important role.
Electricity price modeling with stochastic time change
International Nuclear Information System (INIS)
Borovkova, Svetlana; Schmeck, Maren Diane
2017-01-01
In this paper, we develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. This technique allows us to incorporate the characteristic features of electricity prices (such as seasonal volatility, time varying mean reversion and seasonally occurring price spikes) into the model in an elegant and economically justifiable way. The stochastic time change introduces stochastic as well as deterministic (e.g., seasonal) features in the price process' volatility and in the jump component. We specify the base process as a mean reverting jump diffusion and the time change as an absolutely continuous stochastic process with seasonal component. The activity rate of the stochastic time change can be related to the factors that influence supply and demand. Here we use the temperature as a proxy for the demand and hence, as the driving factor of the stochastic time change, and show that this choice leads to realistic price paths. We derive properties of the resulting price process and develop the model calibration procedure. We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths by Monte Carlo simulations. We show that the simulated price process matches the distributional characteristics of the observed electricity prices in periods of both high and low demand.
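A minimal simulation sketch of a time-changed, mean-reverting jump diffusion: the business-time increment per calendar day is given by a seasonal activity rate, and both the diffusion and the jump intensity run on that business clock. All parameter values (and the purely deterministic seasonal activity, in place of a temperature-driven stochastic one) are illustrative assumptions, not the calibrated EEX values.

```python
import numpy as np

rng = np.random.default_rng(9)

# Seasonal activity rate of the time change (business time per
# calendar day); illustrative, not temperature-driven as in the paper.
def activity(t):                       # t in days
    return 1.0 + 0.5 * np.cos(2 * np.pi * t / 365.0)

# Base process: mean-reverting jump diffusion in business time,
# sampled on a calendar-day grid via increments of the time change.
kappa, mu, sigma = 0.2, 3.5, 0.15      # mean reversion, level, volatility
jump_rate, jump_size = 0.05, 1.0       # jumps per unit business time

days = 730
x = np.empty(days)
x[0] = mu
for t in range(1, days):
    dtau = activity(t)                 # business-time increment today
    n_jumps = rng.poisson(jump_rate * dtau)
    jumps = rng.exponential(jump_size, n_jumps).sum()
    x[t] = (x[t - 1] + kappa * (mu - x[t - 1]) * dtau
            + sigma * np.sqrt(dtau) * rng.normal() + jumps)

log_price = x                          # model log-price path
```

Because both volatility and jump frequency scale with `activity(t)`, spikes cluster in high-activity seasons, which is the qualitative behavior the stochastic time change is designed to produce.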
Stochastic modelling of turbulence
DEFF Research Database (Denmark)
Sørensen, Emil Hedevang Lohse
previously been shown to be closely connected to the energy dissipation. The incorporation of the small scale dynamics into the spatial model opens the door to a fully fledged stochastic model of turbulence. Concerning the interaction of wind and wind turbine, a new method is proposed to extract wind turbine...
Stochastic Control - External Models
DEFF Research Database (Denmark)
Poulsen, Niels Kjølstad
2005-01-01
This note is devoted to control of stochastic systems described in discrete time. We are concerned with external descriptions or transfer function models, where we have a dynamic model for the input-output relation only (i.e., no direct internal information). The methods are based on LTI systems...
Stochasticity Modeling in Memristors
Naous, Rawan; Al-Shedivat, Maruan; Salama, Khaled N.
2015-01-01
Diverse models have been proposed over the past years to explain the behavior of memristors, the fourth fundamental circuit element. The models vary in complexity, ranging from descriptions of physical mechanisms to more generalized mathematical modeling. Nonetheless, stochasticity, a widely observed phenomenon, has been largely overlooked from the modeling perspective. This inherent variability within the operation of the memristor is a vital feature for the integration of this nonlinear device into the stochastic electronics realm of study. In this paper, experimentally observed innate stochasticity is modeled in a circuit-compatible format. The model proposed is generic and could be incorporated into variants of threshold-based memristor models in which apparent variations in the output hysteresis convey the switching threshold shift. Further application as a noise injection alternative paves the way for novel approaches in the fields of neuromorphic engineering circuit design. On the other hand, extra caution needs to be paid to variability-intolerant digital designs based on non-deterministic memristor logic.
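The essence of a stochastic-threshold memristor model can be sketched by drawing the switching voltage from a distribution on each trial, so the hysteresis loop shifts from cycle to cycle. The Gaussian threshold and its parameters are illustrative assumptions, not fitted to a device.

```python
import numpy as np

rng = np.random.default_rng(10)

# Threshold-based switching with a stochastic threshold: each cycle
# draws the switching voltage anew, shifting the hysteresis loop.
v_th_mean, v_th_std = 1.0, 0.1   # illustrative threshold statistics

def switching_probability(v_applied, n_trials=10_000):
    # Fraction of trials in which the device switches at this voltage.
    v_th = rng.normal(v_th_mean, v_th_std, n_trials)
    return np.mean(v_applied > v_th)

p_low = switching_probability(0.7)
p_mid = switching_probability(1.0)
p_high = switching_probability(1.3)
```

This is exactly the "noise injection" view: near the nominal threshold the device behaves as a tunable random bit, which is useful for neuromorphic circuits but hazardous for deterministic digital logic.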
Identifiability in stochastic models
1992-01-01
The problem of identifiability is basic to all statistical methods and data analysis, occurring in such diverse areas as Reliability Theory, Survival Analysis, and Econometrics, where stochastic modeling is widely used. Mathematics dealing with identifiability per se is closely related to the so-called branch of "characterization problems" in Probability Theory. This book brings together relevant material on identifiability as it occurs in these diverse fields.
Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P
2008-01-01
Introduction and Examples: Introduction; Examples of data sets. Basic Model Fitting: Introduction; Maximum-likelihood estimation for a geometric model; Maximum-likelihood for the beta-geometric model; Modelling polyspermy; Which model?; What is a model for?; Mechanistic models. Function Optimisation: Introduction; MATLAB: graphs and finite differences; Deterministic search methods; Stochastic search methods; Accuracy and a hybrid approach. Basic Likelihood Tools: Introduction; Estimating standard errors and correlations; Looking at surfaces: profile log-likelihoods; Confidence regions from profiles; Hypothesis testing in model selection; Score and Wald tests; Classical goodness of fit; Model selection bias. General Principles: Introduction; Parameterisation; Parameter redundancy; Boundary estimates; Regression and influence; The EM algorithm; Alternative methods of model fitting; Non-regular problems. Simulation Techniques: Introduction; Simulating random variables; Integral estimation; Verification; Monte Carlo inference; Estimating sampling distributi...
Stochastic ontogenetic growth model
West, B. J.; West, D.
2012-02-01
An ontogenetic growth model (OGM) for a thermodynamically closed system is generalized to satisfy both the first and second law of thermodynamics. The hypothesized stochastic ontogenetic growth model (SOGM) is shown to entail the interspecies allometry relation by explicitly averaging the basal metabolic rate and the total body mass over the steady-state probability density for the total body mass (TBM). This is the first derivation of the interspecies metabolic allometric relation from a dynamical model and the asymptotic steady-state distribution of the TBM is fit to data and shown to be inverse power law.
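A hypothesized SDE in the spirit of the SOGM can be simulated with Euler-Maruyama: West-type ontogenetic drift plus multiplicative noise. The drift form, coefficients, and noise amplitude below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothesized growth SDE (illustrative coefficients):
#   dm = (a m^{3/4} - b m) dt + c m dW
a, b, c = 1.0, 0.25, 0.05
dt, n_steps = 0.02, 100_000

m = 1.0
path = np.empty(n_steps)
for i in range(n_steps):
    m += (a * m ** 0.75 - b * m) * dt + c * m * np.sqrt(dt) * rng.normal()
    m = max(m, 1e-6)        # keep mass positive under Euler steps
    path[i] = m

m_inf = (a / b) ** 4        # deterministic asymptotic mass (= 256 here)
late_mean = path[n_steps // 2:].mean()
```

The deterministic part has the fixed point `m_inf = (a/b)^4`; the simulated total body mass fluctuates around it, and it is the steady-state distribution of such fluctuations that the paper averages over to obtain the interspecies allometric relation.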
International Nuclear Information System (INIS)
Ahlers, C.F.; Liu, H.H.
2001-01-01
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M&O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as for Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions
International Nuclear Information System (INIS)
Ahlers, C.; Liu, H.
2000-01-01
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as for Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions
Alternative Asymmetric Stochastic Volatility Models
M. Asai (Manabu); M.J. McAleer (Michael)
2010-01-01
The stochastic volatility model usually incorporates asymmetric effects by introducing negative correlation between the innovations in returns and volatility. In this paper, we propose a new asymmetric stochastic volatility model, based on the leverage and size effects. The model is
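The leverage mechanism mentioned above, negative correlation between return and log-volatility innovations, can be sketched with a standard discrete-time SV simulation. All parameter values are illustrative assumptions, and this is the plain leverage SV model, not the paper's extended leverage-and-size specification.

```python
import numpy as np

rng = np.random.default_rng(12)

# Discrete-time SV with leverage: log-volatility h_t follows an AR(1)
# whose innovation is negatively correlated with the return innovation.
n, mu_h, phi, sigma_h, rho = 20_000, -1.0, 0.95, 0.2, -0.7

z = rng.standard_normal((n, 2))
eps = z[:, 0]                                          # return innovation
eta = rho * z[:, 0] + np.sqrt(1 - rho ** 2) * z[:, 1]  # vol innovation

h = np.empty(n)
h[0] = mu_h
for t in range(1, n):
    h[t] = mu_h + phi * (h[t - 1] - mu_h) + sigma_h * eta[t - 1]
r = np.exp(h / 2) * eps                                # returns

sample_leverage = np.corrcoef(eps, eta)[0, 1]
```

With `rho < 0`, a negative return shock today tends to raise volatility tomorrow, which reproduces the empirical leverage effect that asymmetric SV models are built to capture.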
Stochastic Models of Polymer Systems
2016-01-01
Distribution unlimited. Final Report: Stochastic Models of Polymer Systems, Princeton University, Princeton, NJ, 14-Mar-2014. The views, opinions and/or findings contained in this report are those of the...
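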
International Nuclear Information System (INIS)
Ghezzehej, T.
2004-01-01
The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency
Stochastic Still Water Response Model
DEFF Research Database (Denmark)
Friis-Hansen, Peter; Ditlevsen, Ove Dalager
2002-01-01
In this study a stochastic field model for the still water loading is formulated where the statistics (mean value, standard deviation, and correlation) of the sectional forces are obtained by integration of the load field over the relevant part of the ship structure. The objective of the model is to establish the stochastic load field conditional on a given draft and trim of the vessel. The model contributes to a realistic modelling of the stochastic load processes to be used in a reliability evaluation of the ship hull. Emphasis is given to container vessels. The formulation of the model for obtaining ... It turns out that an important parameter of the stochastic cargo field model is the mean number of containers delivered by each customer.
Simultaneous perturbation stochastic approximation for tidal models
Altaf, M.U.
2011-05-12
The Dutch continental shelf model (DCSM) is a shallow sea model of the entire continental shelf which is used operationally in the Netherlands to forecast storm surges in the North Sea. The forecasts are necessary to support the decision on the timely closure of the moveable storm surge barriers to protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for tidal calibration of the DCSM. The method uses objective function evaluations to obtain the gradient approximations. The central-difference gradient approximation uses only two objective function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed. The effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational costs required to produce these results. In doing so, comparison is made with a traditional steepest descent method and also with a newly developed proper orthogonal decomposition-based calibration method. The main findings are: (1) the SPSA method gives results comparable to the steepest descent method at little computational cost; (2) the SPSA method can be used to estimate a large number of parameters at little computational cost.
Simultaneous perturbation stochastic approximation for tidal models
Altaf, M.U.; Heemink, A.W.; Verlaan, M.; Hoteit, Ibrahim
2011-01-01
The Dutch continental shelf model (DCSM) is a shallow sea model of the entire continental shelf which is used operationally in the Netherlands to forecast storm surges in the North Sea. The forecasts are necessary to support the decision on the timely closure of the moveable storm surge barriers to protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for tidal calibration of the DCSM. The method uses objective function evaluations to obtain the gradient approximations. The central-difference gradient approximation uses only two objective function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed. The effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational costs required to produce these results. In doing so, comparison is made with a traditional steepest descent method and also with a newly developed proper orthogonal decomposition-based calibration method. The main findings are: (1) the SPSA method gives results comparable to the steepest descent method at little computational cost; (2) the SPSA method can be used to estimate a large number of parameters at little computational cost.
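The two-evaluation gradient approximation described in this abstract (Spall's simultaneous perturbation) can be sketched as follows. This is a generic illustration on a toy quadratic objective, not the DCSM bathymetry calibration; the gain constants and the objective are illustrative assumptions.

```python
import numpy as np

def spsa_minimize(f, theta0, iterations=500, a=0.1, c=0.1, seed=0):
    """Minimal SPSA: each iteration needs only two evaluations of f,
    independent of the dimension of theta (simultaneous perturbation)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(1, iterations + 1):
        a_k = a / k**0.602                 # standard decaying gain sequences
        c_k = c / k**0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Bernoulli +/-1
        # Central difference along one random direction; since delta_i = +/-1,
        # dividing elementwise by delta equals multiplying by delta.
        g_hat = (f(theta + c_k * delta) - f(theta - c_k * delta)) / (2.0 * c_k) * delta
        theta -= a_k * g_hat
    return theta

# Usage: "calibrate" 10 parameters of a toy quadratic misfit function.
theta_hat = spsa_minimize(lambda x: np.sum((x - 3.0) ** 2), np.zeros(10))
```

Note how the cost per iteration is two function evaluations whether theta has 10 entries or 10,000, which is the property the abstract highlights.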
Stochastic models: theory and simulation.
Energy Technology Data Exchange (ETDEWEB)
Field, Richard V., Jr.
2008-03-01
Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
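As a concrete instance of generating independent samples of a stochastic process model, the sketch below simulates paths of an Ornstein-Uhlenbeck process with the Euler-Maruyama scheme; the process choice and parameter values are illustrative assumptions, not taken from the report.

```python
import numpy as np

def ou_paths(n_paths, n_steps, dt=0.01, theta=1.0, mu=0.0, sigma=0.5, seed=0):
    """Euler-Maruyama samples of dX = theta*(mu - X) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n_paths, n_steps + 1))
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # Brownian increments
        x[:, i + 1] = x[:, i] + theta * (mu - x[:, i]) * dt + sigma * dw
    return x

# The stationary variance of an OU process is sigma^2 / (2*theta) = 0.125 here,
# which the end-of-path sample variance should approach.
paths = ou_paths(n_paths=2000, n_steps=2000)
var_end = paths[:, -1].var()
```

Such sampled paths are exactly the kind of input the report describes feeding into deterministic simulation codes as random inputs or boundary conditions.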
Stochastic models of cell motility
DEFF Research Database (Denmark)
Gradinaru, Cristian
2012-01-01
Cell motility and migration are central to the development and maintenance of multicellular organisms, and errors during this process can lead to major diseases. Consequently, the mechanisms and phenomenology of cell motility are currently under intense study. In recent years, a new interdisciplinary field focusing on the study of biological processes at the nanoscale level, with a range of technological applications in medicine and biological research, has emerged. The work presented in this thesis is at the interface of cell biology, image processing, and stochastic modeling. The stochastic models introduced here are based on persistent random motion, which I apply to real-life studies of cell motility on flat and nanostructured surfaces. These models aim to predict the time-dependent position of cell centroids in a stochastic manner, and conversely determine directly from experimental ...
Stochastic Modelling of Hydrologic Systems
DEFF Research Database (Denmark)
Jonsdottir, Harpa
2007-01-01
In this PhD project several stochastic modelling methods are studied and applied to various subjects in hydrology. The research was prepared at Informatics and Mathematical Modelling at the Technical University of Denmark. The thesis is divided into two parts. The first part contains an introduction and an overview of the papers published. Then an introduction to basic concepts in hydrology along with a description of hydrological data is given. Finally an introduction to stochastic modelling is given. The second part contains the research papers, in which the stochastic methods are described; at the time of publication these methods represented new contributions to hydrology. The second part also contains additional description of software used and a brief introduction to stiff systems. The system in one of the papers is stiff.
Stochastic Modelling of River Geometry
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Schaarup-Jensen, K.
1996-01-01
Numerical hydrodynamic river models are used in a large number of applications to estimate critical events for rivers. These estimates are subject to a number of uncertainties. In this paper, the problem of evaluating these estimates using probabilistic methods is considered. Stochastic models for river geometries are formulated and a coupling between hydraulic computational methods and numerical reliability methods is presented.
Observation models in radiocarbon calibration
International Nuclear Information System (INIS)
Jones, M.D.; Nicholls, G.K.
2001-01-01
The observation model underlying any calibration process dictates the precise mathematical details of the calibration calculations. Accordingly it is important that an appropriate observation model is used. Here this is illustrated with reference to the use of reservoir offsets where the standard calibration approach is based on a different model to that which the practitioners clearly believe is being applied. This sort of error can give rise to significantly erroneous calibration results. (author). 12 refs., 1 fig
Directory of Open Access Journals (Sweden)
Sezar Gülbaz
2015-01-01
The land development and increase in urbanization in a watershed affect water quantity and water quality. On one hand, urbanization provokes the adjustment of the geomorphic structure of the streams and ultimately raises the peak flow rate, which causes floods; on the other hand, it diminishes water quality, which results in an increase in Total Suspended Solids (TSS). Consequently, sediment accumulation downstream of urban areas is observed, which is not preferred for a longer life of dams. In order to overcome the sediment accumulation problem in dams, the amount of TSS in streams and in watersheds should be taken under control. Low Impact Development (LID) is a Best Management Practice (BMP) which may be used for this purpose. It is a land planning and engineering design method applied in managing storm water runoff in order to reduce flooding as well as simultaneously improve water quality. LID includes techniques to predict suspended solid loads in surface runoff generated over impervious urban surfaces. In this study, the impact of LID-BMPs on surface runoff and TSS is investigated by employing a calibrated hydrodynamic model for Sazlidere Watershed, which is located in Istanbul, Turkey. For this purpose, a calibrated hydrodynamic model was developed by using the Environmental Protection Agency Storm Water Management Model (EPA SWMM). For model calibration and validation, we set up a rain gauge and a flow meter in the field and obtained rainfall and flow rate data. We then selected several LID types, such as retention basins, vegetative swales and permeable pavement, and obtained their influence on peak flow rate and on pollutant buildup and washoff for TSS. Consequently, we observe the possible effects of LID on surface runoff and TSS in Sazlidere Watershed.
SURF Model Calibration Strategy
Energy Technology Data Exchange (ETDEWEB)
Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-10
SURF and SURFplus are high explosive reactive burn models for shock initiation and propagation of detonation waves. They are engineering models motivated by the ignition & growth concept of hot spots and, for SURFplus, a second slow reaction for the energy release from carbon clustering. A key feature of the SURF model is a partial decoupling between model parameters and detonation properties. This enables reduced sets of independent parameters to be calibrated sequentially for the initiation and propagation regimes. Here we focus on a methodology for fitting the initiation parameters to Pop plot data based on 1-D simulations to compute a numerical Pop plot. In addition, the strategy for fitting the remaining parameters for the propagation regime and failure diameter is discussed.
Stochastic Volatility and DSGE Models
DEFF Research Database (Denmark)
Andreasen, Martin Møller
This paper argues that a specification of stochastic volatility commonly used to analyze the Great Moderation in DSGE models may not be appropriate, because the level of a process with this specification does not have conditional or unconditional moments. This is unfortunate because agents may...
Stochastic-field cavitation model
International Nuclear Information System (INIS)
Dumond, J.; Magagnato, F.; Class, A.
2013-01-01
Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian “particles” or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations
Stochastic-field cavitation model
Dumond, J.; Magagnato, F.; Class, A.
2013-07-01
Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.
A Fractionally Integrated Wishart Stochastic Volatility Model
M. Asai (Manabu); M.J. McAleer (Michael)
2013-01-01
There has recently been growing interest in modeling and estimating alternative continuous time multivariate stochastic volatility models. We propose a continuous time fractionally integrated Wishart stochastic volatility (FIWSV) process. We derive the conditional Laplace transform of ...
Transport properties of stochastic Lorentz models
Beijeren, H. van
Diffusion processes are considered for one-dimensional stochastic Lorentz models, consisting of randomly distributed fixed scatterers and one moving light particle. In waiting time Lorentz models the light particle makes instantaneous jumps between scatterers after a stochastically distributed
Utilization of advanced calibration techniques in stochastic rock fall analysis of quarry slopes
Preh, Alexander; Ahmadabadi, Morteza; Kolenprat, Bernd
2016-04-01
In order to study rock fall dynamics, a research project was conducted by the Vienna University of Technology and the Austrian Central Labour Inspectorate (Federal Ministry of Labour, Social Affairs and Consumer Protection). A part of this project included 277 full-scale drop tests at three different quarries in Austria, with key parameters of the rock fall trajectories recorded. The tests involved a total of 277 boulders ranging from 0.18 to 1.8 m in diameter and from 0.009 to 8.1 Mg in mass. The geology of these sites included strong rock belonging to igneous, metamorphic and volcanic types. In this paper the results of the tests are used for calibration and validation of a new stochastic computer model. It is demonstrated that the error of the model (i.e. the difference between observed and simulated results) has a lognormal distribution. For two selected parameters, advanced calibration techniques, including the Markov chain Monte Carlo technique, maximum likelihood and the root mean square error (RMSE), are utilized to minimize the error. Validation of the model, based on the cross-validation technique, reveals that in general reasonable stochastic approximations of the rock fall trajectories are obtained in all dimensions, including runout, bounce heights and velocities. The approximations are compared to the measured data in terms of median, 95% and maximum values. The results of the comparisons indicate that approximate first-order predictions, using a single set of input parameters, are possible and can be used to aid practical hazard and risk assessment.
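The abstract's combination of a lognormal error model with maximum likelihood fitting can be illustrated generically: if the model error (here taken as the ratio observed/simulated) is lognormal, the MLE of its parameters is simply the mean and standard deviation of the log-errors. The data below are synthetic, not the Austrian drop-test measurements, and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic observed/simulated ratios, assumed lognormally distributed:
errors = rng.lognormal(mean=0.1, sigma=0.3, size=277)

log_e = np.log(errors)
mu_hat = log_e.mean()       # MLE of the log-mean
sigma_hat = log_e.std()     # MLE of the log-std (n in the denominator)
# RMSE relative to a perfect observed/simulated match of 1:
rmse = np.sqrt(np.mean((errors - 1.0) ** 2))
```

With the fitted (mu_hat, sigma_hat) one can then quote median and 95% values of the error distribution, in the spirit of the comparisons reported above.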
Stochastic diffusion models for substitutable technological innovations
Wang, L.; Hu, B.; Yu, X.
2004-01-01
Based on the analysis of firms' stochastic adoption behaviour, this paper first points out the necessity of building more practical stochastic models. Stochastic evolutionary models are then built for a substitutable innovation diffusion system. Finally, through computer simulation of the ...
Research on nonlinear stochastic dynamical price model
International Nuclear Information System (INIS)
Li Jiaorui; Xu Wei; Xie Wenxian; Ren Zhengzheng
2008-01-01
In consideration of the many uncertain factors existing in economic systems, a nonlinear stochastic dynamical price model subjected to Gaussian white noise excitation is proposed based on a deterministic model. A one-dimensional averaged Itô stochastic differential equation for the model is derived by using the stochastic averaging method, and applied to investigate the stability of the trivial solution and the first-passage failure of the stochastic price model. The stochastic price model and the methods presented in this paper are verified by numerical studies.
Stochastic models for atmospheric dispersion
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager
2003-01-01
Simple stochastic differential equation models have been applied by several researchers to describe the dispersion of tracer particles in the planetary atmospheric boundary layer and to form the basis for computer simulations of particle paths. To obtain the drift coefficient, empirical vertical velocity distributions that depend on height above the ground, both with respect to standard deviation and skewness, are substituted into the stationary Fokker-Planck equation. The particle position distribution is taken to be uniform (the well-mixed condition) and also a given dispersion coefficient ... positions close to the boundaries. Different rules have been suggested in the literature with justifications based on simulation studies. Herein the relevant stochastic differential equation model is formulated in a particular way. The formulation is based on the marginal transformation of the position ...
Stochastic modeling and analysis of telecoms networks
Decreusefond, Laurent
2012-01-01
This book addresses the stochastic modeling of telecommunication networks, introducing the main mathematical tools for that purpose, such as Markov processes, real and spatial point processes and stochastic recursions, and presenting a wide list of results on stability, performance and comparison of systems. The authors propose a comprehensive mathematical construction of the foundations of stochastic network theory: Markov chains and continuous time Markov chains are extensively studied using an original martingale-based approach. A complete presentation of stochastic recursions from an ...
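The Markov-chain machinery that such stochastic network models rest on can be shown in a minimal worked example: the stationary distribution pi solves pi P = pi with the entries of pi summing to one. The transition matrix below is an illustrative toy, not taken from the book.

```python
import numpy as np

# Transition matrix of a small discrete-time Markov chain (rows sum to 1).
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

# Stationary distribution: solve pi @ P = pi with sum(pi) = 1, replacing one
# redundant balance equation with the normalization constraint.
A = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
```

For this chain pi works out to (0.6, 0.3, 0.1), and the same linear-system pattern generalizes to the performance measures (e.g. long-run state occupancies) studied in the book.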
Stochastic modeling analysis and simulation
Nelson, Barry L
1995-01-01
A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se
Stochastic Subspace Modelling of Turbulence
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Pedersen, B. J.; Nielsen, Søren R.K.
2009-01-01
Turbulence of the incoming wind field is of paramount importance to the dynamic response of civil engineering structures. Hence reliable stochastic models of the turbulence should be available from which time series can be generated for dynamic response and structural safety analysis. In the paper, from a given positive definite cross-spectral density matrix a frequency response matrix is constructed which determines the turbulence vector as a linear filtration of Gaussian white noise. Finally, an accurate state space modelling method is proposed which allows selection of an appropriate model order, and estimation of a state space model for the vector turbulence process incorporating its phase spectrum in one stage, and its results are compared with a conventional ARMA modelling method.
Stochastic models, estimation, and control
Maybeck, Peter S
1982-01-01
This volume builds upon the foundations set in Volumes 1 and 2. Chapter 13 introduces the basic concepts of stochastic control and dynamic programming as the fundamental means of synthesizing optimal stochastic control laws.
Sequential neural models with stochastic layers
DEFF Research Database (Denmark)
Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich
2016-01-01
How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over ...
A stochastic model of hormesis
International Nuclear Information System (INIS)
Yakovlev, A.Yu.; Tsodikov, A.D.; Bass, L.
1993-01-01
In order to describe the life-prolonging effect of some agents that are harmful at higher doses, ionizing radiations in particular, a stochastic model is developed in terms of accumulation and progression of intracellular lesions caused by the environment and by the agent itself. The processes of lesion repair, operating at the molecular and cellular level, are assumed to be responsible for this hormesis effect within the framework of the proposed model. Properties of lifetime distributions, derived for analysis of animal experiments with prolonged and acute irradiation, are given special attention. The model provides efficient means of interpreting experimental findings, as evidenced by its application to analysis of some published data on the hormetic effects of prolonged irradiation and of procaine on animal longevity. 51 refs., 2 figs., 1 tab.
Stochastic models for tumoral growth
Escudero, Carlos
2006-02-01
Strong experimental evidence has indicated that tumor growth belongs to the molecular beam epitaxy universality class. This type of growth is characterized by the constraint of cell proliferation to the tumor border and the surface diffusion of cells at the growing edge. Tumor growth is thus conceived as a competition for space between the tumor and the host, and cell diffusion at the tumor border is an optimal strategy adopted for minimizing the pressure and helping tumor development. Two stochastic partial differential equations are reported in this paper in order to correctly model the physical properties of tumoral growth in (1+1) and (2+1) dimensions. The advantage of these models is that they reproduce the correct geometry of the tumor and are defined in terms of polar variables. An analysis of these models allows us to quantitatively estimate the response of the tumor to an unfavorable perturbation during growth.
Stochastic hyperfine interactions modeling library
Zacate, Matthew O.; Evenson, William E.
2011-04-01
The stochastic hyperfine interactions modeling library (SHIML) provides a set of routines to assist in the development and application of stochastic models of hyperfine interactions. The library provides routines written in the C programming language that (1) read a text description of a model for fluctuating hyperfine fields, (2) set up the Blume matrix, upon which the evolution operator of the system depends, and (3) find the eigenvalues and eigenvectors of the Blume matrix so that theoretical spectra of experimental techniques that measure hyperfine interactions can be calculated. The optimized vector and matrix operations of the BLAS and LAPACK libraries are utilized; however, there was a need to develop supplementary code to find an orthonormal set of (left and right) eigenvectors of complex, non-Hermitian matrices. In addition, example code is provided to illustrate the use of SHIML to generate perturbed angular correlation spectra for the special case of polycrystalline samples when anisotropy terms of higher order than A can be neglected. Program summary: Program title: SHIML. Catalogue identifier: AEIF_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIF_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU GPL 3. No. of lines in distributed program, including test data, etc.: 8224. No. of bytes in distributed program, including test data, etc.: 312 348. Distribution format: tar.gz. Programming language: C. Computer: Any. Operating system: LINUX, OS X. RAM: Varies. Classification: 7.4. External routines: TAPP [1], BLAS [2], a C-interface to BLAS [3], and LAPACK [4]. Nature of problem: In condensed matter systems, hyperfine methods such as nuclear magnetic resonance (NMR), Mössbauer effect (ME), muon spin rotation (μSR), and perturbed angular correlation spectroscopy (PAC) measure electronic and magnetic structure within Angstroms of nuclear probes through the hyperfine interaction. When ...
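The matched left and right eigenvectors of a complex non-Hermitian matrix that SHIML needs (for the Blume matrix) can be sketched with plain numpy: for a diagonalizable A with right-eigenvector columns in R, the rows of inv(R) are left eigenvectors automatically biorthonormal to the columns of R. This is a generic numerical illustration, not SHIML's C implementation, and the random test matrix is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# A generic complex non-Hermitian matrix (almost surely diagonalizable):
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

w, R = np.linalg.eig(A)   # eigenvalues, right eigenvectors (columns of R)
L = np.linalg.inv(R)      # rows of L are left eigenvectors: L[i] @ A = w[i] * L[i]

# Biorthonormality L @ R = I holds by construction, giving the spectral
# decomposition A = R @ diag(w) @ L used to build an evolution operator.
A_rebuilt = R @ np.diag(w) @ L
```

Exponentiating the decomposition (R @ diag(exp(w*t)) @ L) is then the standard route to the time-evolution operator whose traces yield theoretical spectra.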
CAM Stochastic Volatility Model for Option Pricing
Directory of Open Access Journals (Sweden)
Wanwan Huang
2016-01-01
The coupled additive and multiplicative (CAM) noises model is a stochastic volatility model for derivative pricing. Unlike the other stochastic volatility models in the literature, the CAM model uses two Brownian motions, one multiplicative and one additive, to model the volatility process. We provide empirical evidence that suggests a nontrivial relationship between the kurtosis and skewness of asset prices and that the CAM model is able to capture this relationship, whereas the traditional stochastic volatility models cannot. We introduce a control variate method and Monte Carlo estimators for some of the sensitivities (Greeks) of the model. We also derive an approximation for the characteristic function of the model.
Consistent Stochastic Modelling of Meteocean Design Parameters
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Sterndorff, M. J.
2000-01-01
Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...
Stochastic modeling of soil salinity
Suweis, S.; Porporato, A. M.; Daly, E.; van der Zee, S.; Maritan, A.; Rinaldo, A.
2010-12-01
A minimalist stochastic model of primary soil salinity is proposed, in which the rate of soil salinization is determined by the balance between dry and wet salt deposition and the intermittent leaching events caused by rainfall events. The equations for the probability density functions of salt mass and concentration are found by reducing the coupled soil moisture and salt mass balance equations to a single stochastic differential equation (generalized Langevin equation) driven by multiplicative Poisson noise. Generalized Langevin equations with multiplicative white Poisson noise pose the usual Ito (I) or Stratonovich (S) prescription dilemma. Different interpretations lead to different results, and choosing between the I and S prescriptions is crucial to describe correctly the dynamics of the model systems. We show how this choice can be determined by physical information about the timescales involved in the process. We also show that when the multiplicative noise is at most linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We then apply these results to the generalized Langevin equation that drives the salt mass dynamics. The stationary analytical solutions for the probability density functions of salt mass and concentration provide insight on the interplay of the main soil, plant and climate parameters responsible for long-term soil salinization. In particular, they show the existence of two distinct regimes, one where the mean salt mass remains nearly constant (or decreases) with increasing rainfall frequency, and another where mean salt content increases markedly with increasing rainfall frequency. As a result, relatively small reductions of rainfall in drier climates may entail dramatic shifts in long-term soil salinization trends, with significant consequences, e.g. for climate change impacts on rain-fed agriculture.
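A minimal numerical version of the salt mass balance described above: continuous dry/wet deposition, plus leaching events arriving as a Poisson process that each remove a random fraction of the stored salt. All parameter values are illustrative assumptions, not the paper's calibrated model, and the event handling is a simple Bernoulli-per-step approximation of the Poisson arrivals.

```python
import numpy as np

def simulate_salt_mass(t_max=2000.0, dt=0.01, deposition=1.0,
                       rain_freq=0.2, leach_frac=0.5, seed=0):
    """Salt mass m grows as dm/dt = deposition and suffers multiplicative
    Poisson jumps m -> (1 - u*leach_frac)*m at rainfall-driven events."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    m = 0.0
    trace = np.empty(n)
    for i in range(n):
        m += deposition * dt
        if rng.random() < rain_freq * dt:         # a leaching event this step
            m *= 1.0 - leach_frac * rng.random()  # remove a random fraction
        trace[i] = m
    return trace

trace = simulate_salt_mass()
# Balance of deposition against mean removal rate gives a stationary mean of
# deposition / (rain_freq * leach_frac/2) = 20 for these illustrative values.
mean_tail = trace[len(trace) // 2:].mean()
```

Sweeping rain_freq in such a simulation is one way to visualize the two regimes the abstract describes, with the mean salt mass responding differently to rainfall frequency in each.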
Stochastic quantization for the axial model
International Nuclear Information System (INIS)
Farina, C.; Montani, H.; Albuquerque, L.C.
1991-01-01
We use bosonization ideas to solve the axial model in the stochastic quantization framework. We obtain the fermion propagator of the theory decoupling directly the Langevin equation, instead of the Fokker-Planck equation. In the Appendix we calculate explicitly the anomalous divergence of the axial-vector current by using a regularization that does not break the Markovian character of the stochastic process
Stochastic models of intracellular transport
Bressloff, Paul C.
2013-01-09
The interior of a living cell is a crowded, heterogeneous, fluctuating environment. Hence, a major challenge in modeling intracellular transport is to analyze stochastic processes within complex environments. Broadly speaking, there are two basic mechanisms for intracellular transport: passive diffusion and motor-driven active transport. Diffusive transport can be formulated in terms of the motion of an overdamped Brownian particle. On the other hand, active transport requires chemical energy, usually in the form of adenosine triphosphate hydrolysis, and can be direction specific, allowing biomolecules to be transported long distances; this is particularly important in neurons due to their complex geometry. In this review a wide range of analytical methods and models of intracellular transport is presented. In the case of diffusive transport, narrow escape problems, diffusion to a small target, confined and single-file diffusion, homogenization theory, and fractional diffusion are considered. In the case of active transport, Brownian ratchets, random walk models, exclusion processes, random intermittent search processes, quasi-steady-state reduction methods, and mean-field approximations are considered. Applications include receptor trafficking, axonal transport, membrane diffusion, nuclear transport, protein-DNA interactions, virus trafficking, and the self-organization of subcellular structures. © 2013 American Physical Society.
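The overdamped Brownian description of passive diffusion mentioned above can be checked in a few lines: in free space the mean-squared displacement grows as 2*dim*D*t. The diffusion coefficient and time step below are illustrative, not specific to any system in the review.

```python
import numpy as np

def brownian_msd(n_particles=2000, n_steps=500, dt=0.002, D=1.0, dim=3, seed=0):
    """Free overdamped Brownian motion: per-coordinate increments are
    N(0, 2*D*dt). Returns the mean-squared displacement at the final time."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(0.0, np.sqrt(2.0 * D * dt),
                       size=(n_particles, n_steps, dim))
    final = steps.sum(axis=1)                   # positions at time n_steps*dt
    return np.mean(np.sum(final ** 2, axis=1))

msd = brownian_msd()   # theory: 2 * dim * D * t = 2*3*1.0*1.0 = 6
```

Confinement, crowding, or trapping (the situations surveyed in the review) show up as departures from this linear-in-time baseline.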
STOCHASTIC CHARACTERISTICS AND MODELING OF RELATIVE ...
African Journals Online (AJOL)
Test
Results are highly accurate and promising for all models based on Lewis' criteria. ... hydrological cycle. Future increases in ...
Towards Model Checking Stochastic Process Algebra
Hermanns, H.; Grieskamp, W.; Santen, T.; Katoen, Joost P.; Stoddart, B.; Meyer-Kayser, J.; Siegle, M.
2000-01-01
Stochastic process algebras have been proven useful because they allow behaviour-oriented performance and reliability modelling. As opposed to traditional performance modelling techniques, the behaviour-oriented style supports composition and abstraction in a natural way. However, analysis of ...
Stochastic biomathematical models with applications to neuronal modeling
Batzel, Jerry; Ditlevsen, Susanne
2013-01-01
Stochastic biomathematical models are becoming increasingly important as new light is shed on the role of noise in living systems. In certain biological systems, stochastic effects may even enhance a signal, thus providing a biological motivation for the noise observed in living systems. Recent advances in stochastic analysis and increasing computing power facilitate the analysis of more biophysically realistic models, and this book provides researchers in computational neuroscience and stochastic systems with an overview of recent developments. Key concepts are developed in chapters written by experts in their respective fields. Topics include: one-dimensional homogeneous diffusions and their boundary behavior, large deviation theory and its application in stochastic neurobiological models, a review of mathematical methods for stochastic neuronal integrate-and-fire models, stochastic partial differential equation models in neurobiology, and stochastic modeling of spreading cortical depression.
Stochasticity and determinism in models of hematopoiesis.
Kimmel, Marek
2014-01-01
This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.
Modeling and analysis of stochastic systems
Kulkarni, Vidyadhar G
2011-01-01
Based on the author's more than 25 years of teaching experience, Modeling and Analysis of Stochastic Systems, Second Edition covers the most important classes of stochastic processes used in the modeling of diverse systems, from supply chains and inventory systems to genetics and biological systems. For each class of stochastic process, the text includes its definition, characterization, applications, transient and limiting behavior, first passage times, and cost/reward models. Along with reorganizing the material, this edition revises and adds new exercises and examples. New to the second edition ...
Dynamic-stochastic modeling of snow cover formation on the European territory of Russia
A. N. Gelfan; V. M. Moreido
2014-01-01
A dynamic-stochastic model, which combines a deterministic model of snow cover formation with a stochastic weather generator, has been developed. The deterministic snow model describes temporal change of the snow depth, content of ice and liquid water, snow density, snowmelt, sublimation, re-freezing of melt water, and snow metamorphism. The model has been calibrated and validated against long-term snow measurement data over the territory of European Russia. The model showed good ...
Modelling and application of stochastic processes
1986-01-01
The subject of modelling and application of stochastic processes is too vast to be exhausted in a single volume. In this book, attention is focused on a small subset of this vast subject. The primary emphasis is on realization and approximation of stochastic systems. Recently there has been considerable interest in the stochastic realization problem, and hence, an attempt has been made here to collect in one place some of the more recent approaches and algorithms for solving the stochastic realization problem. Various different approaches for realizing linear minimum-phase systems, linear nonminimum-phase systems, and bilinear systems are presented. These approaches range from time-domain methods to spectral-domain methods. An overview of the chapter contents briefly describes these approaches. Also, in most of these chapters special attention is given to the problem of developing numerically efficient algorithms for obtaining reduced-order (approximate) stochastic realizations. On the application side,...
Numerical Simulation of the Heston Model under Stochastic Correlation
Directory of Open Access Journals (Sweden)
Long Teng
2017-12-01
Stochastic correlation models have become increasingly important in financial markets. In order to price vanilla options in stochastic volatility and correlation models, in this work we study an extension of the Heston model that imposes stochastic correlations driven by a stochastic differential equation. We discuss efficient algorithms for the extended Heston model incorporating stochastic correlations. Our numerical experiments show that the proposed algorithms can efficiently provide highly accurate results for the extended Heston model. By investigating the effect of stochastic correlations on the implied volatility, we find that the performance of the Heston model can be improved by including stochastic correlations.
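An Euler-Maruyama sketch of a Heston-type model in which the correlation itself follows a mean-reverting SDE; the dynamics and every parameter value below are illustrative assumptions, not the authors' exact specification.

```python
import math
import random

def heston_stoch_corr_path(S0=100.0, v0=0.04, r=0.0,
                           kappa=2.0, theta=0.04, xi=0.3,
                           a=1.0, rho_bar=-0.5, sigma_rho=0.2,
                           T=1.0, n=252, seed=7):
    """One Euler-Maruyama path of a Heston-type model with stochastic
    correlation (all parameters hypothetical):
        dS   = r*S dt + sqrt(v)*S dW1
        dv   = kappa*(theta - v) dt + xi*sqrt(v) dW2
        drho = a*(rho_bar - rho) dt + sigma_rho dW3
    where corr(dW1, dW2) = rho_t, clipped to (-0.99, 0.99), and the
    variance is truncated at zero inside the square roots.
    """
    rng = random.Random(seed)
    dt = T / n
    S, v, rho = S0, v0, rho_bar
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        z3 = rng.gauss(0, 1)
        S += r * S * dt + math.sqrt(max(v, 0.0)) * S * math.sqrt(dt) * z1
        v += kappa * (theta - v) * dt + xi * math.sqrt(max(v, 0.0) * dt) * z2
        rho += a * (rho_bar - rho) * dt + sigma_rho * math.sqrt(dt) * z3
        rho = max(-0.99, min(0.99, rho))
    return S, v, rho

S_T, v_T, rho_T = heston_stoch_corr_path()
```

Averaging the discounted payoff of many such paths would give a Monte Carlo vanilla option price for the extended model.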
Scalable inference for stochastic block models
Peng, Chengbin; Zhang, Zhihua; Wong, Ka-Chun; Zhang, Xiangliang; Keyes, David E.
2017-01-01
Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference
Stochastic interest rates model in compounding | Galadima ...
African Journals Online (AJOL)
Stochastic interest rates model in compounding. ... in finance, real estate, insurance, accounting and other areas of business administration. The assumption that future rates are fixed and known with certainty at the beginning of an investment, ...
Moment Closure for the Stochastic Logistic Model
National Research Council Canada - National Science Library
Singh, Abhyudai; Hespanha, Joao P
2006-01-01
..., which we refer to as the moment closure function. In this paper, a systematic procedure for constructing moment closure functions of arbitrary order is presented for the stochastic logistic model...
A stochastic model of enzyme kinetics
Stefanini, Marianne; Newman, Timothy; McKane, Alan
2003-10-01
Enzyme kinetics is generally modeled by deterministic rate equations, and in the simplest case leads to the well-known Michaelis-Menten equation. It is plausible that stochastic effects will play an important role at low enzyme concentrations. We have addressed this by constructing a simple stochastic model which can be exactly solved in the steady-state. Throughout a wide range of parameter values Michaelis-Menten dynamics is replaced by a new and simple theoretical result.
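A minimal Gillespie-type simulation of the enzyme scheme E + S <-> ES -> E + P gives a feel for stochastic effects at low enzyme copy numbers; the rate constants and function name below are illustrative assumptions, not the authors' exact model.

```python
import random

def gillespie_enzyme(E=10, S=100, k1=0.01, km1=0.1, k2=0.1,
                     t_max=200.0, seed=3):
    """Gillespie simulation of E + S <-> ES -> E + P with illustrative
    rate constants.  Returns the final product count P."""
    rng = random.Random(seed)
    ES, P = 0, 0
    t = 0.0
    while t < t_max:
        a1 = k1 * E * S        # binding:     E + S -> ES
        a2 = km1 * ES          # unbinding:   ES -> E + S
        a3 = k2 * ES           # catalysis:   ES -> E + P
        a0 = a1 + a2 + a3
        if a0 == 0:            # no reaction possible
            break
        t += rng.expovariate(a0)           # time to next event
        u = rng.random() * a0              # pick which event fires
        if u < a1:
            E, S, ES = E - 1, S - 1, ES + 1
        elif u < a1 + a2:
            E, S, ES = E + 1, S + 1, ES - 1
        else:
            E, ES, P = E + 1, ES - 1, P + 1
    return P

final_P = gillespie_enzyme()
```

Repeating the run over many seeds and histogramming `final_P` would expose the fluctuations that the deterministic Michaelis-Menten equation averages away.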
Modelling Cow Behaviour Using Stochastic Automata
DEFF Research Database (Denmark)
Jónsson, Ragnar Ingi
This report covers an initial study on the modelling of cow behaviour using stochastic automata with the aim of detecting lameness. Lameness in cows is a serious problem that needs to be dealt with because it results in less profitable production units and in reduced quality of life for the affected livestock. By featuring training data consisting of measurements of cow activity, three different models are obtained, namely an autonomous stochastic automaton, a stochastic automaton with coinciding state and output, and an autonomous stochastic automaton with coinciding state and output, all of which describe the cows' activity in the two regarded behavioural scenarios, non-lame and lame. Using the experimental measurement data, the different behavioural relations for the two regarded behavioural scenarios are assessed. The three models comprise activity within the last hour, activity within the last ...
A stochastic SIS epidemic model with vaccination
Cao, Boqiang; Shan, Meijing; Zhang, Qimin; Wang, Weiming
2017-11-01
In this paper, we investigate the basic features of an SIS type infectious disease model with varying population size and vaccination in the presence of environmental noise. By applying the Markov semigroup theory, we propose a stochastic reproduction number R0s which can serve as a threshold parameter for identifying stochastic extinction and persistence: if R0s < 1, the stochastic epidemic model has a disease-free absorbing set, which implies that the disease dies out with probability one; while if R0s > 1, under some mild extra conditions, the SDE model has an endemic stationary distribution which results in the stochastic persistence of the infectious disease. The most interesting finding is that large environmental noise can suppress the outbreak of the disease.
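The persistence regime described above can be illustrated with a simplified SIS diffusion simulated by Euler-Maruyama; the dynamics below omit the vaccination term, and all parameter values are assumptions for illustration, not the paper's model.

```python
import math
import random

def sis_sde_path(N=1000, I0=10, beta=0.3, gamma=0.1, sigma=0.05,
                 T=100.0, n=10000, seed=11):
    """Euler-Maruyama path of a simplified stochastic SIS model:
        dI = [beta*I*(N-I)/N - gamma*I] dt + sigma*I*(N-I)/N dW
    I is clipped to [0, N] after each step to keep it feasible.
    With beta/gamma = 3 the deterministic endemic level is
    N*(1 - gamma/beta) ~ 667, around which the path fluctuates.
    """
    rng = random.Random(seed)
    dt = T / n
    I = float(I0)
    for _ in range(n):
        drift = beta * I * (N - I) / N - gamma * I
        diff = sigma * I * (N - I) / N
        I += drift * dt + diff * math.sqrt(dt) * rng.gauss(0, 1)
        I = min(max(I, 0.0), float(N))
    return I

I_final = sis_sde_path()
```

Increasing `sigma` pushes the path toward extinction, mirroring the paper's observation that large environmental noise can suppress an outbreak.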
Dynamics of a Stochastic Intraguild Predation Model
Directory of Open Access Journals (Sweden)
Zejing Xing
2016-04-01
Intraguild predation (IGP) is a widespread ecological phenomenon which occurs when one predator species attacks another predator species with which it competes for a shared prey species. The objective of this paper is to study the dynamical properties of a stochastic intraguild predation model. We analyze stochastic persistence and extinction of the stochastic IGP model containing five cases and establish the sufficient criteria for global asymptotic stability of the positive solutions. This study shows that it is possible for the three species to coexist under the influence of environmental noise, and that the noise may have a positive effect for IGP species. A stationary distribution of the stochastic IGP model is established and it has the ergodic property, suggesting that the time average of population size with the development of time is equal to the stationary distribution in space. Finally, we show that our results may be extended to two well-known biological systems: food chains and exploitative competition.
Computer Aided Continuous Time Stochastic Process Modelling
DEFF Research Database (Denmark)
Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay
2001-01-01
A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...
Model Calibration in Watershed Hydrology
Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh
2009-01-01
Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
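The calibration loop described above (adjust parameters until simulated output matches the observed response over a historical period) can be sketched with a toy linear-reservoir model and a brute-force least-squares search; the model, function names, and data below are hypothetical, not from the chapter.

```python
def linear_reservoir(rain, k, s0=0.0):
    """Toy linear-reservoir rainfall-runoff model: storage S receives
    rainfall and releases outflow q = k*S each step."""
    s, out = s0, []
    for p in rain:
        s += p
        q = k * s
        s -= q
        out.append(q)
    return out

def calibrate_k(rain, observed, grid):
    """Pick k minimizing the sum of squared errors (brute-force grid
    search, standing in for the automatic optimizers in the chapter)."""
    best_k, best_sse = None, float("inf")
    for k in grid:
        sim = linear_reservoir(rain, k)
        sse = sum((a - b) ** 2 for a, b in zip(sim, observed))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

# Synthetic twin experiment: generate data with k = 0.3, then recover it.
rain = [5, 0, 10, 2, 0, 0, 8, 1, 0, 0]
obs = linear_reservoir(rain, 0.3)
grid = [i / 100 for i in range(1, 100)]
k_hat = calibrate_k(rain, obs, grid)
```

Because the "observations" are noise-free model output, the search recovers the true parameter exactly; with real data the minimum SSE would be positive and the estimate uncertain.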
Stochastic Wake Modelling Based on POD Analysis
Directory of Open Access Journals (Sweden)
David Bastine
2018-03-01
In this work, large eddy simulation data is analysed to investigate a new stochastic modeling approach for the wake of a wind turbine. The data is generated by the large eddy simulation (LES) model PALM combined with an actuator disk with rotation representing the turbine. After applying a proper orthogonal decomposition (POD), three different stochastic models for the weighting coefficients of the POD modes are deduced, resulting in three different wake models. Their performance is investigated mainly on the basis of aeroelastic simulations of a wind turbine in the wake. Three different load cases and their statistical characteristics are compared for the original LES, truncated PODs, and the stochastic wake models including different numbers of POD modes. It is shown that approximately six POD modes are enough to capture the load dynamics on large temporal scales. Modeling the weighting coefficients as independent stochastic processes leads to similar load characteristics as in the case of the truncated POD. To complete this simplified wake description, we show evidence that the small-scale dynamics can be captured by adding a homogeneous turbulent field to our model. In this way, we present a procedure to derive stochastic wake models from costly computational fluid dynamics (CFD) calculations or elaborate experimental investigations. These numerically efficient models provide the added value of enabling long-term studies. Depending on the aspects of interest, different simplified models may be obtained.
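The POD step described above amounts to an eigen-decomposition of the spatial correlation matrix of the snapshots; a tiny two-point sketch (synthetic data, not the LES fields, and a hypothetical function name) illustrates how one mode can carry most of the energy.

```python
import math

def pod_eigenvalues_2d(snapshots):
    """POD for two-point spatial data: closed-form eigenvalues of the
    2x2 spatial correlation matrix C = (1/n) * sum x x^T."""
    n = len(snapshots)
    c11 = sum(x[0] * x[0] for x in snapshots) / n
    c12 = sum(x[0] * x[1] for x in snapshots) / n
    c22 = sum(x[1] * x[1] for x in snapshots) / n
    # Eigenvalues of [[c11, c12], [c12, c22]] via trace/determinant.
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return tr / 2 + disc, tr / 2 - disc

# Snapshots dominated by the direction (1, 1): the first POD mode
# should capture nearly all of the energy.
snaps = [(1.0, 1.1), (2.0, 1.9), (-1.5, -1.4), (0.5, 0.6), (-2.0, -2.1)]
lam1, lam2 = pod_eigenvalues_2d(snaps)
energy_frac = lam1 / (lam1 + lam2)
```

Truncating to the dominant modes and then modelling their weighting coefficients as stochastic processes is, in miniature, the reduction strategy of the paper.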
Stochastic differential equations used to model conjugation
DEFF Research Database (Denmark)
Philipsen, Kirsten Riber; Christiansen, Lasse Engbo
Stochastic differential equations (SDEs) are used to model horizontal transfer of antibiotic resistance by conjugation. The model describes the concentration of donor, recipient, transconjugants and substrate. The strength of the SDE model over the traditional ODE models is that the noise can ...
Stochastic Modeling of Traffic Air Pollution
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
2014-01-01
In this paper, modeling of traffic air pollution is discussed with special reference to infrastructures. A number of subjects related to health effects of air pollution and the different types of pollutants are briefly presented. A simple model for estimating the social cost of traffic related air pollution is derived. Several authors have published papers on this very complicated subject, but no stochastic modelling procedure has obtained general acceptance. The subject is discussed on the basis of a deterministic model. However, it is straightforward to modify this model to include uncertain parameters and to use simple Monte Carlo techniques to obtain a stochastic estimate of the costs of traffic air pollution for infrastructures.
Stochastic differential equation model to Prendiville processes
Energy Technology Data Exchange (ETDEWEB)
Granita, E-mail: granitafc@gmail.com [Dept. of Mathematical Science, Universiti Teknologi Malaysia, 81310, Johor Malaysia (Malaysia); Bahar, Arifah [Dept. of Mathematical Science, Universiti Teknologi Malaysia, 81310, Johor Malaysia (Malaysia); UTM Center for Industrial & Applied Mathematics (UTM-CIAM) (Malaysia)
2015-10-22
The Prendiville process is another variation of the logistic model which assumes a linearly decreasing population growth rate. It is a continuous time Markov chain (CTMC) taking integer values in a finite interval. The continuous time Markov chain can be approximated by a stochastic differential equation (SDE). This paper discusses the stochastic differential equation of the Prendiville process. The work starts with the forward Kolmogorov equation of the continuous time Markov chain of the Prendiville process, which is then formulated as a central-difference approximation. The approximation is then used in the Fokker-Planck equation to obtain the stochastic differential equation of the Prendiville process. The explicit solution of the Prendiville process is obtained from the stochastic differential equation, so the mean and variance functions of the Prendiville process can be found directly from the explicit solution.
Infinite-degree-corrected stochastic block model
DEFF Research Database (Denmark)
Herlau, Tue; Schmidt, Mikkel Nørgaard; Mørup, Morten
2014-01-01
In stochastic block models, which are among the most prominent statistical models for cluster analysis of complex networks, clusters are defined as groups of nodes with statistically similar link probabilities within and between groups. A recent extension by Karrer and Newman [Karrer and Newman, Phys. Rev. E 83, 016107 (2011)] incorporates node-specific degree parameters. We formulate the degree-corrected stochastic block model as a nonparametric Bayesian model, incorporating a parameter to control the amount of degree correction that can then be inferred from data. Additionally, our formulation yields principled ways of inferring the number of groups as well as predicting missing links ...
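A plain (non-degree-corrected) stochastic block model can be sampled in a few lines, illustrating the "statistically similar link probabilities within and between groups" definition above; the block sizes, probabilities, and function name are arbitrary choices for illustration.

```python
import random

def sample_sbm(block_sizes, p_in, p_out, seed=5):
    """Sample an undirected stochastic block model graph.
    Returns (labels, edges): a community label per node and the edge set."""
    rng = random.Random(seed)
    labels = []
    for b, size in enumerate(block_sizes):
        labels += [b] * size
    n = len(labels)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            # Link probability depends only on the pair of communities.
            p = p_in if labels[i] == labels[j] else p_out
            if rng.random() < p:
                edges.add((i, j))
    return labels, edges

labels, edges = sample_sbm([50, 50], p_in=0.3, p_out=0.02)
within = sum(1 for i, j in edges if labels[i] == labels[j])
between = len(edges) - within
```

With these parameters within-community edges vastly outnumber between-community edges, which is exactly the structure that inference procedures for such models try to recover; a degree-corrected variant would additionally scale each probability by per-node degree parameters.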
From complex to simple: interdisciplinary stochastic models
International Nuclear Information System (INIS)
Mazilu, D A; Zamora, G; Mazilu, I
2012-01-01
We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions for certain physical quantities, such as the time dependence of the length of the microtubules, and diffusion coefficients. The second one is a stochastic adsorption model with applications in surface deposition, epidemics and voter systems. We introduce the ‘empty interval method’ and show sample calculations for the time-dependent particle density. These models can serve as an introduction to the field of non-equilibrium statistical physics, and can also be used as a pedagogical tool to exemplify standard statistical physics concepts, such as random walks or the kinetic approach of the master equation. (paper)
Stochastic volatility models and Kelvin waves
Energy Technology Data Exchange (ETDEWEB)
Lipton, Alex [Merrill Lynch, Mlfc Main, 2 King Edward Street, London EC1A 1HQ (United Kingdom); Sepp, Artur [Merrill Lynch, 4 World Financial Center, New York, NY 10080 (United States)], E-mail: Alex_Lipton@ml.com, E-mail: Artur_Sepp@ml.com
2008-08-29
We use stochastic volatility models to describe the evolution of an asset price, its instantaneous volatility and its realized volatility. In particular, we concentrate on the Stein and Stein model (SSM) (1991) for the stochastic asset volatility and the Heston model (HM) (1993) for the stochastic asset variance. By construction, the volatility is not sign definite in SSM and is non-negative in HM. It is well known that both models produce closed-form expressions for the prices of vanilla options via the Lewis-Lipton formula. However, the numerical pricing of exotic options by means of the finite difference and Monte Carlo methods is much more complex for HM than for SSM. Until now, this complexity was considered to be an acceptable price to pay for ensuring that the asset volatility is non-negative. We argue that having negative stochastic volatility is a psychological rather than financial or mathematical problem, and advocate using SSM rather than HM in most applications. We extend SSM by adding volatility jumps and obtain a closed-form expression for the density of the asset price and its realized volatility. We also show that the current method of choice for solving pricing problems with stochastic volatility (via the affine ansatz for the Fourier-transformed density function) can be traced back to the Kelvin method designed in the 19th century for studying wave motion problems arising in fluid dynamics.
Stochastic Modeling Of Wind Turbine Drivetrain Components
DEFF Research Database (Denmark)
Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard
2014-01-01
Reliable components are needed for wind turbines. In this paper, focus is on the reliability of critical drivetrain components such as bearings and shafts. High failure rates of these components imply a need for more reliable components. To estimate their reliability, stochastic models are needed for initial defects and damage accumulation. In this paper, stochastic models are formulated considering some of the failure modes observed in these components. The models are based on theoretical considerations, manufacturing uncertainties, and size effects at different scales. It is illustrated how ...
Weather Derivatives and Stochastic Modelling of Temperature
Directory of Open Access Journals (Sweden)
Fred Espen Benth
2011-01-01
We propose a continuous-time autoregressive model for temperature dynamics with volatility being the product of a seasonal function and a stochastic process. We use the Barndorff-Nielsen and Shephard model for the stochastic volatility. The proposed temperature dynamics is flexible enough to model temperature data accurately while remaining analytically tractable. Futures prices for commonly traded contracts at the Chicago Mercantile Exchange on indices like cooling- and heating-degree days and cumulative average temperatures are computed, as well as option prices on them.
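A discretized sketch of a CAR(1)-type temperature model with seasonal mean and seasonal volatility, in the spirit of the model above; all parameter values are illustrative, and plain Gaussian noise stands in for the Barndorff-Nielsen and Shephard volatility process.

```python
import math
import random

def simulate_temperature(days=365, kappa=0.2, mean_temp=10.0,
                         amp=12.0, vol0=2.0, vol_amp=1.0, seed=9):
    """Discretized CAR(1)/OU temperature path (hypothetical parameters):
        T_{t+1} = T_t + kappa*(s(t) - T_t) + sigma(t)*eps_t
    where the seasonal mean s(t) and volatility sigma(t) are sinusoids
    with a one-year period and eps_t is standard Gaussian noise.
    """
    rng = random.Random(seed)
    temps = [mean_temp]
    for t in range(days):
        season = mean_temp + amp * math.sin(2 * math.pi * t / 365.0)
        sigma = vol0 + vol_amp * math.sin(2 * math.pi * t / 365.0)
        temps.append(temps[-1] + kappa * (season - temps[-1])
                     + sigma * rng.gauss(0, 1))
    return temps

temps = simulate_temperature()
```

Counting heating-degree days on such simulated paths, e.g. `sum(max(18 - x, 0) for x in temps)`, is the kind of quantity the futures contracts above are written on.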
Stochastic Modelling Of The Repairable System
Directory of Open Access Journals (Sweden)
Andrzejczak Karol
2015-11-01
All reliability models consisting of random time factors form stochastic processes. In this paper we recall the definitions of the most common point processes which are used for modelling of repairable systems. In particular, this paper presents stochastic processes as examples of reliability systems for the support of maintenance-related decisions. We consider the simplest one-unit system with a negligible repair or replacement time, i.e., the unit is operating and is repaired or replaced at failure, where the time required for repair and replacement is negligible. When the repair or replacement is completed, the unit becomes as good as new and resumes operation. The stochastic modelling of repairable systems constitutes an excellent method of supporting maintenance-related decision-making processes and enables their more rational use.
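The one-unit system with negligible repair time described above is a renewal process; for exponential lifetimes the expected number of renewals in [0, t] is t divided by the mean lifetime, which the sketch below (hypothetical parameters) checks by simulation.

```python
import random

def renewal_count(t_max, mean_life, seed=13, n_runs=2000):
    """Average number of renewals (failures followed by instant
    as-good-as-new replacement) in [0, t_max] for a one-unit system
    with exponential lifetimes.  For this Poisson renewal process the
    expectation is t_max / mean_life."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        t, count = 0.0, 0
        while True:
            t += rng.expovariate(1.0 / mean_life)  # next lifetime
            if t > t_max:
                break
            count += 1
        total += count
    return total / n_runs

avg = renewal_count(t_max=100.0, mean_life=10.0)
```

Swapping the exponential for a Weibull lifetime would break the Poisson property and give the aging behaviour that motivates preventive-maintenance policies.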
Compositional Modelling of Stochastic Hybrid Systems
Strubbe, S.N.
2005-01-01
In this thesis we present a modelling framework for compositional modelling of stochastic hybrid systems. Hybrid systems consist of a combination of continuous and discrete dynamics. The state space of a hybrid system is hybrid in the sense that it consists of a continuous component and a discrete component.
Some recent developments in stochastic volatility modelling
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Nicolato, Elisa; Shephard, N.
2002-01-01
This paper reviews and puts in context some of our recent work on stochastic volatility (SV) modelling for financial economics. Here our main focus is on: (i) the relationship between subordination and SV, (ii) OU based volatility models, (iii) exact option pricing, (iv) realized power variation...
Stochastic models for turbulent reacting flows
Energy Technology Data Exchange (ETDEWEB)
Kerstein, A. [Sandia National Laboratories, Livermore, CA (United States)
1993-12-01
The goal of this program is to develop and apply stochastic models of various processes occurring within turbulent reacting flows in order to identify the fundamental mechanisms governing these flows, to support experimental studies of these flows, and to further the development of comprehensive turbulent reacting flow models.
Predicting Footbridge Response using Stochastic Load Models
DEFF Research Database (Denmark)
Pedersen, Lars; Frier, Christian
2013-01-01
Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adopt deterministic models for these parameters. The present paper considers a stochastic approach to modelling the action of pedestrians, but when doing so, decisions need to be made in terms of the statistical distributions of walking parameters and the parameters describing those statistical distributions. The paper explores how sensitive computations of bridge response are to some of the decisions to be made in this respect. This is useful ...
A Stochastic Model for Malaria Transmission Dynamics
Directory of Open Access Journals (Sweden)
Rachel Waema Mbogo
2018-01-01
Malaria is one of the three most dangerous infectious diseases worldwide (along with HIV/AIDS and tuberculosis). In this paper we compare the disease dynamics of deterministic and stochastic models in order to determine the effect of randomness in malaria transmission dynamics. Relationships between the basic reproduction number for malaria transmission between humans and mosquitoes and the extinction thresholds of corresponding continuous-time Markov chain models are derived under certain assumptions. The stochastic model is formulated using the continuous-time discrete-state Galton-Watson branching process (CTDSGWbp). The reproduction number of deterministic models is an essential quantity for predicting whether an epidemic will spread or die out. Thresholds for disease extinction from stochastic models contribute crucial knowledge on disease control, elimination, and mitigation of infectious diseases. Analytical and numerical results show some significant differences in model predictions between the stochastic and deterministic models. In particular, we find that a malaria outbreak is more likely if the disease is introduced by infected mosquitoes as opposed to infected humans. These insights demonstrate the importance of a policy or intervention focusing on controlling the infected mosquito population if the control of malaria is to be realized.
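The extinction threshold of a Galton-Watson branching process is the smallest fixed point of the offspring probability generating function; the sketch below uses Poisson offspring (an assumption for illustration, not the paper's CTDSGWbp model) and computes it by fixed-point iteration.

```python
import math

def extinction_probability(offspring_mean, tol=1e-12):
    """Extinction probability of a Galton-Watson branching process with
    Poisson(offspring_mean) offspring: the smallest fixed point of the
    probability generating function G(s) = exp(m*(s - 1)), found by
    iterating q <- G(q) starting from q = 0."""
    q = 0.0
    for _ in range(10000):
        q_new = math.exp(offspring_mean * (q - 1.0))
        if abs(q_new - q) < tol:
            return q_new
        q = q_new
    return q

q_sub = extinction_probability(0.8)    # subcritical: extinction is certain
q_super = extinction_probability(2.0)  # supercritical: extinction prob < 1
```

The dichotomy mirrors the paper's thresholds: a mean offspring number at or below one forces extinction with probability one, while above one the disease persists with positive probability.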
Modeling stochasticity in biochemical reaction networks
International Nuclear Information System (INIS)
Constantino, P H; Vlysidis, M; Smadbeck, P; Kaznessis, Y N
2016-01-01
Small biomolecular systems are inherently stochastic. Indeed, fluctuations of molecular species are substantial in living organisms and may result in significant variation in cellular phenotypes. The chemical master equation (CME) is the most detailed mathematical model that can describe stochastic behaviors. However, because of its complexity, the CME has been solved for only a few, very small reaction networks. As a result, the contribution of CME-based approaches to biology has been very limited. In this review we discuss the approach of solving the CME by a set of differential equations of probability moments, called moment equations. We present different approaches to produce and to solve these equations, emphasizing the use of factorial moments and the zero information entropy closure scheme. We also provide information on the stability analysis of stochastic systems. Finally, we speculate on the utility of CME-based modeling formalisms, especially in the context of synthetic biology efforts. (topical review)
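For linear propensities the moment hierarchy derived from the CME closes exactly, so no closure scheme is needed; the sketch below (illustrative rates, hypothetical function name) integrates the first moment equation of a linear birth-death process and compares it with the analytic mean.

```python
import math

def mean_via_moment_equation(b, d, x0, t, steps=10000):
    """First moment equation of a linear birth-death process (birth rate
    b and death rate d per individual):
        dM/dt = (b - d) * M,
    integrated with forward Euler.  Because the propensities are linear,
    this matches the exact CME mean x0 * exp((b - d) * t)."""
    m = float(x0)
    dt = t / steps
    for _ in range(steps):
        m += (b - d) * m * dt
    return m

m_euler = mean_via_moment_equation(b=1.0, d=0.5, x0=20, t=2.0)
m_exact = 20 * math.exp(0.5 * 2.0)
```

With nonlinear propensities the equation for the mean would involve the second moment and so on up the hierarchy, which is precisely where closure schemes such as the zero-information-entropy closure enter.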
Stochastic resonance in models of neuronal ensembles
International Nuclear Information System (INIS)
Chialvo, D.R.; Longtin, A.; Mueller-Gerkin, J.
1997-01-01
Two recently suggested mechanisms for the neuronal encoding of sensory information involving the effect of stochastic resonance with aperiodic time-varying inputs are considered. It is shown, using theoretical arguments and numerical simulations, that the nonmonotonic behavior with increasing noise of the correlation measures used for the so-called aperiodic stochastic resonance (ASR) scenario does not rely on the cooperative effect typical of stochastic resonance in bistable and excitable systems. Rather, ASR with slowly varying signals is more properly interpreted as linearization by noise. Consequently, the broadening of the "resonance curve" in the multineuron stochastic resonance without tuning scenario can also be explained by this linearization. Computation of the input-output correlation as a function of both signal frequency and noise for the model system further reveals conditions where noise-induced firing with aperiodic inputs will benefit from stochastic resonance rather than linearization by noise. Thus, our study clarifies the tuning requirements for the optimal transduction of subthreshold aperiodic signals. It also shows that a single deterministic neuron can perform as well as a network when biased into a suprathreshold regime. Finally, we show that the inclusion of a refractory period in the spike-detection scheme produces a better correlation between instantaneous firing rate and input signal. © 1997 The American Physical Society
Stochastic linear programming models, theory, and computation
Kall, Peter
2011-01-01
This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...
Stochastic dynamical models for ecological regime shifts
DEFF Research Database (Denmark)
Møller, Jan Kloppenborg; Carstensen, Jacob; Madsen, Henrik
…the physical and biological knowledge of the system, and nonlinearities introduced here can generate regime shifts or enhance the probability of regime shifts in the case of stochastic models, typically characterized by a threshold value for the known driver. A simple model for light competition between… definition and stability of regimes become less subtle. Ecological regime shifts and their modeling must be viewed in a probabilistic manner, particularly if such model results are to be used in ecosystem management.
Modeling animal movements using stochastic differential equations
Haiganoush K. Preisler; Alan A. Ager; Bruce K. Johnson; John G. Kie
2004-01-01
We describe the use of bivariate stochastic differential equations (SDE) for modeling movements of 216 radiocollared female Rocky Mountain elk at the Starkey Experimental Forest and Range in northeastern Oregon. Spatially and temporally explicit vector fields were estimated using approximating difference equations and nonparametric regression techniques. Estimated...
Stochastic Load Models and Footbridge Response
DEFF Research Database (Denmark)
Pedersen, Lars; Frier, Christian
2015-01-01
Pedestrians may cause vibrations in footbridges and these vibrations may potentially be annoying. This calls for predictions of footbridge vibration levels and the paper considers a stochastic approach to modeling the action of pedestrians assuming walking parameters such as step frequency, pedes...
Stochastic Growth Models with No Discounting
Czech Academy of Sciences Publication Activity Database
Sladký, Karel
2007-01-01
Roč. 15, č. 4 (2007), s. 88-98 ISSN 0572-3043 R&D Projects: GA ČR(CZ) GA402/06/0990; GA ČR GA402/05/0115 Institutional research plan: CEZ:AV0Z10750506 Keywords : economic dynamics * stochastic version of the Ramsey growth model * Markov decision processes Subject RIV: AH - Economics
Stochastic models in reliability and maintenance
2002-01-01
Our daily lives depend on high-technology systems; computer systems are typical examples. We can enjoy our modern lives by using many computer systems. More importantly, we have to maintain such systems without failure, but we cannot predict when such systems will fail or how to fix them without delay. A stochastic process is a set of outcomes of a random experiment indexed by time, and is one of the key tools needed to analyze future behavior quantitatively. Reliability and maintainability technologies are of great interest and importance to the maintenance of such systems. Many mathematical models have been and will be proposed to describe reliability and maintainability systems by using stochastic processes. The theme of this book is "Stochastic Models in Reliability and Maintainability." This book consists of 12 chapters on the theme above from different viewpoints of stochastic modeling. Chapter 1 is devoted to "Renewal Processes," under which cla...
News Impact Curve for Stochastic Volatility Models
Makoto Takahashi; Yasuhiro Omori; Toshiaki Watanabe
2012-01-01
This paper proposes a new method to compute the news impact curve for stochastic volatility (SV) models. The new method incorporates the joint movement of return and volatility, which has been ignored by the extant literature, by simply adding a couple of steps to the Bayesian MCMC estimation procedures for SV models. This simple procedure is versatile and applicable to various SV type models. Contrary to the monotonic news impact functions in the extant literature, the new method gives a U-s...
Model predictive control classical, robust and stochastic
Kouvaritakis, Basil
2016-01-01
For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...
Dynamic optimization deterministic and stochastic models
Hinderer, Karl; Stieglitz, Michael
2016-01-01
This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.
A Simple, Realistic Stochastic Model of Gastric Emptying.
Directory of Open Access Journals (Sweden)
Jiraphat Yokrattanasak
Several models of Gastric Emptying (GE) have been employed in the past to represent the rate of delivery of stomach contents to the duodenum and jejunum. These models have all used a deterministic form (algebraic equations or ordinary differential equations), considering GE as a continuous, smooth process in time. However, GE is known to occur as a sequence of spurts, irregular both in size and in timing. Hence, we formulate a simple stochastic process model, able to represent the irregular decrements of gastric contents after a meal. The model is calibrated on existing literature data and provides consistent predictions of the observed variability in the emptying trajectories. This approach may be useful in metabolic modeling, since it describes well and explains the apparently heterogeneous GE experimental results in situations where common gastric mechanics across subjects would be expected.
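A minimal version of such spurt-wise emptying can be sketched as a compound-Poisson decrement of stomach contents. This is an illustrative toy, not the paper's calibrated model; all parameter names and values are invented.

```python
import random

def gastric_trajectory(v0=500.0, rate=0.2, mean_spurt=15.0, seed=0):
    """Hypothetical spurt-wise emptying: spurt times follow a Poisson
    process (rate per minute) and spurt sizes are exponentially
    distributed (mean in mL).  Returns the piecewise-constant
    trajectory [(t, volume), ...] until the stomach is empty."""
    rng = random.Random(seed)
    t, v = 0.0, v0
    path = [(t, v)]
    while v > 0:
        t += rng.expovariate(rate)                      # waiting time to next spurt
        v = max(0.0, v - rng.expovariate(1.0 / mean_spurt))  # random decrement
        path.append((t, v))
    return path

path = gastric_trajectory()
print(len(path) - 1, path[-1])   # number of spurts, final (t_empty, 0.0)
```

Each realization is a different irregular staircase, which is exactly the kind of variability a deterministic smooth curve cannot represent.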
Stochastic model of radioiodine transport
International Nuclear Information System (INIS)
Schwarz, G.; Hoffman, F.O.
1980-01-01
A research project has been underway at the Oak Ridge National Laboratory with the objective of evaluating dose assessment models and determining the uncertainty associated with the model predictions. This has resulted in the application of methods to propagate uncertainties through models. Some techniques and results related to this problem are discussed.
Chowdhury, A. F. M. K.; Lockart, N.; Willgoose, G. R.; Kuczera, G. A.; Kiem, A.; Nadeeka, P. M.
2016-12-01
One of the key objectives of stochastic rainfall modelling is to capture the full variability of the climate system for future drought and flood risk assessment. However, it is not clear how well these models can capture future climate variability when they are calibrated to Global/Regional Climate Model (GCM/RCM) data, as these datasets are usually available only for very short future periods (e.g. 20 years). This study has assessed the ability of two stochastic daily rainfall models to capture climate variability by calibrating them to a dynamically downscaled RCM dataset in an east Australian catchment for the 1990-2010, 2020-2040, and 2060-2080 epochs. The two stochastic models are: (1) a hierarchical Markov Chain (MC) model, which we developed in a previous study, and (2) a semi-parametric MC model developed by Mehrotra and Sharma (2007). Our hierarchical model uses stochastic parameters of the MC and a Gamma distribution, while the semi-parametric model uses a modified MC process with memory of past periods and kernel density estimation. This study has generated multiple realizations of rainfall series by using parameters of each model calibrated to the RCM dataset for each epoch. The generated rainfall series are used to generate synthetic streamflow by using the SimHyd hydrology model. Assessing the synthetic rainfall and streamflow series, this study has found that both stochastic models can incorporate a range of variability in rainfall as well as streamflow generation for both current and future periods. However, the hierarchical model tends to overestimate the multiyear variability of wet spell lengths (and is therefore less likely to simulate long periods of drought and flood), while the semi-parametric model tends to overestimate the mean annual rainfall depths and streamflow volumes (hence simulated droughts are likely to be less severe). The sensitivity of future drought and flood risk assessment to these limitations of both stochastic models will be discussed.
Stochastic Spectral Descent for Discrete Graphical Models
International Nuclear Information System (INIS)
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; Carin, Lawrence; Cevher, Volkan
2015-01-01
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
Brain-inspired Stochastic Models and Implementations
Al-Shedivat, Maruan
2015-05-12
One of the approaches to building artificial intelligence (AI) is to decipher the principles of brain function and to employ similar mechanisms for solving cognitive tasks, such as visual perception or natural language understanding, using machines. The recent breakthrough, named deep learning, demonstrated that large multi-layer networks of artificial neural-like computing units attain remarkable performance on some of these tasks. Nevertheless, such artificial networks remain very loosely inspired by the brain, whose rich structures and mechanisms may further suggest new algorithms or even new paradigms of computation. In this thesis, we explore brain-inspired probabilistic mechanisms, such as neural and synaptic stochasticity, in the context of generative models. The two questions we ask here are: (i) what kind of models can describe a neural learning system built of stochastic components? and (ii) how can we implement such systems efficiently? To give specific answers, we consider two well-known models and the corresponding neural architectures: the Naive Bayes model implemented with a winner-take-all spiking neural network and the Boltzmann machine implemented in a spiking or non-spiking fashion. We propose and analyze an efficient neuromorphic implementation of the stochastic neural firing mechanism and study the effects of synaptic unreliability on learning generative energy-based models implemented with neural networks.
Stochastic Modelling of Energy Systems
DEFF Research Database (Denmark)
Andersen, Klaus Kaae
2001-01-01
In this thesis dynamic models of typical components in Danish heating systems are considered. Emphasis is made on describing and evaluating mathematical methods for identification of such models, and on presentation of component models for practical applications. The thesis consists of seven research papers (case studies) together with a summary report. Each case study takes its starting point in typical heating system components, and both the applied mathematical modelling methods and the application aspects are considered. The summary report gives an introduction to the scope… A requirement is that the model structure has to be adequate for practical applications, such as system simulation, fault detection and diagnosis, and design of control strategies. This also reflects on the methods used for identification of the component models. The main result from this research is the identification…
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
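The residual-reintroduction idea can be demonstrated with a toy linear model: deterministic predictions have too little spread, and adding resampled calibration residuals restores the distributional properties of the observations. The sketch below uses synthetic data, not the paper's watershed models, and every parameter value is invented.

```python
import random
import statistics

random.seed(42)
# Synthetic "observed" responses with noise.
x = [i / 10 for i in range(100)]
y = [2.0 + 0.5 * xi + random.gauss(0.0, 1.0) for xi in x]

# Calibrate a simple linear model (ordinary least squares).
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

det = [a + b * xi for xi in x]                    # deterministic simulation
res = [yi - di for yi, di in zip(y, det)]         # calibration residuals
stoch = [di + random.choice(res) for di in det]   # residuals reintroduced

print(statistics.stdev(y), statistics.stdev(det), statistics.stdev(stoch))
# stdev(det) underestimates stdev(y); stdev(stoch) is much closer to it.
```

Ignoring the residuals shrinks the spread of the simulated responses; resampling them back in produces stochastic output whose variability mimics the observations, which is the paper's central point.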
Solvable stochastic dealer models for financial markets
Yamada, Kenta; Takayasu, Hideki; Ito, Takatoshi; Takayasu, Misako
2009-05-01
We introduce solvable stochastic dealer models, which can reproduce basic empirical laws of financial markets such as the power law of price change. Starting from the simplest model that is almost equivalent to a Poisson random noise generator, the model becomes fairly realistic by adding only two effects: the self-modulation of transaction intervals and a forecasting tendency, which uses a moving average of the latest market price changes. Based on the present microscopic model of markets, we find a quantitative relation with market potential forces, which have recently been discovered in the study of market price modeling based on random walks.
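A heavily simplified sketch of the two ingredients named above, a noise term plus a forecasting (trend-following) term based on a moving average of recent price changes, can be written as follows. This is an illustrative toy, not the authors' solvable dealer model; all parameter names and values are invented.

```python
import random

def simulate_price(n=10_000, noise=0.01, follow=0.4, window=10, seed=3):
    """Toy price process: each price change is Gaussian noise plus a
    trend-following term proportional to the moving average of the
    last `window` price changes (follow < 1 keeps it stationary)."""
    rng = random.Random(seed)
    prices = [100.0] * (window + 1)
    for _ in range(n):
        changes = [b - a for a, b in zip(prices[-window - 1:-1], prices[-window:])]
        trend = sum(changes) / window          # moving average of recent changes
        prices.append(prices[-1] + follow * trend + rng.gauss(0.0, noise))
    return prices

p = simulate_price()
print(p[-1])
```

With `follow` between 0 and 1 the forecasting term amplifies short-run trends without destabilizing the walk; the full dealer model adds further effects such as self-modulated transaction intervals.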
Modeling and Prediction Using Stochastic Differential Equations
DEFF Research Database (Denmark)
Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp
2016-01-01
Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup… Such a model is deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs…
SR 97. Alternative models project. Stochastic continuum modelling of Aberg
International Nuclear Information System (INIS)
Widen, H.; Walker, D.
1999-08-01
As part of studies into the siting of a deep repository for nuclear waste, the Swedish Nuclear Fuel and Waste Management Company (SKB) has commissioned the Alternative Models Project (AMP). The AMP is a comparison of three alternative modelling approaches to bedrock performance assessment for a single hypothetical repository, arbitrarily named Aberg. The Aberg repository will adopt input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. The models are restricted to an explicit domain, boundary conditions and canister location to facilitate the comparison. The boundary conditions are based on the regional groundwater model provided in digital format. This study is the application of HYDRASTAR, a stochastic continuum groundwater flow and transport-modelling program. The study uses 34 realisations of 945 canister locations in the hypothetical repository to evaluate the uncertainty of the advective travel time, canister flux (Darcy velocity at a canister) and F-ratio. Several comparisons of variability are constructed between individual canister locations and individual realisations. For the ensemble of all realisations with all canister locations, the study found a median travel time of 27 years, a median canister flux of 7.1 × 10⁻⁴ m/yr and a median F-ratio of 3.3 × 10⁵ yr/m. The overall pattern of regional flow is preserved in the site-scale model, as is reflected in flow paths and exit locations. The site-scale model slightly over-predicts the boundary fluxes from the single realisation of the regional model. The explicitly prescribed domain was seen to be slightly restrictive, with 6% of the stream tubes failing to exit the upper surface of the model. Sensitivity analysis and calibration are suggested as possible extensions of the modelling study.
Fuzzy Stochastic Optimization Theory, Models and Applications
Wang, Shuming
2012-01-01
Covering in detail both theoretical and practical perspectives, this book is a self-contained and systematic depiction of current fuzzy stochastic optimization that deploys the fuzzy random variable as a core mathematical tool to model the integrated fuzzy random uncertainty. It proceeds in an orderly fashion from the requisite theoretical aspects of the fuzzy random variable to fuzzy stochastic optimization models and their real-life case studies. The volume reflects the fact that randomness and fuzziness (or vagueness) are two major sources of uncertainty in the real world, with significant implications in a number of settings. In industrial engineering, management and economics, the chances are high that decision makers will be confronted with information that is simultaneously probabilistically uncertain and fuzzily imprecise, and optimization in the form of a decision must be made in an environment that is doubly uncertain, characterized by a co-occurrence of randomness and fuzziness. This book begins...
A two-factor, stochastic programming model of Danish mortgage-backed securities
DEFF Research Database (Denmark)
Nielsen, Søren S.; Poulsen, Rolf
2004-01-01
-trivial, both in terms of deciding on an initial mortgage, and in terms of managing (rebalancing) it optimally.We propose a two-factor, arbitrage-free interest-rate model, calibrated to observable security prices, and implement on top of it a multi-stage, stochastic optimization program with the purpose...
Stochastic models for time series
Doukhan, Paul
2018-01-01
This book presents essential tools for modelling non-linear time series. The first part of the book describes the main standard tools of probability and statistics that directly apply to the time series context to obtain a wide range of modelling possibilities. Functional estimation and bootstrap are discussed, and stationarity is reviewed. The second part describes a number of tools from Gaussian chaos and proposes a tour of linear time series models. It goes on to address nonlinearity from polynomial or chaotic models for which explicit expansions are available, then turns to Markov and non-Markov linear models and discusses Bernoulli shifts time series models. Finally, the volume focuses on the limit theory, starting with the ergodic theorem, which is seen as the first step for statistics of time series. It defines the distributional range to obtain generic tools for limit theory under long or short-range dependences (LRD/SRD) and explains examples of LRD behaviours. More general techniques (central limit ...
A stochastic model for quantum measurement
International Nuclear Information System (INIS)
Budiyono, Agung
2013-01-01
We develop a statistical model of microscopic stochastic deviation from classical mechanics based on a stochastic process with a transition probability that is assumed to be given by an exponential distribution of infinitesimal stationary action. We apply the statistical model to stochastically modify a classical mechanical model for the measurement of physical quantities reproducing the prediction of quantum mechanics. The system+apparatus always has a definite configuration at all times, as in classical mechanics, fluctuating randomly following a continuous trajectory. On the other hand, the wavefunction and quantum mechanical Hermitian operator corresponding to the physical quantity arise formally as artificial mathematical constructs. During a single measurement, the wavefunction of the whole system+apparatus evolves according to a Schrödinger equation and the configuration of the apparatus acts as the pointer of the measurement so that there is no wavefunction collapse. We will also show that while the outcome of each single measurement event does not reveal the actual value of the physical quantity prior to measurement, its average in an ensemble of identical measurements is equal to the average of the actual value of the physical quantity prior to measurement over the distribution of the configuration of the system. (paper)
Field Trial Measurements to Validate a Stochastic Aircraft Boarding Model
Directory of Open Access Journals (Sweden)
Michael Schultz
2018-03-01
Efficient boarding procedures have to consider both operational constraints and individual passenger behavior. In contrast to the aircraft handling processes of fueling, catering and cleaning, the boarding process is driven more by passengers than by airport or airline operators. This paper delivers a comprehensive set of operational data, including classification of boarding times, passenger arrival times, times to store hand luggage, and passenger interactions in the aircraft cabin, as a reliable basis for calibrating models of aircraft boarding. In this paper, a microscopic approach is used to model the passenger behavior, where passenger movement is defined as a one-dimensional, stochastic, time- and space-discrete transition process. This model is used to compare measurements from field trials of boarding procedures with simulation results and demonstrates a deviation smaller than 5%.
Stochastic Modeling of Past Volcanic Crises
Woo, Gordon
2018-01-01
The statistical foundation of disaster risk analysis is past experience. From a scientific perspective, history is just one realization of what might have happened, given the randomness and chaotic dynamics of Nature. Stochastic analysis of the past is an exploratory exercise in counterfactual history, considering alternative possible scenarios. In particular, the dynamic perturbations that might have transitioned a volcano from an unrest to an eruptive state need to be considered. The stochastic modeling of past volcanic crises leads to estimates of eruption probability that can illuminate historical volcanic crisis decisions. It can also inform future economic risk management decisions in regions where there has been some volcanic unrest, but no actual eruption for at least hundreds of years. Furthermore, the availability of a library of past eruption probabilities would provide benchmark support for estimates of eruption probability in future volcanic crises.
A stochastic model for the financial market with discontinuous prices
Directory of Open Access Journals (Sweden)
Leda D. Minkova
1996-01-01
This paper models some situations occurring in the financial market. The asset prices evolve according to a stochastic integral equation driven by a Gaussian martingale. A portfolio process is constrained in such a way that the wealth process covers some obligation. A solution to a linear stochastic integral equation is obtained in a class of càdlàg stochastic processes.
Hidden Symmetries of Stochastic Models
Directory of Open Access Journals (Sweden)
Boyka Aneva
2007-05-01
In the matrix product states approach to $n$-species diffusion processes, the stationary probability distribution is expressed as a matrix product state with respect to a quadratic algebra determined by the dynamics of the process. The quadratic algebra defines a noncommutative space with an $SU_q(n)$ quantum group action as its symmetry. Boundary processes amount to the appearance of parameter-dependent linear terms in the algebraic relations and lead to a reduction of the $SU_q(n)$ symmetry. We argue that the boundary operators of the asymmetric simple exclusion process generate a tridiagonal algebra whose irreducible representations are expressed in terms of the Askey-Wilson polynomials. The Askey-Wilson algebra arises as a symmetry of the boundary problem and allows one to solve the model exactly.
Generation of a stochastic precipitation model for the tropical climate
Ng, Jing Lin; Abd Aziz, Samsuzana; Huang, Yuk Feng; Wayayok, Aimrun; Rowshon, MK
2017-06-01
A tropical country like Malaysia is characterized by intense localized precipitation with temperatures remaining relatively constant throughout the year. A stochastic modeling of precipitation in the flood-prone Kelantan River Basin is particularly challenging due to the high intermittency of precipitation events of the northeast monsoons. There is an urgent need to have long series of precipitation in modeling the hydrological responses. A single-site stochastic precipitation model that includes precipitation occurrence and an intensity model was developed, calibrated, and validated for the Kelantan River Basin. The simulation process was carried out separately for each station without considering the spatial correlation of precipitation. The Markov chains up to the fifth-order and six distributions were considered. The daily precipitation data of 17 rainfall stations for the study period of 1954-2013 were selected. The results suggested that second- and third-order Markov chains were suitable for simulating monthly and yearly precipitation occurrences, respectively. The fifth-order Markov chain resulted in overestimation of precipitation occurrences. For the mean, distribution, and standard deviation of precipitation amounts, the exponential, gamma, log-normal, skew normal, mixed exponential, and generalized Pareto distributions performed superiorly. However, for the extremes of precipitation, the exponential and log-normal distributions were better while the skew normal and generalized Pareto distributions tend to show underestimations. The log-normal distribution was chosen as the best distribution to simulate precipitation amounts. Overall, the stochastic precipitation model developed is considered a convenient tool to simulate the characteristics of precipitation in the Kelantan River Basin.
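The occurrence/intensity structure described above can be sketched with a first-order Markov chain for wet/dry occurrence and log-normal wet-day amounts. Note the paper's calibrated model uses second- and third-order chains; the transition probabilities and distribution parameters below are invented for illustration.

```python
import random

def simulate_rainfall(days=365, p_wd=0.3, p_ww=0.7, mu=1.5, sigma=1.0, seed=7):
    """First-order Markov-chain occurrence model with log-normal
    wet-day amounts (mm).  p_wd = P(wet | dry), p_ww = P(wet | wet);
    mu and sigma parameterize the log-normal intensity distribution."""
    rng = random.Random(seed)
    wet = False
    series = []
    for _ in range(days):
        wet = rng.random() < (p_ww if wet else p_wd)            # occurrence step
        series.append(rng.lognormvariate(mu, sigma) if wet else 0.0)  # amount step
    return series

rain = simulate_rainfall()
wet_days = sum(1 for r in rain if r > 0)
print(wet_days, sum(rain))   # wet-day count and annual total (mm)
```

A higher-order chain replaces the single `wet` flag with the last two or three states, which is what allows the calibrated model to reproduce monthly and yearly occurrence statistics.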
Stochastic Optimization of Wind Turbine Power Factor Using Stochastic Model of Wind Power
DEFF Research Database (Denmark)
Chen, Peiyuan; Siano, Pierluigi; Bak-Jensen, Birgitte
2010-01-01
This paper proposes a stochastic optimization algorithm that aims to minimize the expectation of the system power losses by controlling wind turbine (WT) power factors. This objective of the optimization is subject to the probability constraints of bus voltage and line current requirements. The optimization algorithm utilizes the stochastic models of wind power generation (WPG) and load demand to take into account their stochastic variation. The stochastic model of WPG is developed on the basis of a limited autoregressive integrated moving average (LARIMA) model by introducing a cross-correlation structure to the LARIMA model. The proposed stochastic optimization is carried out on a 69-bus distribution system. Simulation results confirm that, under various combinations of WPG and load demand, the system power losses are considerably reduced with the optimal setting of WT power factor as compared…
Modeling stochastic frontier based on vine copulas
Constantino, Michel; Candido, Osvaldo; Tabak, Benjamin M.; da Costa, Reginaldo Brito
2017-11-01
This article models a production function and analyzes the technical efficiency of listed companies in the United States, Germany and England between 2005 and 2012 based on the vine copula approach. Traditional estimates of the stochastic frontier assume that data are multivariate normally distributed and that there is no source of asymmetry. The proposed method based on vine copulas allows us to explore different types of asymmetry and multivariate distributions. Using data on product, capital and labor, we measure the relative efficiency of the vine production function and estimate the coefficient used in the stochastic frontier literature for comparison purposes. This production vine copula predicts the value added by firms with given capital and labor in a probabilistic way. It thereby stands in sharp contrast to the production function, where the output of firms is completely deterministic. The results show that, on average, S&P500 companies are more efficient than companies listed in England and Germany, which presented similar average efficiency coefficients. For comparative purposes, the traditional stochastic frontier was estimated; the results showed discrepancies between the coefficients obtained by the two methods, traditional and frontier-vine, opening new paths for non-linear research.
Modelling Evolutionary Algorithms with Stochastic Differential Equations.
Heredia, Jorge Pérez
2017-11-20
There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well-established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains, which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that SDEs are especially suitable for the analysis of fixed-budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new, more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics, namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.
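SDE models of the kind described above can be simulated with a standard Euler-Maruyama discretisation. The sketch below uses an illustrative multiplicative-drift diffusion (drift proportional to the remaining distance to the optimum, noise vanishing at the optimum); the coefficients are placeholders, not values derived in the thesis.

```python
import math
import random

def euler_maruyama(x0, drift, diffusion, dt, steps, seed=0):
    """Euler-Maruyama discretisation of dX = drift(X) dt + diffusion(X) dW."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        x = x + drift(x) * dt + diffusion(x) * dw
        x = max(x, 0.0)  # distance to the optimum cannot be negative
        path.append(x)
    return path

# Multiplicative drift: expected progress proportional to the current
# distance, with state-dependent noise (illustrative coefficients).
path = euler_maruyama(x0=100.0,
                      drift=lambda x: -0.05 * x,
                      diffusion=lambda x: 0.2 * math.sqrt(x),
                      dt=1.0, steps=200)
```

The sample path decays roughly exponentially toward the optimum, which is the fixed-budget behaviour the multiplicative drift theorem formalises.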
Models of the stochastic activity of neurones
Holden, Arun Vivian
1976-01-01
These notes have grown from a series of seminars given at Leeds between 1972 and 1975. They represent an attempt to gather together the different kinds of model which have been proposed to account for the stochastic activity of neurones, and to provide an introduction to this area of mathematical biology. A striking feature of the electrical activity of the nervous system is that it appears stochastic: this is apparent at all levels of recording, ranging from intracellular recordings to the electroencephalogram. The chapters start with fluctuations in membrane potential, proceed through single unit and synaptic activity and end with the behaviour of large aggregates of neurones: I have chosen this sequence to suggest that the interesting behaviour of the nervous system - its individuality, variability and dynamic forms - may in part result from the stochastic behaviour of its components. I would like to thank Dr. Julio Rubio for reading and commenting on the drafts, Mrs. Doris Beighton for producing the fin...
Discrete stochastic analogs of Erlang epidemic models.
Getz, Wayne M; Dougherty, Eric R
2018-12-01
Erlang differential equation models of epidemic processes provide more realistic disease-class transition dynamics from susceptible (S) to exposed (E) to infectious (I) and removed (R) categories than the ubiquitous SEIR model. The latter itself lies at one end of the spectrum of Erlang SEmInR models with m concatenated E compartments and n concatenated I compartments. Discrete-time models, however, are computationally much simpler to simulate and fit to epidemic outbreak data than continuous-time differential equations, and are also much more readily extended to include demographic and other types of stochasticity. Here we formulate discrete-time deterministic analogs of the Erlang models, and their stochastic extension, based on a time-to-go distributional principle. Depending on which distributions are used (e.g. discretized Erlang, Gamma, Beta, or Uniform distributions), we demonstrate that our formulation represents both a discretization of Erlang epidemic models and generalizations thereof. We consider the challenges of fitting SEmInR models and our discrete-time analog to data (the recent outbreak of Ebola in Liberia). We demonstrate that the latter performs much better than the former; confining fits to strict SEIR formulations reduces the numerical challenges, but sacrifices best-fit likelihood scores by at least 7%.
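A minimal deterministic sketch of the staged (Erlang) discrete-time construction: each of m exposed and n infectious sub-compartments passes a fixed fraction of its occupants to the next stage per step, so total dwell times follow discretised Erlang distributions. The parameter values are illustrative, not fitted to the Liberia data, and the paper's full time-to-go formulation is more general.

```python
import math

def seir_step(S, E, I, R, beta, rho_E, rho_I):
    """One deterministic discrete-time step of a staged SEmInR model.

    E and I are lists of m and n sub-compartments.  A fraction rho_E
    (rho_I) of each stage advances to the next one per step, so the total
    dwell time in E (in I) is a discretised Erlang distribution.
    """
    N = S + sum(E) + sum(I) + R
    new_E = S * (1.0 - math.exp(-beta * sum(I) / N))  # new exposures this step
    out_E = [rho_E * e for e in E]
    out_I = [rho_I * i for i in I]
    E_next = [E[k] - out_E[k] + (new_E if k == 0 else out_E[k - 1])
              for k in range(len(E))]
    I_next = [I[k] - out_I[k] + (out_E[-1] if k == 0 else out_I[k - 1])
              for k in range(len(I))]
    return S - new_E, E_next, I_next, R + out_I[-1]

# m = n = 3 stages, one seed infective in a closed population of 1000.
state = (999.0, [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 0.0)
for _ in range(100):
    state = seir_step(*state, beta=0.4, rho_E=0.5, rho_I=0.3)
```

Each step conserves the total population, and the stochastic extension replaces the deterministic fractional flows with binomial draws.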
Mathematical models of information and stochastic systems
Kornreich, Philipp
2008-01-01
From ancient soothsayers and astrologists to today's pollsters and economists, probability theory has long been used to predict the future on the basis of past and present knowledge. Mathematical Models of Information and Stochastic Systems shows that the amount of knowledge about a system plays an important role in the mathematical models used to foretell the future of the system. It explains how this known quantity of information is used to derive a system's probabilistic properties. After an introduction, the book presents several basic principles that are employed in the remainder of the t
Stochastic model of energetic nuclear reactor
International Nuclear Information System (INIS)
Bojko, R.V.; Ryazanov, V.V.
2002-01-01
Behaviour of nuclear reactor was treated using the theory of branching processes. As mathematical model descriptive the neutron number in time the Markov occasional process is proposed. Application of branching occasional processes with variable regime to the description of neutron behaviour in the reactor makes possible conducting strong description of critical operation regime and demonstrates the severity of the process. Three regimes of the critical behaviour depending on the sign of manipulated variables and feedbacks were discovered. Probability regularities peculiar to the behaviour of the reactor are embodied to the suggested stochastic model [ru
Predicting extinction rates in stochastic epidemic models
International Nuclear Information System (INIS)
Schwartz, Ira B; Billings, Lora; Dykman, Mark; Landsman, Alexandra
2009-01-01
We investigate the stochastic extinction processes in a class of epidemic models. Motivated by the process of natural disease extinction in epidemics, we examine the rate of extinction as a function of disease spread. We show that the effective entropic barrier for extinction in a susceptible–infected–susceptible epidemic model displays scaling with the distance to the bifurcation point, with an unusual critical exponent. We make a direct comparison between predictions and numerical simulations. We also consider the effect of non-Gaussian vaccine schedules, and show numerically how the extinction process may be enhanced when the vaccine schedules are Poisson distributed
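Extinction times of the kind studied above can be sampled directly with a Gillespie (stochastic simulation) algorithm for the SIS model. The rates below are illustrative, not the paper's parameters, and no vaccination schedule is included in this sketch.

```python
import random

def sis_extinction_time(N, beta, gamma, I0, seed=0, t_max=1e4):
    """Gillespie simulation of a stochastic SIS model in a population of N.

    Returns the time at which the infection dies out (or t_max if it
    has not died out by then).
    """
    rng = random.Random(seed)
    I, t = I0, 0.0
    while I > 0 and t < t_max:
        rate_inf = beta * I * (N - I) / N   # S -> I events
        rate_rec = gamma * I                # I -> S events
        total = rate_inf + rate_rec
        t += rng.expovariate(total)         # time to next event
        if rng.random() < rate_inf / total:
            I += 1
        else:
            I -= 1
    return t

# Close to the bifurcation point (R0 = beta/gamma = 1.2), extinction is fast.
t_ext = sis_extinction_time(N=50, beta=1.2, gamma=1.0, I0=5)
```

Averaging such samples over many seeds gives the extinction-rate estimates that the paper compares against its analytical scaling prediction.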
Study on individual stochastic model of GNSS observations for precise kinematic applications
Próchniewicz, Dominik; Szpunar, Ryszard
2015-04-01
The proper definition of the mathematical positioning model, which comprises functional and stochastic models, is a prerequisite for obtaining the optimal estimation of the unknown parameters. Especially important in this definition is realistic modelling of the stochastic properties of the observations, which are more receiver-dependent and time-varying than the deterministic relationships. This is particularly true for precise kinematic applications, which are characterized by weakened model strength. In this case, an incorrect or simplified definition of the stochastic model means that the performance of ambiguity resolution and the accuracy of position estimation can be limited. In this study we investigate methods of describing the measurement noise of GNSS observations and its impact on deriving a precise kinematic positioning model. In particular, stochastic modelling of the individual components of the variance-covariance matrix of the observation noise, performed using observations from a very short baseline and a laboratory GNSS signal generator, is analyzed. Experimental test results indicate that utilizing an individual stochastic model of observations, including elevation dependency and cross-correlation, instead of assuming that raw measurements are independent with the same variance, improves the performance of ambiguity resolution as well as rover positioning accuracy. This shows that the proposed stochastic assessment method could be an important part of a complex calibration procedure for GNSS equipment.
Calibration of PMIS pavement performance prediction models.
2012-02-01
Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...
Modelling conjugation with stochastic differential equations
DEFF Research Database (Denmark)
Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Hasman, Henrik
2010-01-01
Conjugation is an important mechanism involved in the transfer of resistance between bacteria. In this article a stochastic differential equation based model consisting of a continuous time state equation and a discrete time measurement equation is introduced to model growth and conjugation of two Enterococcus faecium strains in a rich exhaustible media. The model contains a new expression for a substrate dependent conjugation rate. A maximum likelihood based method is used to estimate the model parameters. Different models including different noise structure for the system and observations are compared using a likelihood-ratio test and Akaike's information criterion. Experiments indicating conjugation on the agar plates selecting for transconjugants motivates the introduction of an extended model, for which conjugation on the agar plate is described in the measurement equation. This model is compared...
Explicit calibration and simulation of stochastic fields by low-order ARMA processes
DEFF Research Database (Denmark)
Krenk, Steen
2011-01-01
A simple framework for autoregressive simulation of stochastic fields is presented. The autoregressive format leads to a simple exponential correlation structure in the time-dimension. In the case of scalar processes a more detailed correlation structure can be obtained by adding memory to the process via an extension to autoregressive moving average (ARMA) processes. The ARMA format incorporates a more detailed correlation structure by including previous values of the simulated process. Alternatively, a more detailed correlation structure can be obtained by including additional 'state-space' variables in the simulation. For a scalar process this would imply an increase of the dimension of the process to be simulated. In the case of a stochastic field the correlation in the time-dimension is represented, although indirectly, in the simultaneous spatial correlation. The model with the shortest...
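The basic AR(1) building block with its exponential correlation structure R(tau) = exp(-tau/theta) can be sketched as follows (unit stationary variance is assumed for simplicity; the paper's ARMA extension adds moving-average terms on top of this):

```python
import math
import random

def simulate_ar1(corr_time, dt, steps, seed=0):
    """AR(1) simulation of a stationary Gaussian process with unit variance
    and exponential correlation R(tau) = exp(-tau / corr_time)."""
    rng = random.Random(seed)
    phi = math.exp(-dt / corr_time)          # one-step autoregression coefficient
    noise_std = math.sqrt(1.0 - phi * phi)   # keeps the stationary variance at 1
    x = rng.gauss(0.0, 1.0)                  # draw from the stationary distribution
    path = [x]
    for _ in range(steps):
        x = phi * x + noise_std * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = simulate_ar1(corr_time=10.0, dt=1.0, steps=5000)
```

The choice of noise standard deviation ties the one-step recursion exactly to the target stationary variance, so no burn-in period is needed.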
Stochastic Model Checking of the Stochastic Quality Calculus
DEFF Research Database (Denmark)
Nielson, Flemming; Nielson, Hanne Riis; Zeng, Kebin
2015-01-01
The Quality Calculus uses quality binders for input to express strategies for continuing the computation even when the desired input has not been received. The Stochastic Quality Calculus adds generally distributed delays for output actions and real-time constraints on the quality binders for input. This gives rise to Generalised Semi-Markov Decision Processes for which few analytical techniques are available. We restrict delays on output actions to be exponentially distributed while still admitting real-time constraints on the quality binders. This facilitates developing analytical techniques based...
International Nuclear Information System (INIS)
Eriksson, L.O.; Oppelstrup, J.
1994-12-01
A simulator for 2D stochastic continuum simulation and inverse modelling of groundwater flow has been developed. The simulator is well suited for method evaluation and what-if simulation and written in MATLAB. Conductivity fields are generated by unconditional simulation, conditional simulation on measured conductivities and calibration on both steady-state head measurements and transient head histories. The fields can also include fracture zones and zones with different mean conductivities. Statistics of conductivity fields and particle travel times are recorded in Monte-Carlo simulations. The calibration uses the pilot point technique, an inverse technique proposed by RamaRao and LaVenue. Several Kriging procedures are implemented, among others Kriging neighborhoods. In cases where the expectation of the log-conductivity in the truth field is known the nonbias conditions can be omitted, which will make the variance in the conditionally simulated conductivity fields smaller. A simulation experiment, resembling the initial stages of a site investigation and devised in collaboration with SKB, is performed and interpreted. The results obtained in the present study show less uncertainty than in our preceding study. This is mainly due to the modification of the Kriging procedure but also to the use of more data. Still the large uncertainty in cases of sparse data is apparent. The variogram represents essential characteristics of the conductivity field. Thus, even unconditional simulations take account of important information. Significant improvements in variance by further conditioning will be obtained only as the number of data becomes much larger. 16 refs, 26 figs
Error-in-variables models in calibration
Lira, I.; Grientschnig, D.
2017-12-01
In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
Modelling the stochastic behaviour of primary nucleation.
Maggioni, Giovanni Maria; Mazzotti, Marco
2015-01-01
We study the stochastic nature of primary nucleation and how it manifests itself in a crystallisation process at different scales and under different operating conditions. Such characteristics of nucleation are evident in many experiments where detection times of crystals are not identical, despite identical experimental conditions, but instead are distributed around an average value. While abundant experimental evidence has been reported in the literature, a clear theoretical understanding and an appropriate modelling of this feature is still missing. In this contribution, we present two models describing a batch cooling crystallisation, where the interplay between stochastic nucleation and deterministic crystal growth is described differently in each. The nucleation and growth rates of the two models are estimated by a comprehensive set of measurements of paracetamol crystallisation from aqueous solution in a 1 mL vessel [Kadam et al., Chemical Engineering Science, 2012, 72, 10-19]. Both models are applied to the cooling crystallisation process above under different operating conditions, i.e. different volumes, initial concentrations, cooling rates. The advantages and disadvantages of the two approaches are illustrated and discussed, with particular reference to their use across scales of nucleation rate measured in very small crystallisers.
Characterizing economic trends by Bayesian stochastic model specification search
Grassi, Stefano; Proietti, Tommaso
2010-01-01
We apply a recently proposed Bayesian model selection technique, known as stochastic model specification search, for characterising the nature of the trend in macroeconomic time series. We illustrate that the methodology can be quite successfully applied to discriminate between stochastic and deterministic trends. In particular, we formulate autoregressive models with stochastic trends components and decide on whether a specific feature of the series, i.e. the underlying level and/or the rate...
Scalable inference for stochastic block models
Peng, Chengbin
2017-12-08
Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.
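For reference, sampling from a stochastic block model is straightforward. The sketch below generates an undirected two-block graph and checks that within-block edges dominate; the block sizes and probabilities are illustrative, and no inference (the hard part addressed by the paper) is attempted here.

```python
import random

def sample_sbm(block_sizes, p_in, p_out, seed=0):
    """Sample an undirected graph from a stochastic block model: nodes in
    the same block connect with probability p_in, otherwise p_out."""
    rng = random.Random(seed)
    labels = [b for b, size in enumerate(block_sizes) for _ in range(size)]
    n = len(labels)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if labels[i] == labels[j] else p_out
            if rng.random() < p:
                edges.append((i, j))
    return labels, edges

labels, edges = sample_sbm([50, 50], p_in=0.3, p_out=0.02)
within = sum(1 for i, j in edges if labels[i] == labels[j])
```

Maximum likelihood inference, as in the paper, runs this generative story in reverse: given only the edges, it recovers the latent labels and the connection probabilities.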
Intimate Partner Violence: A Stochastic Model.
Guidi, Elisa; Meringolo, Patrizia; Guazzini, Andrea; Bagnoli, Franco
2017-01-01
Intimate partner violence (IPV) has been a well-studied problem in the past psychological literature, especially through its classical methodology such as qualitative, quantitative and mixed methods. This article introduces two basic stochastic models as an alternative approach to simulate the short- and long-term dynamics of a couple at risk of IPV. In both models, the members of the couple may assume a finite number of states, updating them in a probabilistic way at discrete time steps. After defining the transition probabilities, we first analyze the evolution of the couple in isolation and then consider the case in which the individuals modify their behavior depending on the perceived violence from other couples in their environment or based on perceived informal social support. While high perceived violence in other couples may converge toward the couple's own presence of IPV by means of gender-specific transmission, the gender differences fade out in the case of received informal social support. Despite the simplicity of the two stochastic models, they generate results which compare well with past experimental studies of IPV, and they have important practical implications for prevention interventions in this field. Copyright: © 2016 by Fabrizio Serra editore, Pisa · Roma.
Regionalization of the Modified Bartlett-Lewis Rectangular Pulse Stochastic Rainfall Model
Dongkyun Kim; Francisco Olivera; Huidae Cho; Scott A. Socolofsky
2013-01-01
Parameters of the Modified Bartlett-Lewis Rectangular Pulse (MBLRP) stochastic rainfall simulation model were regionalized across the contiguous United States. Three thousand four hundred forty-four National Climate Data Center (NCDC) rain gauges were used to obtain spatial and seasonal patterns of the model parameters. The MBLRP model was calibrated to minimize the discrepancy between the precipitation depth statistics of the observed and MBLRP-generated precipitation time series. These...
Stochastic Model of TCP SYN Attacks
Directory of Open Access Journals (Sweden)
Simona Ramanauskaitė
2011-08-01
A great proportion of essential services are moving into internet space, making the threat of DoS attacks ever more real. Estimating the real risk of some kind of denial-of-service (DoS) attack in the real world is difficult, but mathematical and software models make this task easier. In this paper we review ways of implementing DoS attack models and offer a stochastic model of a SYN flooding attack. It allows evaluating the potential threat of SYN flooding attacks, taking into account both the legitimate system flow and the possible attack power. At the same time, we can assess the effect of parameters such as buffer capacity, open-connection storage in the buffer, or filtering efficiency on the success of different SYN flooding attacks. This model can be used for other types of memory-depletion denial-of-service attacks. Article in Lithuanian.
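A toy discrete-time version of such a buffer-depletion model can be sketched as follows. The capacity, arrival, and timeout parameters are illustrative placeholders, and the mechanics are much simpler than the paper's stochastic model; the point is only to show how buffer capacity and timeout probability trade off against attack intensity.

```python
import random

def syn_flood_sim(capacity, attack_per_step, p_legit, p_timeout, steps, seed=0):
    """Toy discrete-time model of a SYN backlog buffer under a flooding attack.

    Each step, half-open entries time out independently with probability
    p_timeout, attack SYNs fill the freed slots, and a legitimate SYN
    arrives with probability p_legit.  Returns the legitimate drop rate.
    """
    rng = random.Random(seed)
    occupied, legit_tries, legit_drops = 0, 0, 0
    for _ in range(steps):
        # half-open entries time out independently this step
        occupied = sum(1 for _ in range(occupied) if rng.random() >= p_timeout)
        # attack SYNs claim slots up to the buffer capacity
        occupied = min(capacity, occupied + attack_per_step)
        if rng.random() < p_legit:
            legit_tries += 1
            if occupied < capacity:
                occupied += 1    # legitimate half-open connection accepted
            else:
                legit_drops += 1 # buffer exhausted: legitimate SYN dropped
    return legit_drops / max(1, legit_tries)

drop_rate = syn_flood_sim(capacity=100, attack_per_step=50,
                          p_legit=0.5, p_timeout=0.1, steps=500)
```

With the attack intensity well above what timeouts can clear, the buffer saturates and nearly all legitimate SYNs are dropped; with no attack, the same buffer never fills.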
Stochastic approaches to inflation model building
International Nuclear Information System (INIS)
Ramirez, Erandy; Liddle, Andrew R.
2005-01-01
While inflation gives an appealing explanation of observed cosmological data, there are a wide range of different inflation models, providing differing predictions for the initial perturbations. Typically models are motivated either by fundamental physics considerations or by simplicity. An alternative is to generate large numbers of models via a random generation process, such as the flow equations approach. The flow equations approach is known to predict a definite structure to the observational predictions. In this paper, we first demonstrate a more efficient implementation of the flow equations exploiting an analytic solution found by Liddle (2003). We then consider alternative stochastic methods of generating large numbers of inflation models, with the aim of testing whether the structures generated by the flow equations are robust. We find that while typically there remains some concentration of points in the observable plane under the different methods, there is significant variation in the predictions amongst the methods considered
Stochastic modeling of financial electricity contracts
International Nuclear Information System (INIS)
Benth, Fred Espen; Koekebakker, Steen
2008-01-01
We discuss the modeling of electricity contracts traded in many deregulated power markets. These forward/futures type contracts deliver (either physically or financially) electricity over a specified time period, and are frequently referred to as swaps since they in effect represent an exchange of fixed for floating electricity price. We propose to use the Heath-Jarrow-Morton approach to model swap prices, since the notion of a spot price is not easily defined in these markets. For general stochastic dynamical models, we connect the spot price, the instantaneous-delivery forward price and the swap price, and analyze two different ways to apply the Heath-Jarrow-Morton approach to swap pricing: either one specifies a dynamics for the non-existing instantaneous-delivery forwards and derives the implied swap dynamics, or one models directly on the swaps. The former is shown to lead to quite complicated stochastic models for the swap price, even when the forward dynamics is simple. The latter has some theoretical problems due to a no-arbitrage condition that has to be satisfied for swaps with overlapping delivery periods. To overcome this problem, a practical modeling approach is analyzed. The market is supposed only to consist of non-overlapping swaps, and these are modelled directly. A thorough empirical study is performed using data collected from Nord Pool. Our investigations demonstrate that it is possible to state reasonable models for the swap price dynamics that are analytically tractable for risk management and option pricing purposes; however, this is an area of further research. (author)
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
Stochastic modeling of thermal fatigue crack growth
Radu, Vasile
2015-01-01
The book describes a systematic stochastic modeling approach for assessing thermal-fatigue crack-growth in mixing tees, based on the power spectral density of temperature fluctuation at the inner pipe surface. It shows the development of a frequency-temperature response function in the framework of single-input, single-output (SISO) methodology from random noise/signal theory under sinusoidal input. The frequency response of stress intensity factor (SIF) is obtained by a polynomial fitting procedure of thermal stress profiles at various instants of time. The method, which takes into account the variability of material properties, and has been implemented in a real-world application, estimates the probabilities of failure by considering a limit state function and Monte Carlo analysis, which are based on the proposed stochastic model. Written in a comprehensive and accessible style, this book presents a new and effective method for assessing thermal fatigue crack, and it is intended as a concise and practice-or...
Iowa calibration of MEPDG performance prediction models.
2013-06-01
This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...
Hopf bifurcation of the stochastic model on business cycle
International Nuclear Information System (INIS)
Xu, J; Wang, H; Ge, G
2008-01-01
A stochastic model of the business cycle is presented in this paper. Simplifying the model through quasi-Hamiltonian theory, an Itô diffusion process was obtained. According to Oseledec multiplicative ergodic theory and singular boundary theory, the conditions for local and global stability were acquired. Solving the stationary FPK equation and analyzing the stationary probability density, the stochastic Hopf bifurcation was explained. The result indicated that the change of the parameter a was the key factor in the appearance of the stochastic Hopf bifurcation
Double diffusivity model under stochastic forcing
Chattopadhyay, Amit K.; Aifantis, Elias C.
2017-05-01
The "double diffusivity" model was proposed in the late 1970s, and reworked in the early 1980s, as a continuum counterpart to existing discrete models of diffusion corresponding to high diffusivity paths, such as grain boundaries and dislocation lines. It was later rejuvenated in the 1990s to interpret experimental results on diffusion in polycrystalline and nanocrystalline specimens where grain boundaries and triple grain boundary junctions act as high diffusivity paths. Technically, the model pans out as a system of coupled Fick-type diffusion equations to represent "regular" and "high" diffusivity paths with "source terms" accounting for the mass exchange between the two paths. The model remit was extended by analogy to describe flow in porous media with double porosity, as well as to model heat conduction in media with two nonequilibrium local temperature baths, e.g., ion and electron baths. Uncoupling of the two partial differential equations leads to a higher-ordered diffusion equation, solutions of which could be obtained in terms of classical diffusion equation solutions. Similar equations could also be derived within an "internal length" gradient (ILG) mechanics formulation applied to diffusion problems, i.e., by introducing nonlocal effects, together with inertia and viscosity, in a mechanics based formulation of diffusion theory. While being remarkably successful in studies related to various aspects of transport in inhomogeneous media with deterministic microstructures and nanostructures, its implications in the presence of stochasticity have not yet been considered. This issue becomes particularly important in the case of diffusion in nanopolycrystals whose deterministic ILG-based theoretical calculations predict a relaxation time that is only about one-tenth of the actual experimentally verified time scale. This article provides the "missing link" in this estimation by adding a vital element in the ILG structure, that of stochasticity, that takes into
Stochastic stability and bifurcation in a macroeconomic model
International Nuclear Information System (INIS)
Li Wei; Xu Wei; Zhao Junfeng; Jin Yanfei
2007-01-01
On the basis of the work of Goodwin and Puu, a new business cycle model subject to a stochastic parametric excitation is derived in this paper. At first, we reduce the model to a one-dimensional diffusion process by applying the stochastic averaging method of quasi-nonintegrable Hamiltonian systems. Secondly, we utilize the methods of Lyapunov exponents and boundary classification associated with diffusion processes, respectively, to analyze the stochastic stability of the trivial solution of the system. The numerical results obtained illustrate that the trivial solution of the system must be globally stable if it is locally stable in the state space. Thirdly, we explore the stochastic Hopf bifurcation of the business cycle model according to the qualitative changes in the stationary probability density of the system response. It is concluded that the stochastic Hopf bifurcation occurs at two critical parametric values. Finally, some explanations are given in a simple way on the potential applications of stochastic stability and bifurcation analysis.
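The stochastic-averaging machinery of the paper does not fit in a few lines, but the notion of a sample-path Lyapunov exponent that underlies its stability analysis can be illustrated on a scalar linear SDE with parametric noise, dX = aX dt + σX dW, whose top Lyapunov exponent is known exactly to be a − σ²/2. The parameter values below are arbitrary.

```python
import numpy as np

# Sample-path estimate of the top Lyapunov exponent of
#   dX = a*X dt + sigma*X dW   (parametric/multiplicative noise),
# for which lambda = a - sigma**2/2 exactly.  lambda < 0 means the trivial
# solution X = 0 is almost surely (locally) stable.

rng = np.random.default_rng(0)
a, sigma = -0.2, 0.5              # illustrative values; lambda = -0.325
dt, nsteps = 1e-3, 1_000_000

z = rng.standard_normal(nsteps)
# Euler-Maruyama growth factors X_{n+1}/X_n of the linearized system:
growth = 1.0 + a*dt + sigma*np.sqrt(dt)*z
lam = np.log(np.abs(growth)).mean() / dt   # time-averaged exponential rate
```

The −σ²/2 correction relative to the deterministic rate a is the same noise-induced stabilization effect that makes stochastic stability analysis different from its deterministic counterpart.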
Stochastic inverse problems: Models and metrics
International Nuclear Information System (INIS)
Sabbagh, Elias H.; Sabbagh, Harold A.; Murphy, R. Kim; Aldrin, John C.; Annis, Charles; Knopp, Jeremy S.
2015-01-01
In past work, we introduced model-based inverse methods, and applied them to problems in which the anomaly could be reasonably modeled by simple canonical shapes, such as rectangular solids. In these cases the parameters to be inverted would be length, width and height, as well as the occasional probe lift-off or rotation. We are now developing a formulation that allows more flexibility in modeling complex flaws. The idea consists of expanding the flaw in a sequence of basis functions, and then solving for the expansion coefficients of this sequence, which are modeled as independent random variables, uniformly distributed over their range of values. There are a number of applications of such modeling: 1. Connected cracks and multiple half-moons, which we have noted in a POD set. Ideally we would like to distinguish connected cracks from one long shallow crack. 2. Cracks of irregular profile and shape which have appeared in cold work holes during bolt-hole eddy-current inspection. One side of such cracks is much deeper than the other. 3. L- or C-shaped crack profiles at the surface, examples of which have been seen in bolt-hole cracks. By formulating problems in a stochastic sense, we are able to leverage the stochastic global optimization algorithms in NLSE, which is resident in VIC-3D®, to answer questions of global minimization and to compute confidence bounds using the sensitivity coefficient that we get from NLSE. We will also address the issue of surrogate functions which are used during the inversion process, and how they contribute to the quality of the estimation of the bounds.
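The basis-expansion formulation can be caricatured in a few lines. The sketch below is emphatically not the NLSE/VIC-3D® solver referenced in the abstract: it uses a made-up 1-D "flaw profile", two sine basis functions, and a crude uniform random search in coefficient space, purely to show how the uniformly distributed expansion coefficients enter a stochastic inversion.

```python
import numpy as np

# Toy basis-expansion inversion: a 1-D flaw profile is written as
#   f(x) = a1*sin(pi*x) + a2*sin(2*pi*x),
# with coefficients treated as independent random variables uniform on [-1, 1].
# A stochastic global search recovers them from noiseless synthetic data.

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 101)
basis = np.stack([np.sin(np.pi*x), np.sin(2*np.pi*x)])   # shape (2, 101)

a_true = np.array([0.6, -0.3])            # hypothetical "true" flaw
data = a_true @ basis                     # synthetic measurement

# Stochastic global search: sample coefficient vectors uniformly, keep the best.
cand = rng.uniform(-1.0, 1.0, size=(20000, 2))
misfit = ((cand @ basis - data)**2).mean(axis=1)
a_best = cand[np.argmin(misfit)]
```

In a realistic setting the uniform prior on the coefficients is also what lets one propagate uncertainty into confidence bounds on the reconstructed flaw.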
Stochastic inverse problems: Models and metrics
Sabbagh, Elias H.; Sabbagh, Harold A.; Murphy, R. Kim; Aldrin, John C.; Annis, Charles; Knopp, Jeremy S.
2015-03-01
In past work, we introduced model-based inverse methods, and applied them to problems in which the anomaly could be reasonably modeled by simple canonical shapes, such as rectangular solids. In these cases the parameters to be inverted would be length, width and height, as well as the occasional probe lift-off or rotation. We are now developing a formulation that allows more flexibility in modeling complex flaws. The idea consists of expanding the flaw in a sequence of basis functions, and then solving for the expansion coefficients of this sequence, which are modeled as independent random variables, uniformly distributed over their range of values. There are a number of applications of such modeling: 1. Connected cracks and multiple half-moons, which we have noted in a POD set. Ideally we would like to distinguish connected cracks from one long shallow crack. 2. Cracks of irregular profile and shape which have appeared in cold work holes during bolt-hole eddy-current inspection. One side of such cracks is much deeper than the other. 3. L- or C-shaped crack profiles at the surface, examples of which have been seen in bolt-hole cracks. By formulating problems in a stochastic sense, we are able to leverage the stochastic global optimization algorithms in NLSE, which is resident in VIC-3D®, to answer questions of global minimization and to compute confidence bounds using the sensitivity coefficient that we get from NLSE. We will also address the issue of surrogate functions which are used during the inversion process, and how they contribute to the quality of the estimation of the bounds.
Stochastic models for surface diffusion of molecules
Energy Technology Data Exchange (ETDEWEB)
Shea, Patrick, E-mail: patrick.shea@dal.ca; Kreuzer, Hans Jürgen [Department of Physics and Atmospheric Science, Dalhousie University, Halifax, Nova Scotia B3H 3J5 (Canada)
2014-07-28
We derive a stochastic model for the surface diffusion of molecules, starting from the classical equations of motion for an N-atom molecule on a surface. The equation of motion becomes a generalized Langevin equation for the center of mass of the molecule, with a non-Markovian friction kernel. In the Markov approximation, a standard Langevin equation is recovered, and the effect of the molecular vibrations on the diffusion is seen to lead to an increase in the friction for center of mass motion. This effective friction has a simple form that depends on the curvature of the lowest energy diffusion path in the 3N-dimensional coordinate space. We also find that so long as the intramolecular forces are sufficiently strong, memory effects are usually not significant and the Markov approximation can be employed, resulting in a simple one-dimensional model that can account for the effect of the dynamics of the molecular vibrations on the diffusive motion.
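In the Markov limit described above, center-of-mass motion obeys a standard Langevin equation, and in the overdamped regime the Einstein relation D = kT/γ ties the diffusion coefficient to the effective friction, so a larger vibrational friction means slower diffusion. The following sketch (with invented kT and γ values) checks that relation against the mean-squared displacement of simulated walkers.

```python
import numpy as np

# Overdamped Langevin (free diffusion) check of the Einstein relation
#   D = kT / gamma:  increasing the effective friction gamma, as molecular
# vibrations do in the paper's reduction, lowers the diffusion coefficient.
# Values are illustrative.

rng = np.random.default_rng(2)
kT, gamma = 1.0, 2.0
D = kT / gamma                    # expected diffusion coefficient, 0.5
dt, nsteps, nwalkers = 0.01, 100, 5000

x = np.zeros(nwalkers)
for _ in range(nsteps):
    x += np.sqrt(2*D*dt) * rng.standard_normal(nwalkers)

T = nsteps * dt                   # total time, 1.0
D_est = x.var() / (2*T)           # 1-D MSD obeys <x^2>(T) = 2*D*T
```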
Can Household Benefit from Stochastic Programming Models?
DEFF Research Database (Denmark)
Rasmussen, Kourosh Marjani; Madsen, Claus A.; Poulsen, Rolf
2014-01-01
The Danish mortgage market is large and sophisticated. However, most Danish mortgage banks advise private home-owners based on simple, if sensible, rules of thumb. In recent years a number of papers (from Nielsen and Poulsen in J Econ Dyn Control 28:1267–1289, 2004 over Rasmussen and Zenios in J...... Risk 10:1–18, 2007 to Pedersen et al. in Ann Oper Res, 2013) have suggested a model-based, stochastic programming approach to mortgage choice. This paper gives an empirical comparison of performance over the period 2000–2010 of the rules of thumb to the model-based strategies. While the rules of thumb.......3–0.9 %-points (depending on the borrower’s level of conservatism) compared to the rules of thumb without increasing the risk. The answer to the question in the title is thus affirmative....
An adaptive stochastic model for financial markets
International Nuclear Information System (INIS)
Hernández, Juan Antonio; Benito, Rosa Marı´a; Losada, Juan Carlos
2012-01-01
An adaptive stochastic model is introduced to simulate the behavior of real asset markets. The model adapts itself by changing its parameters automatically on the basis of recent historical data. The basic idea underlying the model is that a random variable uniformly distributed within an interval with variable extremes can replicate the histograms of asset returns. These extremes are calculated according to the arrival of new market information. This adaptive model is applied to the daily returns of three well-known indices: Ibex35, Dow Jones and Nikkei, for three complete years. The model reproduces the histograms of the studied indices as well as their autocorrelation structures. It produces the same fat tails and the same power laws, with exactly the same exponents, as in the real indices. In addition, the model shows a great adaptation capability, anticipating the volatility evolution and showing the same volatility clusters observed in the assets. This approach provides a novel way to model asset markets with internal dynamics that change quickly over time, making it impossible to define a fixed model to fit the empirical observations.
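The core mechanism, a uniform draw between extremes that track recent history, is easy to sketch. The window length, seed history, and sample size below are arbitrary choices for illustration, not the calibration used in the paper.

```python
import numpy as np

# Sketch of the adaptive idea: each new return is drawn uniformly between the
# extremes of a rolling window of recent returns, so the interval widens after
# turbulent spells and shrinks in calm ones (volatility clustering).

rng = np.random.default_rng(3)
window, n = 50, 2000
returns = list(rng.normal(0.0, 0.01, window))   # arbitrary seed history

for _ in range(n):
    recent = returns[-window:]
    lo, hi = min(recent), max(recent)
    returns.append(rng.uniform(lo, hi))         # adaptive uniform draw

returns = np.asarray(returns)
```

Note that because every new draw lies inside the range of earlier values, the global extremes of the whole series are set by the seed history; real market data would keep supplying fresh extremes, which is where the "arrival of new market information" enters.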
Logarithmic transformed statistical models in calibration
International Nuclear Information System (INIS)
Zeis, C.D.
1975-01-01
A general type of statistical model used for calibration of instruments having the property that the standard deviations of the observed values increase as a function of the mean value is described. The application to the Helix Counter at the Rocky Flats Plant is primarily from a theoretical point of view. The Helix Counter measures the amount of plutonium in certain types of chemicals. The method described can be used also for other calibrations. (U.S.)
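The variance-stabilizing role of the logarithmic transformation can be shown on a made-up calibration curve. The multiplicative model y = c·x^b·exp(ε) and its parameter values below are invented for illustration; only the transformation idea comes from the abstract.

```python
import numpy as np

# When the standard deviation of the observed values grows in proportion to
# the mean, a log transformation stabilizes the variance and turns the
# multiplicative calibration model  y = c * x**b * exp(eps)  into ordinary
# linear regression of log y on log x.

rng = np.random.default_rng(4)
c_true, b_true = 2.0, 1.3
x = np.linspace(1.0, 50.0, 200)
y = c_true * x**b_true * np.exp(rng.normal(0.0, 0.05, x.size))  # mult. noise

A = np.column_stack([np.ones_like(x), np.log(x)])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
c_est, b_est = np.exp(coef[0]), coef[1]       # back-transformed parameters
```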
Stochastic modeling of consumer preferences for health care institutions.
Malhotra, N K
1983-01-01
This paper proposes a stochastic procedure for modeling consumer preferences via LOGIT analysis. First, a simple, non-technical exposition of the use of a stochastic approach in health care marketing is presented. Second, a study illustrating the application of the LOGIT model in assessing consumer preferences for hospitals is given. The paper concludes with several implications of the proposed approach.
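A minimal version of the LOGIT procedure can be written directly. The covariates (a standardized distance and a perceived-quality score) and their coefficients below are hypothetical, chosen only to show how a binary logit choice model is fitted by maximizing the log-likelihood.

```python
import numpy as np

# Binary LOGIT fit by gradient ascent on the mean log-likelihood:
#   P(choose hospital) = 1 / (1 + exp(-(b0 + b1*distance + b2*quality))).
# Synthetic data; features and coefficients are invented for illustration.

rng = np.random.default_rng(5)
n = 2000
X = np.column_stack([np.ones(n),
                     rng.normal(0, 1, n),     # standardized distance
                     rng.normal(0, 1, n)])    # standardized perceived quality
beta_true = np.array([0.2, -1.0, 1.5])        # farther -> less preferred, etc.
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = (rng.random(n) < p).astype(float)         # observed binary choices

beta = np.zeros(3)
for _ in range(5000):
    grad = X.T @ (y - 1.0/(1.0 + np.exp(-X @ beta))) / n
    beta += 0.5 * grad        # plain gradient ascent on the log-likelihood
```

The fitted coefficients recover both the sign and rough magnitude of the generating preferences, which is the kind of managerial readout (distance deters, quality attracts) the paper's hospital application produces.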
A stochastic modeling of recurrent measles epidemic | Kassem ...
African Journals Online (AJOL)
A simple stochastic mathematical model is developed and investigated for the dynamics of measles epidemic. The model, which is a multi-dimensional diffusion process, includes susceptible individuals, latent (exposed), infected and removed individuals. Stochastic effects are assumed to arise in the process of infection of ...
Review of "Stochastic Modelling for Systems Biology" by Darren Wilkinson
Directory of Open Access Journals (Sweden)
Bullinger Eric
2006-12-01
Full Text Available Abstract "Stochastic Modelling for Systems Biology" by Darren Wilkinson introduces the peculiarities of stochastic modelling in biology. This book is particularly well suited as a textbook or for self-study, especially for readers with a theoretical background.
Approximate models for broken clouds in stochastic radiative transfer theory
International Nuclear Information System (INIS)
Doicu, Adrian; Efremenko, Dmitry S.; Loyola, Diego; Trautmann, Thomas
2014-01-01
This paper presents approximate models in stochastic radiative transfer theory. The independent column approximation and its modified version with a solar source computed in a full three-dimensional atmosphere are formulated in a stochastic framework and for arbitrary cloud statistics. The nth-order stochastic models describing the independent column approximations are equivalent to the nth-order stochastic models for the original radiance fields in which the gradient vectors are neglected. Fast approximate models are further derived on the basis of zeroth-order stochastic models and the independent column approximation. The so-called “internal mixing” models assume a combination of the optical properties of the cloud and the clear sky, while the “external mixing” models assume a combination of the radiances corresponding to completely overcast and clear skies. A consistent treatment of internal and external mixing models is provided, and a new parameterization of the closure coefficient in the effective thickness approximation is given. An efficient computation of the closure coefficient for internal mixing models, using a previously derived vector stochastic model as a reference, is also presented. Equipped with appropriate look-up tables for the closure coefficient, these models can easily be integrated into operational trace gas retrieval systems that exploit absorption features in the near-IR solar spectrum. - Highlights: • Independent column approximation in a stochastic setting. • Fast internal and external mixing models for total and diffuse radiances. • Efficient optimization of internal mixing models to match reference models
Interplanetary Alfvenic fluctuations: A stochastic model
International Nuclear Information System (INIS)
Barnes, A.
1981-01-01
The strong alignment of the average directions of minimum magnetic variance and mean magnetic field in interplanetary Alfvenic fluctuations is inconsistent with the usual wave-propagation models. We investigate the concept of minimum variance for nonplanar Alfvenic fluctuations in which the field direction varies stochastically. It is found that the tendency of the minimum variance and mean field directions to be aligned may be purely a consequence of the randomness of the field direction. In particular, a well-defined direction of minimum variance does not imply that the fluctuations are necessarily planar. The fluctuation power spectrum is a power law for frequencies much higher than the inverse of the correlation time. The probability distribution of directions of a randomly fluctuating field of constant magnitude is calculated. A new approach for observational studies of interplanetary fluctuations is suggested
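Minimum-variance analysis itself is a small computation: the minimum-variance direction is the eigenvector of the fluctuation covariance matrix with the smallest eigenvalue. The synthetic field below (large transverse fluctuations, tiny fluctuation along z) is invented to show the mechanics, and illustrates the abstract's point that a well-defined minimum-variance direction emerges from purely random fluctuations.

```python
import numpy as np

# Minimum-variance analysis: the direction of minimum variance of a
# fluctuating field is the eigenvector of its covariance matrix belonging to
# the smallest eigenvalue.  Here the field fluctuates almost entirely in the
# x-y plane, so the minimum-variance direction should come out along z.

rng = np.random.default_rng(6)
n = 5000
B = np.column_stack([rng.normal(0, 1.0, n),     # large transverse fluctuations
                     rng.normal(0, 1.0, n),
                     rng.normal(0, 0.01, n)])   # tiny fluctuation along z

cov = np.cov(B, rowvar=False)                   # 3x3 covariance of the field
evals, evecs = np.linalg.eigh(cov)              # eigenvalues ascending
min_var_dir = evecs[:, 0]                       # direction of minimum variance
```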
Spatial Stochastic Point Models for Reservoir Characterization
Energy Technology Data Exchange (ETDEWEB)
Syversveen, Anne Randi
1997-12-31
The main part of this thesis discusses stochastic modelling of geology in petroleum reservoirs. A marked point model is defined for objects against a background in a two-dimensional vertical cross section of the reservoir. The model handles conditioning on observations from more than one well for each object and contains interaction between objects, and the objects have the correct length distribution when penetrated by wells. The model is developed in a Bayesian setting. The model and the simulation algorithm are demonstrated by means of an example with simulated data. The thesis also deals with object recognition in image analysis, in a Bayesian framework, and with a special type of spatial Cox processes called log-Gaussian Cox processes. In these processes, the logarithm of the intensity function is a Gaussian process. The class of log-Gaussian Cox processes provides flexible models for clustering. The distribution of such a process is completely characterized by the intensity and the pair correlation function of the Cox process. 170 refs., 37 figs., 5 tabs.
Calibration of semi-stochastic procedure for simulating high-frequency ground motions
Seyhan, Emel; Stewart, Jonathan P.; Graves, Robert
2013-01-01
Broadband ground motion simulation procedures typically utilize physics-based modeling at low frequencies, coupled with semi-stochastic procedures at high frequencies. The high-frequency procedure considered here combines deterministic Fourier amplitude spectra (dependent on source, path, and site models) with random phase. Previous work showed that high-frequency intensity measures from this simulation methodology attenuate faster with distance and have lower intra-event dispersion than in empirical equations. We address these issues by increasing crustal damping (Q) to reduce distance attenuation bias and by introducing random site-to-site variations to Fourier amplitudes using a lognormal standard deviation ranging from 0.45 for Mw 100 km).
12th Workshop on Stochastic Models, Statistics and Their Applications
Rafajłowicz, Ewaryst; Szajowski, Krzysztof
2015-01-01
This volume presents the latest advances and trends in stochastic models and related statistical procedures. Selected peer-reviewed contributions focus on statistical inference, quality control, change-point analysis and detection, empirical processes, time series analysis, survival analysis and reliability, statistics for stochastic processes, big data in technology and the sciences, statistical genetics, experiment design, and stochastic models in engineering. Stochastic models and related statistical procedures play an important part in furthering our understanding of the challenging problems currently arising in areas of application such as the natural sciences, information technology, engineering, image analysis, genetics, energy and finance, to name but a few. This collection arises from the 12th Workshop on Stochastic Models, Statistics and Their Applications, Wroclaw, Poland.
Stochastic modeling of sunshine number data
Energy Technology Data Exchange (ETDEWEB)
Brabec, Marek, E-mail: mbrabec@cs.cas.cz [Department of Nonlinear Modeling, Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod Vodarenskou vezi 2, 182 07 Prague 8 (Czech Republic); Paulescu, Marius [Physics Department, West University of Timisoara, V. Parvan 4, 300223 Timisoara (Romania); Badescu, Viorel [Candida Oancea Institute, Polytechnic University of Bucharest, Spl. Independentei 313, 060042 Bucharest (Romania)
2013-11-13
In this paper, we will present a unified statistical modeling framework for estimating and forecasting sunshine number (SSN) data. Sunshine number has been proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and since then, it has been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has been a challenging problem, however. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We will show how its transition probabilities can be efficiently estimated within a logistic regression framework. In fact, our logistic Markovian model can be relatively easily fitted via a maximum likelihood approach. This is optimal in many respects and it also enables us to use formalized statistical inference theory to obtain not only the point estimates of transition probabilities and their functions of interest, but also related uncertainties, as well as to test various hypotheses of practical interest, etc. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time, etc.) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using the generalized additive model (GAM) approach, we can fit and compare models of various complexity which insist on keeping the physical interpretation of the statistical model and its parts. After introducing the Markovian model and the general approach for identification of its parameters, we will illustrate its use and performance on high resolution SSN data from the Solar
Stochastic modeling of sunshine number data
Brabec, Marek; Paulescu, Marius; Badescu, Viorel
2013-11-01
In this paper, we will present a unified statistical modeling framework for estimating and forecasting sunshine number (SSN) data. Sunshine number has been proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and since then, it has been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has been a challenging problem, however. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We will show how its transition probabilities can be efficiently estimated within a logistic regression framework. In fact, our logistic Markovian model can be relatively easily fitted via a maximum likelihood approach. This is optimal in many respects and it also enables us to use formalized statistical inference theory to obtain not only the point estimates of transition probabilities and their functions of interest, but also related uncertainties, as well as to test various hypotheses of practical interest, etc. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time, etc.) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using the generalized additive model (GAM) approach, we can fit and compare models of various complexity which insist on keeping the physical interpretation of the statistical model and its parts. After introducing the Markovian model and the general approach for identification of its parameters, we will illustrate its use and performance on high resolution SSN data from the Solar
Stochastic modeling of sunshine number data
International Nuclear Information System (INIS)
Brabec, Marek; Paulescu, Marius; Badescu, Viorel
2013-01-01
In this paper, we will present a unified statistical modeling framework for estimating and forecasting sunshine number (SSN) data. Sunshine number has been proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and since then, it has been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has been a challenging problem, however. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We will show how its transition probabilities can be efficiently estimated within a logistic regression framework. In fact, our logistic Markovian model can be relatively easily fitted via a maximum likelihood approach. This is optimal in many respects and it also enables us to use formalized statistical inference theory to obtain not only the point estimates of transition probabilities and their functions of interest, but also related uncertainties, as well as to test various hypotheses of practical interest, etc. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time, etc.) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using the generalized additive model (GAM) approach, we can fit and compare models of various complexity which insist on keeping the physical interpretation of the statistical model and its parts. After introducing the Markovian model and the general approach for identification of its parameters, we will illustrate its use and performance on high resolution SSN data from the Solar
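The link between the Markov-chain formulation and logistic regression is concrete: for a binary series, P(X_t = 1 | X_{t−1}) is a logistic regression with the lagged state as the only covariate, and its maximum likelihood estimates reduce to the empirical transition frequencies. The transition probabilities below are invented for illustration, not estimated SSN values.

```python
import numpy as np

# First-order binary Markov chain and the MLE of its transition probabilities.
# In the saturated (no-covariate) case the logistic-regression MLE equals the
# empirical conditional frequencies computed here.

rng = np.random.default_rng(7)
p01, p10 = 0.3, 0.2           # P(X_t=1 | X_{t-1}=0) and P(X_t=0 | X_{t-1}=1)
n = 20000

x = np.empty(n, dtype=int)
x[0] = 0
for t in range(1, n):
    p_one = p01 if x[t-1] == 0 else 1.0 - p10
    x[t] = int(rng.random() < p_one)

prev, curr = x[:-1], x[1:]
p01_hat = curr[prev == 0].mean()          # MLE of P(1|0)
p10_hat = 1.0 - curr[prev == 1].mean()    # MLE of P(0|1)
```

Adding covariates such as elevation angle to the logistic link is then a one-line change, which is exactly what makes the GAM extension in the abstract natural.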
Gompertzian stochastic model with delay effect to cervical cancer growth
International Nuclear Information System (INIS)
Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti; Bahar, Arifah
2015-01-01
In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via the Levenberg-Marquardt optimization method of nonlinear least squares. We apply the Milstein scheme for solving the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated result and the clinical data of cervical cancer growth. Low values of the Mean-Square Error (MSE) of the Gompertzian stochastic model with delay effect indicate good fits
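The Milstein discretization the paper relies on is easy to exhibit for a delay-free Gompertzian SDE, dX = rX ln(K/X) dt + σX dW; for diffusion g(X) = σX the Milstein correction is ½σ²X(ΔW² − Δt). The delay term of the paper's model is omitted here and all parameter values are illustrative.

```python
import numpy as np

# Milstein scheme for the (delay-free) Gompertzian SDE
#   dX = r*X*ln(K/X) dt + sigma*X dW.
# With sigma = 0 it reduces to the deterministic Gompertz law, which
# converges to the carrying capacity K.

rng = np.random.default_rng(8)
r, K, sigma = 1.0, 10.0, 0.2
x0, dt, nsteps = 1.0, 0.01, 2000

def milstein(sigma_):
    x = x0
    for _ in range(nsteps):
        dw = np.sqrt(dt) * rng.standard_normal()
        x += (r * x * np.log(K / x) * dt          # Gompertz drift
              + sigma_ * x * dw                   # diffusion
              + 0.5 * sigma_**2 * x * (dw*dw - dt))  # Milstein correction
    return x

x_det = milstein(0.0)     # deterministic limit: approaches K
x_sto = milstein(sigma)   # one stochastic path; stays positive
```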
Gompertzian stochastic model with delay effect to cervical cancer growth
Energy Technology Data Exchange (ETDEWEB)
Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti [Faculty of Industrial Sciences and Technology, Universiti Malaysia Pahang, Lebuhraya Tun Razak, 26300 Gambang, Pahang (Malaysia); Bahar, Arifah [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor and UTM Centre for Industrial and Applied Mathematics (UTM-CIAM), Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor (Malaysia)
2015-02-03
In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via the Levenberg-Marquardt optimization method of nonlinear least squares. We apply the Milstein scheme for solving the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated result and the clinical data of cervical cancer growth. Low values of the Mean-Square Error (MSE) of the Gompertzian stochastic model with delay effect indicate good fits.
Stochastic growth logistic model with aftereffect for batch fermentation process
Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah; Rahman, Haliza Abdul; Salleh, Madihah Md
2014-06-01
In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262 and the Luedeking-Piret equations for solvent production in a batch fermentation system are introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of nonlinear least squares. We apply the Milstein scheme for solving the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results and the experimental data of the microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) of the stochastic models with aftereffect indicate good fits.
Stochastic growth logistic model with aftereffect for batch fermentation process
International Nuclear Information System (INIS)
Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah; Rahman, Haliza Abdul; Salleh, Madihah Md
2014-01-01
In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262 and the Luedeking-Piret equations for solvent production in a batch fermentation system are introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of nonlinear least squares. We apply the Milstein scheme for solving the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results and the experimental data of the microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) of the stochastic models with aftereffect indicate good fits
Stochastic growth logistic model with aftereffect for batch fermentation process
Energy Technology Data Exchange (ETDEWEB)
Rosli, Norhayati; Ayoubi, Tawfiqullah [Faculty of Industrial Sciences and Technology, Universiti Malaysia Pahang, Lebuhraya Tun Razak, 26300 Gambang, Pahang (Malaysia); Bahar, Arifah; Rahman, Haliza Abdul [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor (Malaysia); Salleh, Madihah Md [Department of Biotechnology Industry, Faculty of Biosciences and Bioengineering, Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor (Malaysia)
2014-06-19
In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262 and the Luedeking-Piret equations for solvent production in a batch fermentation system are introduced. The parameter values of the mathematical models are estimated via the Levenberg-Marquardt optimization method of nonlinear least squares. We apply the Milstein scheme for solving the stochastic models numerically. The efficiency of the mathematical models is measured by comparing the simulated results and the experimental data of the microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) of the stochastic models with aftereffect indicate good fits.
A stochastic model for early placental development.
Cotter, Simon L
2014-08-01
In the human, placental structure is closely related to placental function and consequent pregnancy outcome. Studies have noted abnormal placental shape in small-for-gestational-age infants which extends to increased lifetime risk of cardiovascular disease. The origins and determinants of placental shape are incompletely understood and are difficult to study in vivo. In this paper, we model the early development of the human placenta, based on the hypothesis that this is driven by a chemoattractant effect emanating from proximal spiral arteries in the decidua. We derive and explore a two-dimensional stochastic model, and investigate the effects of loss of spiral arteries in regions near to the cord insertion on the shape of the placenta. This model demonstrates that disruption of spiral arteries can exert profound effects on placental shape, particularly if this is close to the cord insertion. Thus, placental shape reflects the underlying maternal vascular bed. Abnormal placental shape may reflect an abnormal uterine environment, predisposing to pregnancy complications. Through statistical analysis of model placentas, we are able to characterize the probability that a given placenta grew in a disrupted environment, and even able to distinguish between different disruptions.
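The chemoattractant hypothesis can be caricatured as a biased random walk: growth steps drift up the gradient toward spiral-artery sources, plus noise. The single source, step sizes, and walker counts below are invented to show only the biased-walk mechanics, not the paper's two-dimensional placental model.

```python
import numpy as np

# Caricature of chemoattractant-driven growth: 2-D walkers drift a fixed
# distance per step toward a "spiral artery" source, with additive noise.
# Removing sources near the start point would redirect growth in this
# framework; here one source suffices to show the mechanism.

rng = np.random.default_rng(9)
source = np.array([5.0, 0.0])          # hypothetical spiral-artery location
nwalkers, nsteps = 500, 200
bias, noise = 0.1, 0.05                # drift per step, noise amplitude

pos = np.zeros((nwalkers, 2))          # all walkers start at the cord insertion
for _ in range(nsteps):
    to_src = source - pos
    dist = np.linalg.norm(to_src, axis=1, keepdims=True)
    drift = bias * to_src / np.maximum(dist, 1e-9)   # unit step up-gradient
    pos += drift + noise * rng.standard_normal((nwalkers, 2))

final_dist = np.linalg.norm(pos - source, axis=1).mean()   # well under 5.0
```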
Stochastic model of the spinning electron
International Nuclear Information System (INIS)
Simaciu, I.; Borsos, Z.
2002-01-01
In Stochastic Electrodynamics (SED) it is demonstrated that electrostatic interaction is the result of the scattering of the Classical Zero-Point Field (CZPF) background by the charged particles. In such models, the electron is modelled as a two-dimensional oscillator which interacts with the electric component of the CZPF background. The electron with spin is not only an electric monopole but also a magnetic dipole, so the interaction of the spinning electron with the CZPF background is not only electric but also magnetic. We calculate the scattering cross-section of a magnetic dipole in the situation when a magnetic field varying in time as B = B_0 sin(ωt) acts on the rigid magnetic dipole given by the symmetry of the model. The cross-section of a magnetic dipole, σ_m, must be equal to the cross-section of an electric monopole, σ_e. This equality between the σ_m and σ_e cross-sections is also motivated by the fact that, in the model of the two-dimensional oscillator, the electric charge q_e moves at speed c. (authors)
Stochastic modelling of two-phase flows including phase change
International Nuclear Information System (INIS)
Hurisse, O.; Minier, J.P.
2011-01-01
Stochastic modelling has already been developed and applied for single-phase flows and incompressible two-phase flows. In this article, we propose an extension of this modelling approach to two-phase flows including phase change (e.g. for steam-water flows). Two aspects are emphasised: a stochastic model accounting for phase transition and a modelling constraint which arises from volume conservation. To illustrate the whole approach, some remarks are eventually proposed for two-fluid models. (authors)
Survival Analysis of a Nonautonomous Logistic Model with Stochastic Perturbation
Directory of Open Access Journals (Sweden)
Chun Lu
2012-01-01
Full Text Available Taking white noise into account, a stochastic nonautonomous logistic model is proposed and investigated. Sufficient conditions for extinction, nonpersistence in the mean, weak persistence, stochastic permanence, and global asymptotic stability are established. Moreover, the threshold between weak persistence and extinction is obtained. Finally, we present some numerical simulations to illustrate our main results.
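The persistence/extinction threshold for a stochastic logistic model can be seen numerically. For the autonomous special case dX = X(r − X) dt + σX dW the threshold is r − σ²/2; simulating log X (so the state stays positive by construction) in the two regimes shows extinction on one side and fluctuation around a positive level on the other. All parameter values are illustrative.

```python
import numpy as np

# Euler simulation of log-population for the stochastic logistic SDE
#   dX = X*(r - X) dt + sigma*X dW,
# with a.s. extinction when r - sigma**2/2 < 0 and persistence when > 0.

rng = np.random.default_rng(10)
dt, nsteps = 0.01, 20000      # total time T = 200

def simulate(r, sigma):
    logx = 0.0                # X(0) = 1
    path = np.empty(nsteps)
    for i in range(nsteps):
        x = np.exp(logx)
        # Ito drift of log X is r - sigma^2/2 - X
        logx += (r - 0.5*sigma**2 - x)*dt + sigma*np.sqrt(dt)*rng.standard_normal()
        path[i] = np.exp(logx)
    return path

extinct = simulate(r=0.1, sigma=1.0)   # r - sigma^2/2 = -0.40 -> dies out
persist = simulate(r=0.5, sigma=0.2)   # r - sigma^2/2 = +0.48 -> persists
```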
On the small-time behavior of stochastic logistic models
Directory of Open Access Journals (Sweden)
Dung Tien Nguyen
2017-09-01
In this paper we investigate the small-time behaviors of the solution to a stochastic logistic model. The obtained results allow us to estimate the number of individuals in the population and can be used to study stochastic prey-predator systems.
Analysis and reconstruction of stochastic coupled map lattice models
International Nuclear Information System (INIS)
Coca, Daniel; Billings, Stephen A.
2003-01-01
This Letter introduces a general stochastic coupled map lattice (CML) model, together with an algorithm to estimate the nodal equations involved, based only on a small set of observable variables and in the presence of stochastic perturbations. More general forms of the Frobenius-Perron and transfer operators, which describe the evolution of densities under the action of the CML transformation, are derived.
A probabilistic graphical model based stochastic input model construction
International Nuclear Information System (INIS)
Wan, Jiang; Zabaras, Nicholas
2014-01-01
Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed into a Bayesian network structure learning problem. • Examples are given for flows in random media.
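The conditional-independence step of such a structure-learning approach can be sketched with a Gaussian partial-correlation test. The chain structure, coefficients, and sample size below are invented for illustration; a real implementation would run such tests over all pairs of reduced random variables to decide which edges of the Bayesian network to keep.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z: the Gaussian CI test statistic."""
    zc = np.column_stack([z, np.ones(len(z))])
    rx = x - zc @ np.linalg.lstsq(zc, x, rcond=None)[0]
    ry = y - zc @ np.linalg.lstsq(zc, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical chain x1 -> x2 -> x3: x1 and x3 are marginally dependent,
# but conditionally independent given x2.
rng = np.random.default_rng(1)
n = 5000
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + 0.3 * rng.standard_normal(n)
x3 = 0.8 * x2 + 0.3 * rng.standard_normal(n)

marginal = float(np.corrcoef(x1, x3)[0, 1])  # strongly nonzero
conditional = partial_corr(x1, x3, x2)       # near zero: drop the edge x1-x3
```

A structure learner would threshold such statistics (or a Fisher-z p-value) to build the dependence graph before factorizing the joint PDF.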
Stochastic modeling of virus capsid assembly pathways
Schwartz, Russell
2009-03-01
Virus capsids have become a key model system for understanding self-assembly due to their high complexity, robust and efficient assembly processes, and experimental tractability. Our ability to directly examine and manipulate capsid assembly kinetics in detail nonetheless remains limited, creating a need for computer models that can infer experimentally inaccessible features of the assembly process and explore the effects of hypothetical manipulations on assembly trajectories. We have developed novel algorithms for stochastic simulation of capsid assembly [1,2] that allow us to model capsid assembly over broad parameter spaces [3]. We apply these methods to study the nature of assembly pathway control in virus capsids as well as their sensitivity to assembly conditions and possible experimental interventions.
[1] F. Jamalyaria, R. Rohlfs, and R. Schwartz. J Comp Phys 204, 100 (2005).
[2] N. Misra and R. Schwartz. J Chem Phys 129, in press (2008).
[3] B. Sweeney, T. Zhang, and R. Schwartz. Biophys J 94, 772 (2008).
Systematic parameter inference in stochastic mesoscopic modeling
Energy Technology Data Exchange (ETDEWEB)
Lei, Huan; Yang, Xiu [Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Li, Zhen [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States); Karniadakis, George Em, E-mail: george_karniadakis@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples, especially for systems with high-dimensional parameter spaces. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
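Recovering the dominant gPC coefficients from few samples under a sparsity prior can be sketched with a greedy sparse solver. Orthogonal matching pursuit is used here as a simple stand-in for the paper's compressive-sensing method, and the Hermite basis, sample count, and sparse "target property" are all invented for illustration.

```python
import numpy as np
from numpy.polynomial import hermite_e as H  # probabilists' Hermite basis

def omp(A, y, n_terms):
    """Orthogonal matching pursuit: greedily solve A @ c ≈ y with sparse c."""
    c = np.zeros(A.shape[1])
    support, residual = [], y.copy()
    norms = np.linalg.norm(A, axis=0)
    for _ in range(n_terms):
        scores = np.abs(A.T @ residual) / norms
        scores[support] = 0.0                 # do not reselect a column
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    c[support] = coef
    return c

rng = np.random.default_rng(2)
order, n_samples = 6, 200
xi = rng.standard_normal(n_samples)  # standard Gaussian germ
# Sparse "target property": only the He_1 and He_3 modes carry signal.
y = 2.0 * H.hermeval(xi, [0, 1]) + 0.5 * H.hermeval(xi, [0, 0, 0, 1])
A = np.column_stack([H.hermeval(xi, np.eye(order + 1)[k]) for k in range(order + 1)])
c_hat = omp(A, y, n_terms=2)
```

With a sparse truth and enough samples, the greedy solver picks out exactly the two active basis terms, mimicking how compressive sensing reduces the number of simulations needed.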
Calibration of CORSIM models under saturated traffic flow conditions.
2013-09-01
This study proposes a methodology to calibrate microscopic traffic flow simulation models. The proposed methodology has the capability to calibrate simultaneously all the calibration parameters as well as demand patterns for any network topology....
A hierarchical stochastic model for bistable perception.
Directory of Open Access Journals (Sweden)
Stefan Albert
2017-11-01
Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets, which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group
A hierarchical stochastic model for bistable perception.
Albert, Stefan; Schmack, Katharina; Sterzer, Philipp; Schneider, Gaby
2017-11-01
Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group differences to
Stochastic modeling of wetland-groundwater systems
Bertassello, Leonardo Enrico; Rao, P. Suresh C.; Park, Jeryang; Jawitz, James W.; Botter, Gianluca
2018-02-01
Modeling and data analyses were used in this study to examine the temporal hydrological variability in geographically isolated wetlands (GIWs), as influenced by hydrologic connectivity to shallow groundwater and wetland bathymetry, and subject to stochastic hydro-climatic forcing. We examined the general case of GIWs coupled to shallow groundwater through exfiltration or infiltration across the wetland bottom. We also examined the limiting case in which the wetland stage is the local expression of the shallow groundwater. We derive analytical expressions for the steady-state probability density functions (pdfs) of wetland water storage and stage using a few scaled, physically-based parameters. In addition, we analyze the hydrologic crossing time properties of wetland stage, and the dependence of the mean hydroperiod on climatic and wetland morphologic attributes. Our analyses show that it is crucial to account for shallow groundwater connectivity to fully understand the hydrologic dynamics in wetlands. The application of the model to two different case studies in Florida, jointly with a detailed sensitivity analysis, allowed us to identify the main drivers of hydrologic dynamics in GIWs under different climate and morphologic conditions.
Reversibility in Quantum Models of Stochastic Processes
Gier, David; Crutchfield, James; Mahoney, John; James, Ryan
Natural phenomena such as time series of neural firing, orientation of layers in crystal stacking and successive measurements in spin-systems are inherently probabilistic. The provably minimal classical models of such stochastic processes are ɛ-machines, which consist of internal states, transition probabilities between states and output values. The topological properties of the ɛ-machine for a given process characterize the structure, memory and patterns of that process. However ɛ-machines are often not ideal because their statistical complexity (Cμ) is demonstrably greater than the excess entropy (E) of the processes they represent. Quantum models (q-machines) of the same processes can do better in that their statistical complexity (Cq) obeys the relation Cμ ≥ Cq ≥ E. q-machines can be constructed to consider longer lengths of strings, resulting in greater compression. With code-words of sufficiently long length, the statistical complexity becomes time-symmetric - a feature apparently novel to this quantum representation. This result has ramifications for compression of classical information in quantum computing and quantum communication technology.
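For a concrete classical reference point: the statistical complexity Cμ of an ɛ-machine is the Shannon entropy of its stationary distribution over causal states. The sketch below computes Cμ for the well-known Golden Mean process, a standard textbook ɛ-machine used here purely as an example (it is not one of the processes studied above, and the quantum Cq construction is not shown).

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic matrix (left eigenvector for eigenvalue 1)."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def entropy_bits(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# ɛ-machine of the Golden Mean process ("no two consecutive 0s"):
# from state A emit 1 (stay in A) or 0 (go to B), each with prob 1/2;
# from state B the next symbol must be 1, returning to A.
P = np.array([[0.5, 0.5],
              [1.0, 0.0]])
pi = stationary(P)         # [2/3, 1/3]
C_mu = entropy_bits(pi)    # statistical complexity ≈ 0.918 bits
```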
Stochastic Modeling of Wind Derivatives in Energy Markets
Directory of Open Access Journals (Sweden)
Fred Espen Benth
2018-05-01
We model the logarithm of the spot price of electricity with a normal inverse Gaussian (NIG) process, and the wind speed and wind power production with two Ornstein–Uhlenbeck processes. In order to reproduce the correlation between the spot price and the wind power production, namely between a pure jump process and a continuous path process, we replace the small jumps of the NIG process by a Brownian term. We then apply our models to two different problems: first, to study from the stochastic point of view the income from a wind power plant, as the expected value of the product between the electricity spot price and the amount of energy produced; then, to construct and price a European put-type quanto option in the wind energy markets that allows the buyer to hedge against low prices and low wind power production in the plant. Calibration of the proposed models and related price formulas is also provided, according to specific datasets.
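An Ornstein–Uhlenbeck factor of the kind used here for wind speed can be simulated exactly on a time grid via its Gaussian transition density. The parameter values below are hypothetical, not calibrated to the paper's datasets.

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, T, dt, seed=3):
    """Exact discretization of dX = theta*(mu - X)*dt + sigma*dW on a uniform grid."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    a = np.exp(-theta * dt)                              # one-step autoregression factor
    sd = sigma * np.sqrt((1.0 - a * a) / (2.0 * theta))  # exact transition st. dev.
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = mu + a * (x[i] - mu) + sd * rng.standard_normal()
    return x

# Hypothetical mean-reverting wind-speed factor, hourly steps over one year.
wind = simulate_ou(theta=0.5, mu=8.0, sigma=2.0, x0=8.0, T=365.0, dt=1.0 / 24)
```

The stationary standard deviation is sigma/sqrt(2·theta) = 2.0 here, so simulated values hover around the long-run mean of 8.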
Uncertainty modelling of atmospheric dispersion by stochastic response surface method under aleatory ...
Indian Academy of Sciences (India)
Stochastic models for predicting environmental impact in aquatic ecosystems
International Nuclear Information System (INIS)
Stewart-Oaten, A.
1986-01-01
The purposes of stochastic predictions are discussed in relation to the environmental impacts of nuclear power plants on aquatic ecosystems. One purpose is to aid in making rational decisions about whether a power plant should be built, where, and how it should be designed. The other is to check the models themselves in light of what eventually happens. The author discusses the role of statistical decision theory in the decision-making problem. Various types of stochastic models and their problems are presented. In addition, some suggestions are made for generating usable stochastic models, and for checking and improving them. 12 references
Stochastic Watershed Models for Risk Based Decision Making
Vogel, R. M.
2017-12-01
Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology, providing stochastic streamflow models (SSMs) that could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters and model errors, to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
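The SWM idea, a deterministic watershed model driven by stochastic meteorology to produce an ensemble of streamflow traces, can be sketched as follows. The linear-reservoir model and the rainfall statistics are toy assumptions, not any particular operational model.

```python
import numpy as np

def watershed(rain, k=0.2, s0=10.0):
    """Deterministic toy watershed model: linear reservoir with discharge Q = k*S."""
    q = np.empty(len(rain))
    s = s0
    for t, p in enumerate(rain):
        s += p          # daily rainfall enters storage
        q[t] = k * s    # discharge proportional to storage
        s -= q[t]
    return q

def swm_ensemble(n_traces=100, n_days=365, seed=4):
    """SWM sketch: one deterministic model, many stochastic meteorological series."""
    rng = np.random.default_rng(seed)
    traces = []
    for _ in range(n_traces):
        wet = rng.random(n_days) < 0.3               # wet-day occurrence
        rain = wet * rng.exponential(8.0, n_days)    # wet-day rainfall depth (mm)
        traces.append(watershed(rain))
    return np.array(traces)

Q = swm_ensemble()
q90 = np.quantile(Q, 0.9, axis=0)  # ensemble 90th-percentile daily flow
```

Quantiles across the ensemble, like q90 here, are the kind of risk-relevant summaries that feed decision-making frameworks.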
Model selection for integrated pest management with stochasticity.
Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel
2018-04-07
In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determine conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model.
Modeling stochasticity and robustness in gene regulatory networks.
Garg, Abhishek; Mohanram, Kartik; Di Cara, Alessandro; De Micheli, Giovanni; Xenarios, Ioannis
2009-06-15
Understanding gene regulation in biological processes and modeling the robustness of underlying regulatory networks is an important problem that is currently being addressed by computational systems biologists. Lately, there has been a renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity in nodes (SIN) model leads to over-representation of noise in GRNs and hence non-correspondence with biological observations. In this article, we introduce the stochasticity in functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation behind the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. Algorithms are made available under our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
On changes of measure in stochastic volatility models
Directory of Open Access Journals (Sweden)
Bernard Wong
2006-01-01
models. This had led many researchers to “assume the condition away,” even though the condition is not innocuous, and nonsensical results can occur if it is in fact not satisfied. We provide an applicable theorem to check the conditions for a general class of Markovian stochastic volatility models. As an example we will also provide a detailed analysis of the Stein and Stein and Heston stochastic volatility models.
Fitting PAC spectra with stochastic models: PolyPacFit
Energy Technology Data Exchange (ETDEWEB)
Zacate, M. O., E-mail: zacatem1@nku.edu [Northern Kentucky University, Department of Physics and Geology (United States); Evenson, W. E. [Utah Valley University, College of Science and Health (United States); Newhouse, R.; Collins, G. S. [Washington State University, Department of Physics and Astronomy (United States)
2010-04-15
PolyPacFit is an advanced fitting program for time-differential perturbed angular correlation (PAC) spectroscopy. It incorporates stochastic models and provides robust options for customization of fits. Notable features of the program include platform independence and support for (1) fits to stochastic models of hyperfine interactions, (2) user-defined constraints among model parameters, (3) fits to multiple spectra simultaneously, and (4) nuclear probes of any spin.
Index Option Pricing Models with Stochastic Volatility and Stochastic Interest Rates
Jiang, G.J.; van der Sluis, P.J.
2000-01-01
This paper specifies a multivariate stochastic volatility (SV) model for the S&P500 index and spot interest rate processes. We first estimate the multivariate SV model via the efficient method of moments (EMM) technique based on observations of underlying state variables, and then investigate the
Markov Chain Models for the Stochastic Modeling of Pitting Corrosion
Directory of Open Access Journals (Sweden)
A. Valor
2013-01-01
The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure-birth) Markov process is used to model external pitting corrosion in underground pipelines. A closed-form solution of the system of Kolmogorov's forward equations is used to describe the transition probability function in a discrete pit depth space. The transition probability function is identified by correlating the stochastic pit depth mean with the empirical deterministic mean. In the second model, the distribution of maximum pit depths in a pitting experiment is successfully modeled as the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which the induction time is simulated as the realization of a Weibull process. Pit growth is simulated using a nonhomogeneous Markov process. An analytical solution of Kolmogorov's system of equations is also found for the transition probabilities from the first Markov state. Extreme value statistics is employed to find the distribution of maximum pit depths.
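Two ingredients of the second model, stochastic pit deepening and extreme-value statistics of the deepest pit, can be illustrated with a pure-birth Markov chain whose transition rate decays with depth. The rates, exposure time, and pit counts below are invented for illustration and are not the paper's calibrated values.

```python
import numpy as np

def pit_depth(t_total, rate0=2.0, decay=0.5, rng=None):
    """Pure-birth Markov chain: a pit at depth k deepens at rate rate0*exp(-decay*k)."""
    if rng is None:
        rng = np.random.default_rng()
    t, k = 0.0, 0
    while True:
        t += rng.exponential(np.exp(decay * k) / rate0)  # waiting time, mean = 1/rate
        if t > t_total:
            return k
        k += 1

rng = np.random.default_rng(5)
n_experiments, n_pits = 200, 50
max_depths = np.array([
    max(pit_depth(10.0, rng=rng) for _ in range(n_pits))
    for _ in range(n_experiments)
])
mean_max = float(max_depths.mean())  # mean of the deepest pit per experiment
```

Histogramming max_depths approximates the maximum-pit-depth distribution that the paper characterizes with extreme value statistics.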
Some Remarks on Stochastic Versions of the Ramsey Growth Model
Czech Academy of Sciences Publication Activity Database
Sladký, Karel
2012-01-01
Vol. 19, No. 29 (2012), pp. 139-152. ISSN 1212-074X. R&D Projects: GA ČR GAP402/10/1610; GA ČR GAP402/10/0956; GA ČR GAP402/11/0150. Institutional support: RVO:67985556. Keywords: economic dynamics; Ramsey growth model with disturbance; stochastic dynamic programming; multistage stochastic programs. Subject RIV: BB - Applied Statistics, Operational Research. http://library.utia.cas.cz/separaty/2013/E/sladky-some remarks on stochastic versions of the ramsey growth model.pdf
Stochastic Modeling and Analysis of Power System with Renewable Generation
DEFF Research Database (Denmark)
Chen, Peiyuan
Unlike traditional fossil-fuel based power generation, renewable generation such as wind power relies on uncontrollable prime sources such as wind speed. Wind speed varies stochastically, which to a large extent determines the stochastic behavior of power generation from wind farms ... that such a stochastic model can be used to simulate the effect of load management on the load duration curve. As CHP units are turned on and off by regulating power, CHP generation has discrete output and thus can be modeled by a transition-matrix-based discrete Markov chain. As the CHP generation has a strong diurnal ...
Calibration models for high enthalpy calorimetric probes.
Kannel, A
1978-07-01
The accuracy of gas-aspirated liquid-cooled calorimetric probes used for measuring the enthalpy of high-temperature gas streams is studied. The error in the differential temperature measurements caused by internal and external heat transfer interactions is considered and quantified by mathematical models. The analysis suggests calibration methods for the evaluation of dimensionless heat transfer parameters in the models, which then can give a more accurate value for the enthalpy of the sample. Calibration models for four types of calorimeters are applied to results from the literature and from our own experiments: a circular slit calorimeter developed by the author, single-cooling jacket probe, double-cooling jacket probe, and split-flow cooling jacket probe. The results show that the models are useful for describing and correcting the temperature measurements.
Stochastic models for predicting pitting corrosion damage of HLRW containers
International Nuclear Information System (INIS)
Henshall, G.A.
1991-10-01
Stochastic models for predicting aqueous pitting corrosion damage of high-level radioactive-waste containers are described. These models could be used to predict the time required for the first pit to penetrate a container and the increase in the number of breaches at later times, both of which would be useful in the repository system performance analysis. Monte Carlo implementations of the stochastic models are described, and predictions of induction time, survival probability and pit depth distributions are presented. These results suggest that the pit nucleation probability decreases with exposure time and that pit growth may be a stochastic process. The advantages and disadvantages of the stochastic approach, methods for modeling the effects of environment, and plans for future work are discussed.
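A Monte Carlo sketch of induction time and survival probability: each pit receives a random induction time (Weibull, as is common in pitting models) and a random growth rate, and a container is breached when the fastest pit penetrates the wall. All distributions and parameter values here are hypothetical, not the paper's.

```python
import numpy as np

def breach_time(wall, n_pits, rng):
    """One container: pits get Weibull induction times and lognormal growth rates;
    the wall is breached when the fastest pit reaches depth `wall`."""
    induction = 5.0 * rng.weibull(2.0, n_pits)               # pit initiation times
    rate = rng.lognormal(mean=0.0, sigma=0.5, size=n_pits)   # depth growth rates
    return float(np.min(induction + wall / rate))

rng = np.random.default_rng(6)
times = np.array([breach_time(wall=5.0, n_pits=50, rng=rng) for _ in range(1000)])

def survival(t):
    """Probability that a container is still unbreached at time t."""
    return float(np.mean(times > t))
```

The empirical survival curve built from `times` is the Monte Carlo analogue of the survival probabilities the report presents.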
Markov Chain Models for the Stochastic Modeling of Pitting Corrosion
Valor, A.; Caleyo, F.; Alfonso, L.; Velázquez, J. C.; Hallen, J. M.
2013-01-01
The stochastic nature of pitting corrosion of metallic structures has been widely recognized. It is assumed that this kind of deterioration retains no memory of the past, so only the current state of the damage influences its future development. This characteristic allows pitting corrosion to be categorized as a Markov process. In this paper, two different models of pitting corrosion, developed using Markov chains, are presented. Firstly, a continuous-time, nonhomogeneous linear growth (pure ...
Calibration process of highly parameterized semi-distributed hydrological model
Vidmar, Andrej; Brilly, Mitja
2017-04-01
Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, modelling them is a complex process that is not yet researched enough. Calibration is the procedure of determining those model parameters that are not known well enough. The input and output variables and the mathematical model expressions are known, while only some parameters are unknown; these are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that give the modeller no possibility to manage the process, and the results are often not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST, a parameter estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by an expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure were left to the selected optimization algorithm alone. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial, and forest areas; this step requires the geological, meteorological, hydraulic, and hydrological knowledge of the modeller. The second step is to set the initial parameter values to their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observations group
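The calibration loop itself, adjusting unknown parameters so that modelled discharge matches observations, can be sketched without PEST or HBV, using a toy nonlinear bucket model and a coarse expert-style grid over plausible parameter ranges. All names, parameter values, and the synthetic data below are invented for illustration.

```python
import numpy as np

def bucket(rain, k, alpha, s0=1.0):
    """Toy conceptual model: nonlinear reservoir with discharge Q = k*S**alpha."""
    q = np.empty(len(rain))
    s = s0
    for t, p in enumerate(rain):
        s += p
        q[t] = min(k * s ** alpha, s)  # cannot release more than is stored
        s -= q[t]
    return q

# Synthetic "observed" discharge from known parameters plus measurement noise.
rng = np.random.default_rng(7)
rain = rng.exponential(3.0, 300) * (rng.random(300) < 0.4)
q_obs = bucket(rain, k=0.3, alpha=0.8) + rng.normal(0.0, 0.02, 300)

# Expert-style calibration: search a coarse grid over plausible parameter ranges,
# minimizing the sum of squared errors between modelled and observed discharge.
best = min(((k, a) for k in np.arange(0.05, 0.95, 0.05)
                   for a in np.arange(0.5, 1.25, 0.1)),
           key=lambda p: float(np.sum((bucket(rain, *p) - q_obs) ** 2)))
k_hat, alpha_hat = best
```

Tools like PEST automate this inversion with gradient-based methods, but the objective, observations versus model output under trial parameters, is the same.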
Stochastic Modelling of Shiroro River Stream flow Process
Musa, J. J
2013-01-01
Economists, social scientists and engineers provide insights into the drivers of anthropogenic climate change and the options for adaptation and mitigation, and yet other scientists, including geographers and biologists, study the impacts of climate change. This project concentrates mainly on the discharge from the Shiroro River. A stochastic approach is presented for modeling a time series by an Autoregressive Moving Average model (ARMA). The development and use of a stochastic stream flow m...
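Fitting a low-order member of the ARMA family to a streamflow series can be sketched with an AR(1) moment fit; synthetic data with known parameters stands in for the Shiroro River record, and all values are invented.

```python
import numpy as np

def fit_ar1(x):
    """Moment estimates for x_t - mu = phi*(x_{t-1} - mu) + eps_t."""
    mu = float(x.mean())
    d = x - mu
    phi = float(d[1:] @ d[:-1] / (d[:-1] @ d[:-1]))      # lag-1 autocorrelation
    sigma_e = float(np.std(d[1:] - phi * d[:-1]))        # innovation st. dev.
    return mu, phi, sigma_e

# Synthetic "monthly flow" series with known parameters (stand-in for real data).
rng = np.random.default_rng(8)
n, mu_true, phi_true, sigma_true = 600, 100.0, 0.6, 10.0
x = np.empty(n)
x[0] = mu_true
for t in range(1, n):
    x[t] = mu_true + phi_true * (x[t - 1] - mu_true) + rng.normal(0.0, sigma_true)

mu_hat, phi_hat, sigma_hat = fit_ar1(x)
```

Once fitted, the recursion can be run forward with fresh noise to generate synthetic flow traces, the classic use of ARMA-type models in synthetic hydrology.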
Stochastic higher spin six vertex model and Macdonald measures
Borodin, Alexei
2018-02-01
We prove an identity that relates the q-Laplace transform of the height function of a (higher spin inhomogeneous) stochastic six vertex model in a quadrant on one side and a multiplicative functional of a Macdonald measure on the other. The identity is used to prove the GUE Tracy-Widom asymptotics for two instances of the stochastic six vertex model via asymptotic analysis of the corresponding Schur measures.
Computational stochastic model of ions implantation
Energy Technology Data Exchange (ETDEWEB)
Zmievskaya, Galina I., E-mail: zmi@gmail.ru; Bondareva, Anna L., E-mail: bal310775@yandex.ru [M.V. Keldysh Institute of Applied Mathematics RAS, 4,Miusskaya sq., 125047 Moscow (Russian Federation); Levchenko, Tatiana V., E-mail: tatlevchenko@mail.ru [VNII Geosystem Russian Federal Center, Varshavskoye roadway, 8, Moscow (Russian Federation); Maino, Giuseppe, E-mail: giuseppe.maino@enea.it [Scuola di Lettere e BeniCulturali, University di Bologna, sede di Ravenna, via Mariani 5, 48100 Ravenna (Italy)
2015-03-10
Implantation of an ion flux into a crystal leads to a phase transition (PT) of the first kind. Lattice damage is associated with the clustering of vacancies and gaseous bubbles, as well as with their Brownian motion. A system of Itô stochastic differential equations (SDEs) for the evolution of the stochastic dynamical variables corresponds to a superposition of Wiener processes. Kinetic equations (KEs) in partial derivatives, of Kolmogorov-Feller and Einstein-Smoluchowski type, were formulated for the nucleation of weakly soluble gases in the lattice. According to the theory, the coefficients of the stochastic and kinetic equations are uniquely related. Radiation-stimulated phase transitions are characterized by kinetic distribution functions (DFs) of the implanted clusters versus their sizes and the depth of gas penetration into the lattice. As an example, macroscopic kinetic parameters such as the porosity and stress are calculated in thin metal/dielectric layers under Xe++ irradiation. Predictions of porosity, important for validating the accumulation of stresses in surfaces, can be applied in the restoration of cultural heritage objects.
SURFplus Model Calibration for PBX 9502
Energy Technology Data Exchange (ETDEWEB)
Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-12-06
The SURFplus reactive burn model is calibrated for the TATB based explosive PBX 9502 at three initial temperatures: hot (75 C), ambient (23 C) and cold (-55 C). The CJ state depends on the initial temperature due to the variation in the initial density and initial specific energy of the PBX reactants. For the reactants, a porosity model for full density TATB is used. This allows the initial PBX density to be set to its measured value even though the coefficients of thermal expansion for the TATB and the PBX differ. The PBX products EOS is taken as independent of the initial PBX state. The initial temperature also affects the sensitivity to shock initiation. The model rate parameters are calibrated to Pop plot data, the failure diameter, the limiting detonation speed just above the failure diameter, and curvature effect data for small curvature.
Grid based calibration of SWAT hydrological models
Directory of Open Access Journals (Sweden)
D. Gorgan
2012-07-01
Full Text Available The calibration and execution of large hydrological models such as SWAT (Soil and Water Assessment Tool), developed for large areas, high resolution and huge input data, require not only long execution times but also substantial computational resources. The SWAT hydrological model supports studies and predictions of the impact of land management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web-based solution enabling environmental specialists to calibrate extensive hydrological models and to run scenarios, by hiding the complex control of processes and heterogeneous resources across the grid-based high-computation infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation is concerned with the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models and the obtained results demonstrate the benefits brought by the grid parallel and distributed environment as a processing platform. All the instances of SWAT models used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea catchment area.
Predicting the Stochastic Properties of the Shallow Subsurface for Improved Geophysical Modeling
Stroujkova, A.; Vynne, J.; Bonner, J.; Lewkowicz, J.
2005-12-01
Strong ground motion data from numerous explosive field experiments and from moderate to large earthquakes show significant variations in amplitude and waveform shape with respect to both azimuth and range. Attempts to model these variations using deterministic models have often been unsuccessful. It has been hypothesized that a stochastic description of the geological medium is a more realistic approach. To estimate the stochastic properties of the shallow subsurface, we use Measurement While Drilling (MWD) data, which are routinely collected by mines in order to facilitate design of blast patterns. The parameters, such as rotation speed of the drill, torque, and penetration rate, are used to compute the rock's Specific Energy (SE), which is then related to a blastability index. We use values of SE measured at two different mines and calibrated to laboratory measurements of rock properties to determine correlation lengths of the subsurface rocks in 2D, needed to obtain 2D and 3D stochastic models. The stochastic models are then combined with the deterministic models and used to compute synthetic seismic waveforms.
International Nuclear Information System (INIS)
Schoefs, Franck; Chevreuil, Mathilde; Pasqualini, Olivier; Cazuguel, Mikaël
2016-01-01
Welded joints are used in various structures and infrastructures such as bridges, ships and offshore structures, and are submitted to cyclic stresses. Their fatigue behaviour is a key industrial issue and still offers original research subjects. One of the available methods relies on computing the stress concentration factor. Although some previous studies have evaluated this factor for particular cases of welded structures, the shape of the weld joint is generally idealized through a deterministic parametric geometry. Previous experimental works have shown, however, that this shape plays a key role in lifetime assessment. We propose in this paper a methodology for computing the stress concentration factor in the presence of random geometries of welded joints. To make the results usable by engineers, this method merges stochastic computation and semi-probabilistic analysis by computing partial safety factors with a dedicated method. - Highlights: • Numerical computation of the stress concentration factor with random weld geometry. • Real data are used for probabilistic modelling. • Identification of partial safety factors from SFEM computation in the case of random geometries.
High Accuracy Transistor Compact Model Calibrations
Energy Technology Data Exchange (ETDEWEB)
Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. Models of this type describe expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements demand modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to accurately describe a single circuit instance. Models that meet these stipulations thus also enable the quantification of margins with respect to a functional threshold, and of uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
Composed particle model in stochastic electrodynamics
International Nuclear Information System (INIS)
Brunini, S.A.
1985-01-01
We analyse the statistical properties of the non-relativistic motion of a particle composed of two constituents having finite masses and charges. The main interaction is the contact with the thermal and zero-point radiation of stochastic electrodynamics. (M.W.O.) [pt
Gradient-based model calibration with proxy-model assistance
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivative calculation would otherwise have impeded such access. Construction of a proxy model, its subsequent use in calibration of a complex model, and the analysis of the uncertainties of predictions made by that model are implemented in the PEST suite.
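The split described above — a cheap proxy fills the Jacobian while the expensive model only evaluates trial upgrades — can be sketched with toy functions; `proxy` and `full_model` below are hypothetical stand-ins, not PEST internals.

```python
import numpy as np

# Hypothetical cheap surrogate linking outputs to parameters.
def proxy(p):
    return np.array([p[0] + 0.5 * p[1], p[0] * p[1]])

# Pretend this is the slow simulator (here just proxy plus a perturbation).
def full_model(p):
    return proxy(p) + 0.01 * np.sin(p)

def proxy_jacobian(p, eps=1e-6):
    """Finite-difference Jacobian populated from the PROXY, not the full model."""
    base = proxy(p)
    J = np.empty((base.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += eps
        J[:, j] = (proxy(dp) - base) / eps
    return J

p = np.array([1.0, 3.0])
obs = np.array([2.6, 3.1])
J = proxy_jacobian(p)                                   # cheap gradients
residual = obs - full_model(p)                          # full model run
step = np.linalg.lstsq(J, residual, rcond=None)[0]      # Gauss-Newton upgrade
p_new = p + step                                        # tested with full model
```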
Directory of Open Access Journals (Sweden)
Elston Timothy C
2004-03-01
Full Text Available Abstract Background Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
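The Gillespie algorithm mentioned above can be illustrated on the simplest gene-expression model: mRNA produced at rate k and degraded at rate g per molecule. This is a generic direct-method sketch, not BioNetS code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gillespie direct method for a birth-death mRNA model:
#   production at rate k, degradation at rate g * m.
def gillespie_birth_death(k=10.0, g=1.0, m0=0, t_end=50.0):
    t, m = 0.0, m0
    times, counts = [t], [m]
    while t < t_end:
        rates = np.array([k, g * m])
        total = rates.sum()
        t += rng.exponential(1.0 / total)        # waiting time to next event
        if rng.random() < rates[0] / total:      # pick reaction by propensity
            m += 1
        else:
            m -= 1
        times.append(t)
        counts.append(m)
    return np.array(times), np.array(counts)

times, counts = gillespie_birth_death()
# The stationary mean should fluctuate around k/g = 10.
```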
Multivariate moment closure techniques for stochastic kinetic models
International Nuclear Information System (INIS)
Lakatos, Eszter; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H.
2015-01-01
Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, an interplay arises between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closures and illustrate their use in the context of two models that have proved challenging for previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.
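The moment-closure idea can be shown on a one-species toy model (not the paper's systems): births at rate b*n and quadratic deaths at rate g*n*(n-1). The second-moment ODE involves the third moment m3, which a Gaussian closure replaces by 3*m1*m2 - 2*m1**3.

```python
import numpy as np

# Gaussian (normal) moment closure for the jump process
#   n -> n+1 at rate b*n,   n -> n-1 at rate g*n*(n-1).
# Exact moment equations:
#   dm1/dt = b*m1 - g*(m2 - m1)
#   dm2/dt = b*(2*m2 + m1) + g*(-2*m3 + 3*m2 - m1)   <- needs m3
def closed_moments(b=1.0, g=0.01, m1=5.0, m2=30.0, dt=1e-3, n_steps=20000):
    for _ in range(n_steps):
        m3 = 3.0 * m1 * m2 - 2.0 * m1 ** 3          # Gaussian closure
        dm1 = b * m1 - g * (m2 - m1)
        dm2 = b * (2.0 * m2 + m1) + g * (-2.0 * m3 + 3.0 * m2 - m1)
        m1 += dt * dm1
        m2 += dt * dm2
    return m1, m2

mean, second = closed_moments()
var = second - mean ** 2
# For these rates the closed system settles near mean ~ b/g = 100.
```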
Stochastic bifurcation in a model of love with colored noise
Yue, Xiaokui; Dai, Honghua; Yuan, Jianping
2015-07-01
In this paper, we examine the stochastic bifurcation induced by multiplicative Gaussian colored noise in a dynamical model of love, where the random factor is used to describe the complexity and unpredictability of psychological systems. First, the dynamics of the deterministic love-triangle model are considered briefly, including equilibrium points and their stability, chaotic behaviors and chaotic attractors. Then, the influences of Gaussian colored noise with different parameters are explored, in terms of phase plots, top Lyapunov exponents, the stationary probability density function (PDF) and stochastic bifurcation. The stochastic P-bifurcation, seen through a qualitative change of the stationary PDF, is observed, and a bifurcation diagram on the parameter plane of correlation time and noise intensity is presented to examine the bifurcation behaviors in detail. Finally, the top Lyapunov exponent is computed to determine the D-bifurcation when the noise intensity reaches a critical value. By comparison, we find there is no connection between the two kinds of stochastic bifurcation.
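The top Lyapunov exponent used for D-bifurcation detection can be estimated by evolving the tangent (linearized) dynamics along a noisy trajectory and time-averaging the log stretching. The 1-D bistable system below with multiplicative white noise is a toy stand-in, not the paper's love-triangle equations or its colored noise.

```python
import numpy as np

rng = np.random.default_rng(6)

# dx = (a*x + b*x^3) dt + x dW  (multiplicative noise),
# tangent stretch per Euler step: 1 + f'(x)*dt + dW.
def top_lyapunov(a=1.0, b=-1.0, sigma=0.4, dt=1e-3, n=100_000):
    x, acc = 0.1, 0.0
    for _ in range(n):
        noise = sigma * np.sqrt(dt) * rng.normal()
        growth = 1.0 + (a + 3.0 * b * x ** 2) * dt + noise
        acc += np.log(abs(growth))               # accumulate log stretching
        x += (a * x + b * x ** 3) * dt + x * noise
    return acc / (n * dt)

lam = top_lyapunov()
# Near the stable equilibria x = +/-1 the exponent should be negative.
```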
A complementarity model for solving stochastic natural gas market equilibria
International Nuclear Information System (INIS)
Zhuang Jifang; Gabriel, Steven A.
2008-01-01
This paper presents a stochastic equilibrium model for deregulated natural gas markets. Each market participant (pipeline operators, producers, etc.) solves a stochastic optimization problem whose optimality conditions, when combined with market-clearing conditions, give rise to a certain mixed complementarity problem (MiCP). The stochastic aspects are depicted by a recourse problem for each player in which the first-stage decisions relate to long-term contracts and the second-stage decisions relate to spot market activities for three seasons. Besides showing that such a market model is an instance of a MiCP, we provide theoretical results concerning long-term and spot market prices and solve the resulting MiCP for a small yet representative market. We also note an interesting observation on the value of the stochastic solution for non-optimization problems.
Tsunamis: stochastic models of occurrence and generation mechanisms
Geist, Eric L.; Oglesby, David D.
2014-01-01
The devastating consequences of the 2004 Indian Ocean and 2011 Japan tsunamis have led to increased research into many different aspects of the tsunami phenomenon. In this entry, we review research related to the observed complexity and uncertainty associated with tsunami generation, propagation, and occurrence described and analyzed using a variety of stochastic methods. In each case, seismogenic tsunamis are primarily considered. Stochastic models are developed from the physical theories that govern tsunami evolution combined with empirical models fitted to seismic and tsunami observations, as well as tsunami catalogs. These stochastic methods are key to providing probabilistic forecasts and hazard assessments for tsunamis. The stochastic methods described here are similar to those described for earthquakes (Vere-Jones 2013) and volcanoes (Bebbington 2013) in this encyclopedia.
Electroweak Calibration of the Higgs Characterization Model
CERN. Geneva
2015-01-01
I will present the preliminary results of histogram fits using the Higgs Combine histogram fitting package. These fits can be used to estimate the effects of electroweak contributions to the p p -> H mu+ mu- Higgs production channel and calibrate Beyond Standard Model (BSM) simulations which ignore these effects. I will emphasize my findings' significance in the context of other research here at CERN and in the broader world of high energy physics.
Parameter estimation in stochastic rainfall-runoff models
DEFF Research Database (Denmark)
Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur
2006-01-01
A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all...... the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature and one output data series...
Analysis of dynamic regimes in stochastically forced Kaldor model
International Nuclear Information System (INIS)
Bashkirtseva, Irina; Ryazanova, Tatyana; Ryashko, Lev
2015-01-01
We consider the business cycle Kaldor model forced by random noise. Detailed parametric analysis of deterministic system is carried out and zones of coexisting stable equilibrium and stable limit cycle are found. Noise-induced transitions between these attractors are studied using stochastic sensitivity function technique and confidence domains method. Critical values of noise intensity corresponding to noise-induced transitions “equilibrium → cycle” and “cycle → equilibrium” are estimated. Dominants in combined stochastic regimes are discussed.
A stochastic analysis for a phytoplankton-zooplankton model
International Nuclear Information System (INIS)
Ge, G; Wang, H-L; Xu, J
2008-01-01
A simple phytoplankton-zooplankton nonlinear dynamical model was proposed to study the coexistence of all the species, and a Hopf bifurcation was observed. In order to study the effect of environmental fluctuations on this system, we have stochastically perturbed the system with white noise around its positive interior equilibrium. We have observed that the system remains stochastically stable around the positive equilibrium for the same parametric values as in the deterministic situation.
Ideas for fast accelerator model calibration
International Nuclear Information System (INIS)
Corbett, J.
1997-05-01
With the advent of a simple matrix inversion technique, measurement-based storage ring modeling has made rapid progress in recent years. Using fast computers with large memory, the matrix inversion procedure typically adjusts up to 10^3 model variables to fit on the order of 10^5 measurements. The results have been surprisingly accurate. Physics aside, one of the next frontiers is to simplify the process and to reduce computation time. In this paper, the authors discuss two approaches to speed up the model calibration process: recursive least-squares fitting and a piecewise fitting approach.
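Recursive least-squares, the first speed-up mentioned above, updates the parameter estimate with each new measurement row instead of re-solving the whole inverse problem; here is a minimal generic sketch on synthetic data (not accelerator code).

```python
import numpy as np

# One RLS step: measurement row a with observation y refines estimate x
# and its information matrix inverse P without refactorizing anything.
def rls_update(x, P, a, y):
    Pa = P @ a
    k = Pa / (1.0 + a @ Pa)          # gain vector
    x = x + k * (y - a @ x)          # innovation-weighted correction
    P = P - np.outer(k, Pa)          # covariance downdate
    return x, P

rng = np.random.default_rng(2)
true_x = np.array([2.0, -1.0, 0.5])  # hypothetical "model variables"
x = np.zeros(3)
P = 1e6 * np.eye(3)                  # large initial covariance = weak prior
for _ in range(200):
    a = rng.normal(size=3)           # synthetic measurement row
    y = a @ true_x                   # noiseless observation
    x, P = rls_update(x, P, a, y)
# x now matches true_x to high precision.
```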
Analysis of stochastic effects in Kaldor-type business cycle discrete model
Bashkirtseva, Irina; Ryashko, Lev; Sysolyatina, Anna
2016-07-01
We study nonlinear stochastic phenomena in the discrete Kaldor model of business cycles. A numerical parametric analysis of stochastically forced attractors (equilibria, closed invariant curves, discrete cycles) of this model is performed using the stochastic sensitivity functions technique. The spatial arrangement of random states in stochastic attractors is modeled by confidence domains. The phenomenon of noise-induced “chaos-order” transitions is discussed.
The multivariate supOU stochastic volatility model
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole; Stelzer, Robert
Using positive semidefinite supOU (superposition of Ornstein-Uhlenbeck type) processes to describe the volatility, we introduce a multivariate stochastic volatility model for financial data which is capable of modelling long range dependence effects. The finiteness of moments and the second order...... structure of the volatility, the log returns, as well as their "squares" are discussed in detail. Moreover, we give several examples in which long memory effects occur and study how the model as well as the simple Ornstein-Uhlenbeck type stochastic volatility model behave under linear transformations....... In particular, the models are shown to be preserved under invertible linear transformations. Finally, we discuss how (sup)OU stochastic volatility models can be combined with a factor modelling approach....
Model calibration for building energy efficiency simulation
International Nuclear Information System (INIS)
Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus
2014-01-01
Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, with the final model providing accurate results. • Using an onsite weather station to generate the weather data file in EnergyPlus. • Predicting the thermal behaviour of underfloor heating, a heat pump and natural ventilation. • Monthly energy-saving opportunities of 20–27% related to the heat pump were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building areas. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, a two-level calibration methodology was applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of Mean Bias Error (MBE) and Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) on an hourly basis for heat pump electricity consumption varied within the following ranges: MBE (hourly) from −5.6% to 7.5% and CV(RMSE) (hourly) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings supplied by a water-to-water heat pump to the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis.
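The MBE and CV(RMSE) statistics quoted above are simple to compute from paired measured/simulated series; the definitions below follow the usual ASHRAE-style conventions, and the four data points are made up for illustration.

```python
import numpy as np

def mbe_percent(measured, simulated):
    """Mean Bias Error (%): signed total deviation relative to total measured."""
    m = np.asarray(measured, float)
    s = np.asarray(simulated, float)
    return 100.0 * (m - s).sum() / m.sum()

def cv_rmse_percent(measured, simulated):
    """CV(RMSE) (%): root-mean-square error relative to the measured mean."""
    m = np.asarray(measured, float)
    s = np.asarray(simulated, float)
    rmse = np.sqrt(((m - s) ** 2).mean())
    return 100.0 * rmse / m.mean()

measured = [10.0, 12.0, 11.0, 9.0]     # e.g. hourly heat pump kWh (made up)
simulated = [9.5, 12.5, 10.0, 9.5]
print(mbe_percent(measured, simulated), cv_rmse_percent(measured, simulated))
```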
Stochastic line motion and stochastic flux conservation for nonideal hydromagnetic models
International Nuclear Information System (INIS)
Eyink, Gregory L.
2009-01-01
We prove that smooth solutions of nonideal (viscous and resistive) incompressible magnetohydrodynamic (MHD) equations satisfy a stochastic law of flux conservation. This property implies that the magnetic flux through a surface is equal to the average of the magnetic fluxes through an ensemble of surfaces advected backward in time by the plasma velocity perturbed with a random white noise. Our result is an analog of the well-known Alfven theorem of ideal MHD and is valid for any value of the magnetic Prandtl number. A second stochastic conservation law is shown to hold at unit Prandtl number, a random version of the generalized Kelvin theorem derived by Bekenstein and Oron for ideal MHD. These stochastic conservation laws are not only shown to be consequences of the nonideal MHD equations but are proved in fact to be equivalent to those equations. We derive similar results for two more refined hydromagnetic models, Hall MHD and the two-fluid plasma model, still assuming incompressible velocities and isotropic transport coefficients. Finally, we use these results to discuss briefly the infinite-Reynolds-number limit of hydromagnetic turbulence and to support the conjecture that flux conservation remains stochastic in that limit.
International Nuclear Information System (INIS)
Cornic, Philippe; Le Besnerais, Guy; Champagnat, Frédéric; Illoul, Cédric; Cheminet, Adam; Le Sant, Yves; Leclaire, Benjamin
2016-01-01
We address calibration and self-calibration of tomographic PIV experiments within a pinhole camera model. A complete and explicit pinhole model of a camera equipped with a two-tilt-angle Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air jet experiment. An unexpected by-product of our work is to show that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible but the absolute location cannot be accurately recovered using standard calibration data. (paper)
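The pinhole model underlying such calibrations maps a world point to pixels through extrinsics (R, t) and intrinsics K; the Scheimpflug tilt terms of the paper are omitted here, and all numbers are arbitrary illustrations.

```python
import numpy as np

def project(K, R, t, X_world):
    """Bare-bones pinhole projection: world point -> pixel coordinates."""
    Xc = R @ X_world + t          # world frame -> camera frame (extrinsics)
    x = K @ Xc                    # apply intrinsics
    return x[:2] / x[2]           # perspective division

# Hypothetical intrinsics: focal 1200 px, principal point (640, 480).
K = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 480.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                     # camera aligned with world axes
t = np.array([0.0, 0.0, 2.0])     # scene 2 m in front of the camera

uv = project(K, R, t, np.array([0.1, -0.05, 0.0]))
print(uv)  # [700. 450.]
```

Calibration fits K, R, t (and distortion/tilt terms) so that projections of known plate targets match their detected image positions.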
Stochastic models to simulate paratuberculosis in dairy herds
DEFF Research Database (Denmark)
Nielsen, Søren Saxmose; Weber, M.F.; Kudahl, Anne Margrethe Braad
2011-01-01
Stochastic simulation models are widely accepted as a means of assessing the impact of changes in daily management and the control of different diseases, such as paratuberculosis, in dairy herds. This paper summarises and discusses the assumptions of four stochastic simulation models and their use...... the models are somewhat different in their underlying principles and do put slightly different values on the different strategies, their overall findings are similar. Therefore, simulation models may be useful in planning paratuberculosis strategies in dairy herds, although as with all models caution...
Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations
Christensen, H. M.; Dawson, A.; Palmer, T.
2017-12-01
Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
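The multiplicative structure of SPPT discussed above can be sketched in one line: the parametrised tendency is scaled by (1 + r) with r a smoothly evolving random pattern. The AR(1) scalar below stands in for the real scheme's spatially correlated field with vertical tapering; parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# SPPT-style multiplicative perturbation of a total parametrised tendency T:
#   T_perturbed = T * (1 + r),  r an AR(1) process with std sigma.
def sppt_step(tendency, r_prev, phi=0.95, sigma=0.3):
    r = phi * r_prev + np.sqrt(1.0 - phi ** 2) * sigma * rng.normal()
    return tendency * (1.0 + r), r

tend, r = 1.0, 0.0
for _ in range(10):                  # evolve the pattern over 10 timesteps
    perturbed, r = sppt_step(tend, r)
```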
Deterministic and stochastic CTMC models from Zika disease transmission
Zevika, Mona; Soewono, Edy
2018-03-01
Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
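The extinction probability that the CTMC formulation yields can be illustrated with a branching-process caricature of early spread: each infective transmits at rate beta and recovers at rate gamma, and theory gives extinction probability (gamma/beta)**i0 when beta > gamma. This toy check is not the paper's full Zika model; the rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Embedded jump chain of the linear birth-death CTMC for infectives:
# next event is a transmission w.p. beta/(beta+gamma), else a recovery.
def goes_extinct(beta, gamma, i0, i_max=200):
    i = i0
    while 0 < i < i_max:             # i_max proxies "takes off"
        if rng.random() < beta / (beta + gamma):
            i += 1
        else:
            i -= 1
    return i == 0

beta, gamma, i0 = 2.0, 1.0, 2
runs = 4000
p_hat = sum(goes_extinct(beta, gamma, i0) for _ in range(runs)) / runs
p_theory = (gamma / beta) ** i0      # 0.25 for these rates
```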
Stochastic model of forecasting spare parts demand
Directory of Open Access Journals (Sweden)
Ivan S. Milojević
2012-01-01
Full Text Available If demand is known for the whole planning period (complete information), then this type of demand, or supply system, is deterministic. In the simplest cases, the demand per time unit is constant. If demand levels change over time following a precisely determined and pre-known principle, this type of demand is also classified as deterministic. Such demand is very rare, however. In most cases demand is the product of a process, for example TMS maintenance, whose progression cannot be predicted due to the number of factors influencing the process and causing random demand changes. In this case, a supply system must function under incomplete information and with a certain degree of uncertainty. In cases when demand may be defined by some of the laws of probability theory, we are talking about stochastic demand and a stochastic supply system. Demand can be described by its mathematical expectation, by mathematical expectation and standard deviation, by a probability distribution, or as a random process. However, there is usually a need for the most complex description, i.e. a complex random process, because both the intensity of demand and its probability distribution change during the observed intervals. A time (dynamic) series is traditionally considered a complex phenomenon consisting of four components: the basic tendency of the phenomenon's development, cyclical (long-term) impacts, seasonal effects, and random fluctuations. The basic tendency of development means the long-term evolution of the phenomenon. A function that expresses the trajectory of changes of this basic tendency in the form of an equation is called a trend. Often the trend involves regression on time, i.e. the coefficients of the proposed functions are determined by the least squares method. To roughly determine the coefficients of the proposed function, the sum-of-three and three-point methods are also used. After checking the
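Fitting the trend (the basic tendency) by least squares, as described above, amounts to regressing the series on time; the demand numbers below are made up for illustration.

```python
import numpy as np

# Hypothetical spare-parts demand observed over 8 periods.
demand = np.array([12.0, 14.0, 13.0, 16.0, 18.0, 17.0, 20.0, 22.0])
t = np.arange(len(demand), dtype=float)

# Ordinary least squares for a linear trend  demand ~ intercept + slope * t.
A = np.vstack([t, np.ones_like(t)]).T        # design matrix [t, 1]
slope, intercept = np.linalg.lstsq(A, demand, rcond=None)[0]

trend = intercept + slope * t                # fitted basic tendency
residuals = demand - trend                   # seasonal + random remainder
```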
Electricity Market Stochastic Dynamic Model and Its Mean Stability Analysis
Directory of Open Access Journals (Sweden)
Zhanhui Lu
2014-01-01
Full Text Available Based on the deterministic dynamic model of the electricity market proposed by Alvarado, a stochastic electricity market model that considers the random nature of the demand side is presented in this paper, on the assumption that the generator cost function and the consumer utility function are quadratic. The stochastic electricity market model is a generalization of the deterministic dynamic model. Using the theory of stochastic differential equations, stochastic process theory, and eigenvalue techniques, conditions determining the mean stability of this electricity market model under small Gaussian random excitation are derived and proved theoretically: if the demand elasticity of suppliers is nonnegative and the demand elasticity of consumers is negative, then the stochastic electricity market model is mean stable. This implies that stability can be judged directly from the initial data without any computation. Taking deterministic electricity market data combined with small Gaussian random excitation as numerical samples to interpret the random phenomena from a statistical perspective, the results indicate that the conclusions above are correct, valid, and practical.
Dynamic stochastic accumulation model with application to pension savings management
Directory of Open Access Journals (Sweden)
Melicherčik Igor
2010-01-01
Full Text Available We propose a dynamic stochastic accumulation model for determining optimal decision between stock and bond investments during accumulation of pension savings. Stock prices are assumed to be driven by the geometric Brownian motion. Interest rates are modeled by means of the Cox-Ingersoll-Ross model. The optimal decision as a solution to the corresponding dynamic stochastic program is a function of the duration of saving, the level of savings and the short rate. Qualitative and quantitative properties of the optimal solution are analyzed. The model is tested on the funded pillar of the Slovak pension system. The results are calculated for various risk preferences of a saver.
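As an illustrative sketch (not the authors' calibrated model), the two driving processes named in this abstract can be simulated jointly: a geometric Brownian motion for the stock price and a Cox-Ingersoll-Ross short rate. All parameter values below are hypothetical placeholders.

```python
import numpy as np

def simulate_paths(T=30.0, n=360, s0=100.0, mu=0.07, sigma=0.2,
                   r0=0.03, kappa=0.5, theta=0.04, xi=0.1, seed=0):
    """Simulate a GBM stock price and a CIR short rate on a common time grid.

    Parameter values are illustrative only; the paper calibrates its own.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    s = np.empty(n + 1); r = np.empty(n + 1)
    s[0], r[0] = s0, r0
    for k in range(n):
        zs, zr = rng.standard_normal(2)
        # Exact GBM step: S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
        s[k + 1] = s[k] * np.exp((mu - 0.5 * sigma**2) * dt
                                 + sigma * np.sqrt(dt) * zs)
        # Euler step for CIR with full truncation: dr = kappa(theta - r) dt + xi sqrt(r) dW
        rp = max(r[k], 0.0)
        r[k + 1] = r[k] + kappa * (theta - rp) * dt + xi * np.sqrt(rp * dt) * zr
    return s, r

s, r = simulate_paths()
print(len(s), len(r))
```

The optimal stock/bond split in the paper is then a function of such simulated states (savings level and short rate) via dynamic stochastic programming, which is not reproduced here.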
Methods and models in mathematical biology deterministic and stochastic approaches
Müller, Johannes
2015-01-01
This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.
Stochastic description of heterogeneities of permeability within groundwater flow models
International Nuclear Information System (INIS)
Cacas, M.C.; Lachassagne, P.; Ledoux, E.; Marsily, G. de
1991-01-01
In order to model radionuclide migration in the geosphere realistically at the field scale, the hydrogeologist needs to be able to simulate groundwater flow in heterogeneous media. Heterogeneity of the medium can be described using a stochastic approach, that affects the way in which a flow model is formulated. In this paper, we discuss the problems that we have encountered in modelling both continuous and fractured media. The stochastic approach leads to a methodology that enables local measurements of permeability to be integrated into a model which gives a good prediction of groundwater flow on a regional scale. 5 Figs.; 8 Refs
INFERENCE AND SENSITIVITY IN STOCHASTIC WIND POWER FORECAST MODELS.
Elkantassi, Soumaya; Kalligiannaki, Evangelia; Tempone, Raul
2017-01-01
Reliable forecasting of wind power generation is crucial to optimal control of costs in generation of electricity with respect to the electricity demand. Here, we propose and analyze stochastic wind power forecast models described by parametrized stochastic differential equations, which introduce appropriate fluctuations in numerical forecast outputs. We use an approximate maximum likelihood method to infer the model parameters taking into account the time correlated sets of data. Furthermore, we study the validity and sensitivity of the parameters for each model. We applied our models to Uruguayan wind power production as determined by historical data and corresponding numerical forecasts for the period of March 1 to May 31, 2016.
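The abstract does not specify the SDE family, so as a stand-in the maximum likelihood idea can be sketched for an Ornstein-Uhlenbeck process, whose equally spaced observations form an AR(1) with a closed-form MLE. The parameter values and the OU choice are assumptions for illustration only.

```python
import numpy as np

def ou_exact_mle(x, dt):
    """Closed-form MLE for an OU process dX = theta*(mu - X) dt + sigma dW,
    using the exact AR(1) representation of equally spaced observations."""
    x0, x1 = x[:-1], x[1:]
    n = len(x0)
    # Regress x1 on x0: x1 = a*x0 + b + eps
    a = (n * (x0 * x1).sum() - x0.sum() * x1.sum()) / (n * (x0**2).sum() - x0.sum()**2)
    b = (x1.sum() - a * x0.sum()) / n
    theta = -np.log(a) / dt
    mu = b / (1 - a)
    resid = x1 - a * x0 - b
    sigma = np.sqrt(resid.var() * 2 * theta / (1 - a**2))
    return theta, mu, sigma

# Synthetic check: simulate an OU path with known parameters and recover them.
rng = np.random.default_rng(1)
dt, n = 0.01, 200_000
theta_true, mu_true, sig_true = 2.0, 1.0, 0.5
a = np.exp(-theta_true * dt)
sd = sig_true * np.sqrt((1 - a**2) / (2 * theta_true))
x = np.empty(n); x[0] = mu_true
for k in range(n - 1):
    x[k + 1] = mu_true + a * (x[k] - mu_true) + sd * rng.standard_normal()
theta_hat, mu_hat, sig_hat = ou_exact_mle(x, dt)
print(theta_hat, mu_hat, sig_hat)
```

For the SDEs actually used in the paper, the transition density is not available in closed form, hence their approximate MLE.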
Stochastic models of the Social Security trust funds.
Burdick, Clark; Manchester, Joyce
Each year in March, the Board of Trustees of the Social Security trust funds reports on the current and projected financial condition of the Social Security programs. Those programs, which pay monthly benefits to retired workers and their families, to the survivors of deceased workers, and to disabled workers and their families, are financed through the Old-Age, Survivors, and Disability Insurance (OASDI) Trust Funds. In their 2003 report, the Trustees present, for the first time, results from a stochastic model of the combined OASDI trust funds. Stochastic modeling is an important new tool for Social Security policy analysis and offers the promise of valuable new insights into the financial status of the OASDI trust funds and the effects of policy changes. The results presented in this article demonstrate that several stochastic models deliver broadly consistent results even though they use very different approaches and assumptions. However, they also show that the variation in trust fund outcomes differs as the approach and assumptions are varied. Which approach and assumptions are best suited for Social Security policy analysis remains an open question. Further research is needed before the promise of stochastic modeling is fully realized. For example, neither parameter uncertainty nor variability in ultimate assumption values is recognized explicitly in the analyses. Despite this caveat, stochastic modeling results are already shedding new light on the range and distribution of trust fund outcomes that might occur in the future.
Seepage Calibration Model and Seepage Testing Data
International Nuclear Information System (INIS)
Dixon, P.
2004-01-01
The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M and O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty
Characterizing economic trends by Bayesian stochastic model specification search
DEFF Research Database (Denmark)
Grassi, Stefano; Proietti, Tommaso
We extend a recently proposed Bayesian model selection technique, known as stochastic model specification search, for characterising the nature of the trend in macroeconomic time series. In particular, we focus on autoregressive models with possibly time-varying intercept and slope and decide on ...
Haberlandt, Uwe; Wallner, Markus; Radtke, Imke
2013-04-01
Derived flood frequency analysis based on continuous hydrological modelling is very demanding regarding the required length and temporal resolution of precipitation input data. Often such flood predictions are obtained using long precipitation time series from stochastic approaches or from regional climate models as input. However, the calibration of the hydrological model is usually done using short time series of observed data. This inconsistent employment of different data types for calibration and application of a hydrological model increases its uncertainty. Here, it is proposed to calibrate a hydrological model directly on probability distributions of observed peak flows using model based rainfall in line with its later application. Two examples are given to illustrate the idea. The first one deals with classical derived flood frequency analysis using input data from an hourly stochastic rainfall model. The second one concerns a climate impact analysis using hourly precipitation from a regional climate model. The results show that: (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated on extreme conditions works quite well for average conditions but not vice versa, (III) the calibration of the hydrological model using regional climate model data works as an implicit bias correction method and (IV) the best performance for flood estimation is usually obtained when model based precipitation and observed probability distribution of peak flows are used for model calibration.
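The idea of calibrating directly on the probability distribution of peak flows can be sketched with a deliberately toy rainfall-to-peak-flow model and a quantile-matching objective; the model form, parameter, and data below are hypothetical, not the HBV-type setup of the study.

```python
import numpy as np

def toy_model(rainfall_max, k):
    """Toy rainfall-to-peak-flow relation Q = k * P^1.2 (placeholder, not HBV)."""
    return k * rainfall_max ** 1.2

def quantile_objective(k, rain, q_obs):
    """Distance between the sorted simulated peaks and the observed empirical
    distribution of peak flows (i.e., calibration on the distribution itself)."""
    q_sim = np.sort(toy_model(rain, k))
    return np.mean((q_sim - np.sort(q_obs)) ** 2)

rng = np.random.default_rng(7)
rain = rng.gumbel(40.0, 10.0, size=200)              # stochastic annual-max rainfall
q_obs = 0.8 * rain ** 1.2 + rng.normal(0, 2.0, 200)  # synthetic "observed" peaks
ks = np.linspace(0.1, 2.0, 191)
best = ks[np.argmin([quantile_objective(k, rain, q_obs) for k in ks])]
print(best)
```

Because only sorted values are compared, the simulated rainfall need not be paired event-by-event with the observed floods, which is the point of calibrating on distributions rather than on time series.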
Introduction to modeling and analysis of stochastic systems
Kulkarni, V G
2011-01-01
This is an introductory-level text on stochastic modeling. It is suited for undergraduate students in engineering, operations research, statistics, mathematics, actuarial science, business management, computer science, and public policy. It employs a large number of examples to teach the students to use stochastic models of real-life systems to predict their performance, and use this analysis to design better systems. The book is devoted to the study of important classes of stochastic processes: discrete and continuous time Markov processes, Poisson processes, renewal and regenerative processes, semi-Markov processes, queueing models, and diffusion processes. The book systematically studies the short-term and the long-term behavior, cost/reward models, and first passage times. All the material is illustrated with many examples, and case studies. The book provides a concise review of probability in the appendix. The book emphasizes numerical answers to the problems. A collection of MATLAB programs to accompany...
Stochastic dynamic modeling of regular and slow earthquakes
Aso, N.; Ando, R.; Ide, S.
2017-12-01
Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even though we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic model. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be considered a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal
Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization
Directory of Open Access Journals (Sweden)
Xuefeng Yan
2013-01-01
Full Text Available The simulation and optimization of an actual physics system are usually constructed based on the stochastic models, which have both qualitative and quantitative characteristics inherently. Most modeling specifications and frameworks find it difficult to describe the qualitative model directly. In order to deal with the expert knowledge, uncertain reasoning, and other qualitative information, a qualitative and quantitative combined modeling specification was proposed based on a hierarchical model structure framework. The new modeling approach is based on a hierarchical model structure which includes the meta-meta model, the meta-model and the high-level model. A description logic system is defined for formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The result shows that the proposed method can describe the complex system more comprehensively, and the survival probability of the target is higher by introducing qualitative models into quantitative simulation.
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
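The batch-processing pattern described here (many independent stochastic realizations farmed out to a worker pool) is language-agnostic; the following is a minimal Python sketch, not the Java Parallel Processing Framework / MODFLOW system of the paper. The random-walk "realization" is a hypothetical stand-in for one external simulator run.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_realization(seed):
    """Stand-in for one stochastic model run (e.g., one MODFLOW realization).
    Here: a Gaussian random walk reporting its final position."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(1000):
        x += rng.gauss(0, 1)
    return x

# Batch 500 independent realizations across a worker pool. Threads suffice
# when each realization launches an external simulator process; CPU-bound
# pure-Python work would instead use a process pool.
with ThreadPoolExecutor(max_workers=8) as pool:
    finals = list(pool.map(run_realization, range(500)))
print(len(finals))
```

Seeding each realization by its index keeps the ensemble reproducible regardless of how the runs are distributed across workers.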
A stochastic model for intermittent search strategies
International Nuclear Information System (INIS)
Benichou, O; Coppey, M; Moreau, M; Suet, P H; Voituriez, R
2005-01-01
It is often necessary, in scientific or everyday life problems, to find a randomly hidden target. What is then the optimal strategy to reach it as rapidly as possible? In this article, we develop a stochastic theory for intermittent search behaviours, which are often observed: the searcher alternates phases of intensive search and slow motion with fast displacements. The first results of this theory have already been announced recently. Here we provide a detailed presentation of the theory, as well as the full derivation of the results. Furthermore, we explicitly discuss the minimization of the time needed to find the target
Deterministic and stochastic models for middle east respiratory syndrome (MERS)
Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning
2018-03-01
World Health Organization (WHO) data state that, since September 2012, there have been 1,733 cases of Middle East Respiratory Syndrome (MERS) with 628 deaths, occurring in 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest MERS outbreak outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly, through contact between an infected and a non-infected individual, or indirectly, through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of free virus in the environment. Mathematical modeling is used to describe the transmission of MERS using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze the steady-state condition. The stochastic approach, using a Continuous Time Markov Chain (CTMC), is used to predict future states by means of random variables. From the models that were built, the threshold values for the deterministic and stochastic models are obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations for both models using several different parameters are shown, and the probability of disease extinction is compared across several initial conditions.
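The extinction-probability idea from a CTMC epidemic model can be sketched with a simplified SIR chain (not the authors' MERS model): simulate the embedded jump chain of a Gillespie-style CTMC and count how often the infection dies out. Parameter values are hypothetical; for a near-branching process, extinction probability is approximately (gamma/beta)^i0.

```python
import random

def sir_ctmc_extinct(beta, gamma, s0, i0, n_runs=2000, seed=0):
    """CTMC SIR model: fraction of runs in which infection dies out before a
    major outbreak (> 50 cumulative cases). Only the embedded jump chain is
    simulated; event times are not needed for extinction probabilities."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(n_runs):
        s, i = s0, i0
        cumulative = i0
        while 0 < i and cumulative <= 50:
            rate_inf = beta * s * i / s0   # mass action, population size ~ s0
            rate_rec = gamma * i
            if rng.random() * (rate_inf + rate_rec) < rate_inf:
                s -= 1; i += 1; cumulative += 1
            else:
                i -= 1
        if i == 0:
            extinct += 1
    return extinct / n_runs

# With beta=2, gamma=1, one initial case: extinction prob ~ gamma/beta = 0.5
p = sir_ctmc_extinct(beta=2.0, gamma=1.0, s0=10_000, i0=1)
print(p)
```

The deterministic counterpart (an ODE system) would predict a guaranteed outbreak whenever beta > gamma, which is exactly the distinction the abstract draws between the two model types.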
Thermodynamically consistent model calibration in chemical kinetics
Directory of Open Access Journals (Sweden)
Goutsias John
2011-05-01
Full Text Available Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new
The critical domain size of stochastic population models.
Reimer, Jody R; Bonsall, Michael B; Maini, Philip K
2017-02-01
Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternate approaches; one is a scaling up to the population level using the Central Limit Theorem, and the other a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
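The Galton-Watson machinery invoked here has a classical computable core: the extinction probability is the smallest fixed point of the offspring probability generating function, found by iterating from zero. The offspring distribution below is a hypothetical example, not taken from the paper.

```python
def gw_extinction_prob(offspring_pmf, tol=1e-12):
    """Extinction probability of a Galton-Watson branching process:
    smallest fixed point q = G(q) of the offspring pgf, by fixed-point
    iteration starting from q = 0."""
    def G(s):
        return sum(p * s**k for k, p in enumerate(offspring_pmf))
    q = 0.0
    while True:
        q_new = G(q)
        if abs(q_new - q) < tol:
            return q_new
        q = q_new

# Supercritical example (mean offspring 1.25): P(0)=0.25, P(1)=0.25, P(2)=0.5.
# Solving q = 0.25 + 0.25 q + 0.5 q^2 gives q = 0.5.
q = gw_extinction_prob([0.25, 0.25, 0.5])
print(q)
```

In the spatial setting of the paper, the offspring distribution itself depends on the domain size through the dispersal stage, so the critical domain size is where the effective mean offspring number crosses one.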
Stochastic fractional differential equations: Modeling, method and analysis
International Nuclear Information System (INIS)
Pedjeu, Jean-C.; Ladde, Gangaram S.
2012-01-01
By introducing a concept of dynamic processes operating under multiple time scales in science and engineering, a mathematical model described by a system of multi-time-scale stochastic differential equations is formulated. The classical Picard–Lindelöf successive approximation scheme is applied to the model validation problem, namely, the existence and uniqueness of the solution process. Naturally, this leads to the problem of finding closed-form solutions of both linear and nonlinear multi-time-scale stochastic differential equations of Itô–Doob type. Finally, to illustrate the scope of the ideas and the presented results, multi-time-scale stochastic models for ecological and epidemiological processes in population dynamics are outlined.
Albert, Carlo; Ulzega, Simone; Stoop, Ruedi
2016-04-01
Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
Stochastic processes analysis in nuclear reactor using ARMA models
International Nuclear Information System (INIS)
Zavaljevski, N.
1990-01-01
An analysis of the ARMA model derived from the general stochastic state equations of a nuclear reactor is given. The dependence of the ARMA model parameters on the main physical characteristics of the RB nuclear reactor in Vinca is presented. Preliminary identification results are presented, observed discrepancies between theory and experiment are explained, and possibilities for improving the identification are anticipated. (author)
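The identification step can be sketched for the pure-AR special case via the Yule-Walker equations on a synthetic noise signal; the AR(2) coefficients below are hypothetical, not reactor data.

```python
import numpy as np

def yule_walker_ar(x, p):
    """Estimate AR(p) coefficients from sample autocovariances (Yule-Walker):
    solve R phi = r, with R the Toeplitz matrix of lags 0..p-1."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([(x[:n - k] * x[k:]).sum() / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

# Synthetic AR(2) "noise signal": x_t = 0.6 x_{t-1} - 0.2 x_{t-2} + e_t
rng = np.random.default_rng(3)
x = np.zeros(100_000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + rng.standard_normal()
phi = yule_walker_ar(x, 2)
print(phi)
```

A full ARMA fit additionally estimates moving-average terms, typically by iterative likelihood maximization rather than a linear solve.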
Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes
DEFF Research Database (Denmark)
Starke, Jens; Reichert, Christian; Eiswirth, Markus
2007-01-01
of stochastic origin can be observed in experiments. The models include a new approach to the platinum phase transition, which allows for a unification of existing models for Pt(100) and Pt(110). The rich nonlinear dynamical behavior of the macroscopic reaction kinetics is investigated and shows good agreement...
Powering stochastic reliability models by discrete event simulation
DEFF Research Database (Denmark)
Kozine, Igor; Wang, Xiaoyun
2012-01-01
it difficult to find a solution to the problem. The power of modern computers and recent developments in discrete-event simulation (DES) software enable to diminish some of the drawbacks of stochastic models. In this paper we describe the insights we have gained based on using both Markov and DES models...
Stochastic Modelling of the Diffusion Coefficient for Concrete
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
In the paper, a new stochastic modelling of the diffusion coefficient D is presented. The modelling is based on a physical understanding of the diffusion process and on some recent experimental results. The diffusion coefficient D is strongly dependent on the w/c ratio and the temperature....
Stochastic models for transport in a fluidized bed
Dehling, H.G; Hoffmann, A.C; Stuut, H.W.
1999-01-01
In this paper we study stochastic models for the transport of particles in a fluidized bed reactor and compute the associated residence time distribution (RTD). Our main model is basically a diffusion process in [0;A] with reflecting/absorbing boundary conditions, modified by allowing jumps to the
Stochastic Robust Mathematical Programming Model for Power System Optimization
Energy Technology Data Exchange (ETDEWEB)
Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
Calibration of hydrological model with programme PEST
Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca
2016-04-01
PEST is a tool based on minimization of an objective function related to the root mean square error between the model output and the measurements. We use the "singular value decomposition" section of the PEST control file and the Tikhonov regularization method for successful estimation of the model parameters. PEST can fail if the inverse problem is ill-posed, but SVD ensures that it maintains numerical stability. The choice of the initial guess for the parameter values is an important issue in PEST and needs expert knowledge. The flexible nature of the PEST software and its ability to be applied to whole catchments at once mean that calibration performs extremely well across a high number of sub-catchments. The parallel computing version of PEST, called BeoPEST, was successfully used to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communication to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff. It is a conceptual balance model of catchment hydrology which simulates discharge using rainfall, temperature and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line. Input and result files are in XML form, which allows it to be easily connected with other applications such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km2), which consists of twenty-one sub-catchments. Data are processed on an hourly basis.
Reflected stochastic differential equation models for constrained animal movement
Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.
2017-01-01
Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
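A minimal sketch of a reflected diffusion (not the authors' latent-path inference scheme): take a free Euler step, then fold the state back across whichever barrier it crossed. The interval and step parameters are hypothetical.

```python
import numpy as np

def reflected_bm_path(n=10_000, dt=0.01, lo=0.0, hi=1.0, x0=0.5,
                      sigma=0.3, seed=4):
    """Euler scheme for reflected Brownian motion on [lo, hi]: take a free
    step, then reflect the endpoint back into the interval."""
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1); x[0] = x0
    for k in range(n):
        y = x[k] + sigma * np.sqrt(dt) * rng.standard_normal()
        # Reflect (possibly repeatedly, for large steps) into [lo, hi]
        while y < lo or y > hi:
            if y < lo: y = 2 * lo - y
            if y > hi: y = 2 * hi - y
        x[k + 1] = y
    return x

x = reflected_bm_path()
print(x.min(), x.max())
```

The paper's inference works in the other direction: given an observed constrained path, it augments with the latent unconstrained path that such a reflection map would have produced.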
Estimation of Stochastic Volatility Models by Nonparametric Filtering
DEFF Research Database (Denmark)
Kanaya, Shin; Kristensen, Dennis
2016-01-01
/estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases...... and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties...
International Nuclear Information System (INIS)
Greacen, E.L.; Correll, R.L.; Cunningham, R.B.; Johns, G.G.; Nicolls, K.D.
1981-01-01
Procedures common to different methods of calibration of neutron moisture meters are outlined and laboratory and field calibration methods compared. Gross errors which arise from faulty calibration techniques are described. The count rate can be affected by the dry bulk density of the soil, the volumetric content of constitutional hydrogen and other chemical components of the soil and soil solution. Calibration is further complicated by the fact that the neutron meter responds more strongly to the soil properties close to the detector and source. The differences in slope of calibration curves for different soils can be as much as 40%
Calibration of discrete element model parameters: soybeans
Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal
2018-05-01
Discrete element method (DEM) simulations are broadly used to gain insight into the flow characteristics of granular materials in complex particulate systems. DEM input parameters for a model are the critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine DEM input parameters for the Hertz-Mindlin model using soybeans as a granular material. To achieve this aim, a widely accepted calibration approach was used with a standard box-type apparatus. Further, qualitative and quantitative findings such as particle profile, height of kernels retained against the acrylic wall, and angle of repose from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes the following: (a) material properties: particle geometric mean diameter (6.24 mm), spherical shape, particle density (1220 kg m^{-3}); and (b) interaction parameters, particle-particle: coefficient of restitution (0.17), coefficient of static friction (0.26), coefficient of rolling friction (0.08); and particle-wall: coefficient of restitution (0.35), coefficient of static friction (0.30), coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.
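The calibrated parameter set reported above can be collected into a single structure for handoff to a DEM pre-processor; the layout and the sanity-check helper below are illustrative assumptions, not part of any particular DEM package's API.

```python
# Calibrated Hertz-Mindlin inputs for soybeans, as reported in the abstract.
SOYBEAN_DEM_PARAMS = {
    "material": {
        "geometric_mean_diameter_mm": 6.24,
        "shape": "spherical",
        "particle_density_kg_m3": 1220,
    },
    "particle_particle": {
        "restitution": 0.17,
        "static_friction": 0.26,
        "rolling_friction": 0.08,
    },
    "particle_wall": {
        "restitution": 0.35,
        "static_friction": 0.30,
        "rolling_friction": 0.08,
    },
}

def check_params(params):
    """Basic physical sanity checks before feeding the set to a simulation:
    restitution must lie in [0, 1]; friction coefficients must be nonnegative."""
    for group in ("particle_particle", "particle_wall"):
        g = params[group]
        assert 0.0 <= g["restitution"] <= 1.0, (group, "restitution")
        assert g["static_friction"] >= 0.0, (group, "static_friction")
        assert g["rolling_friction"] >= 0.0, (group, "rolling_friction")
    return True
```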
Multi-scenario modelling of uncertainty in stochastic chemical systems
International Nuclear Information System (INIS)
Evans, R. David; Ricardez-Sandoval, Luis A.
2014-01-01
Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small-scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems as they are stochastic in nature and require a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state comprised of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two-gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution and the system under investigation. -- Highlights: •A method to model uncertainty in stochastic systems was developed. •The method is based on the Chemical Master Equation. •Uncertainty in an isomerization reaction and a gene regulation network was modelled. •Effects were significant and dependent on the uncertain input and reaction system. •The model was computationally more efficient than kinetic Monte Carlo.
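The composite-state idea, averaging stochastic-simulation outcomes over draws from an uncertain parameter distribution, can be sketched for a simple A ⇌ B isomerization using Gillespie's SSA; the rate values and the uniform uncertainty distribution below are assumed for illustration, not taken from the paper.

```python
import random

def ssa_isomerization(k1, k2, a0, b0, t_end, rng):
    """Gillespie SSA for A <-> B; returns the count of A at time t_end."""
    a, b, t = a0, b0, 0.0
    while True:
        r1, r2 = k1 * a, k2 * b        # propensities of A->B and B->A
        r_tot = r1 + r2
        if r_tot == 0.0:
            return a
        t += rng.expovariate(r_tot)    # time to next reaction
        if t > t_end:
            return a
        if rng.random() * r_tot < r1:
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1

def composite_mean(n_samples=50, seed=7):
    """Average A(t_end) over draws of an uncertain forward rate k1,
    approximating the 'composite state' of the uncertain solution."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        k1 = rng.uniform(0.5, 1.5)     # assumed uncertainty distribution on k1
        total += ssa_isomerization(k1, 1.0, 100, 0, 5.0, rng)
    return total / n_samples
```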
International Nuclear Information System (INIS)
Laurens, J.M.
1985-01-01
A computer code was written to model food chains in order to estimate the internal and external doses, for stochastic and non-stochastic effects, on humans (adults and infants). Results are given for 67 radionuclides, for unit concentration in water (1 Bq/L) and in atmosphere (1 Bq/m³).
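For the internal (ingestion) pathway, such dose-per-unit-concentration results reduce to multiplying total intake by a nuclide-specific dose coefficient; a minimal sketch follows, where the coefficient value used in the test is illustrative and not taken from the code described above.

```python
def ingestion_dose_sv(concentration_bq_per_l, intake_l_per_day, days,
                      dose_coeff_sv_per_bq):
    """Committed dose (Sv) from drinking water at a given activity
    concentration: intake (Bq) times a nuclide-specific dose coefficient."""
    intake_bq = concentration_bq_per_l * intake_l_per_day * days
    return intake_bq * dose_coeff_sv_per_bq
```

Scaling is linear in concentration, which is why tabulating results for a unit concentration (1 Bq/L) suffices.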
Weisheimer, Antje; Corti, Susanna; Palmer, Tim; Vitart, Frederic
2014-01-01
The finite resolution of general circulation models of the coupled atmosphere–ocean system and the effects of sub-grid-scale variability present a major source of uncertainty in model simulations on all time scales. The European Centre for Medium-Range Weather Forecasts has been at the forefront of developing new approaches to account for these uncertainties. In particular, the stochastically perturbed physical tendency scheme and the stochastically perturbed backscatter algorithm for the atmosphere are now used routinely for global numerical weather prediction. The European Centre also performs long-range predictions of the coupled atmosphere–ocean climate system in operational forecast mode, and the latest seasonal forecasting system—System 4—has the stochastically perturbed tendency and backscatter schemes implemented in a similar way to that for the medium-range weather forecasts. Here, we present results of the impact of these schemes in System 4 by contrasting the operational performance on seasonal time scales during the retrospective forecast period 1981–2010 with comparable simulations that do not account for the representation of model uncertainty. We find that the stochastic tendency perturbation schemes helped to reduce excessively strong convective activity especially over the Maritime Continent and the tropical Western Pacific, leading to reduced biases of the outgoing longwave radiation (OLR), cloud cover, precipitation and near-surface winds. Positive impact was also found for the statistics of the Madden–Julian oscillation (MJO), showing an increase in the frequencies and amplitudes of MJO events. Further, the errors of El Niño southern oscillation forecasts become smaller, whereas increases in ensemble spread lead to a better calibrated system if the stochastic tendency is activated. The backscatter scheme has overall neutral impact. Finally, evidence for noise-activated regime transitions has been found in a cluster analysis of mid
DEFF Research Database (Denmark)
Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode
2009-01-01
are often partly ignored in PK/PD modelling although violating the hypothesis for many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE1 approximation......The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model...... development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornoe, R.V. Overgaard, H. Agerso, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8
Seepage Calibration Model and Seepage Testing Data
Energy Technology Data Exchange (ETDEWEB)
S. Finsterle
2004-09-02
The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross
Seepage Calibration Model and Seepage Testing Data
International Nuclear Information System (INIS)
Finsterle, S.
2004-01-01
The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross-Drift to obtain the permeability structure for the seepage model
Stochastic lattice model of synaptic membrane protein domains.
Li, Yiwei; Kahraman, Osman; Haselwandter, Christoph A
2017-05-01
Neurotransmitter receptor molecules, concentrated in synaptic membrane domains along with scaffolds and other kinds of proteins, are crucial for signal transmission across chemical synapses. In common with other membrane protein domains, synaptic domains are characterized by low protein copy numbers and protein crowding, with rapid stochastic turnover of individual molecules. We study here in detail a stochastic lattice model of the receptor-scaffold reaction-diffusion dynamics at synaptic domains that was found previously to capture, at the mean-field level, the self-assembly, stability, and characteristic size of synaptic domains observed in experiments. We show that our stochastic lattice model yields quantitative agreement with mean-field models of nonlinear diffusion in crowded membranes. Through a combination of analytic and numerical solutions of the master equation governing the reaction dynamics at synaptic domains, together with kinetic Monte Carlo simulations, we find substantial discrepancies between mean-field and stochastic models for the reaction dynamics at synaptic domains. Based on the reaction and diffusion properties of synaptic receptors and scaffolds suggested by previous experiments and mean-field calculations, we show that the stochastic reaction-diffusion dynamics of synaptic receptors and scaffolds provide a simple physical mechanism for collective fluctuations in synaptic domains, the molecular turnover observed at synaptic domains, key features of the observed single-molecule trajectories, and spatial heterogeneity in the effective rates at which receptors and scaffolds are recycled at the cell membrane. Our work sheds light on the physical mechanisms and principles linking the collective properties of membrane protein domains to the stochastic dynamics that rule their molecular components.
Stochastic Modeling of Radioactive Material Releases
Energy Technology Data Exchange (ETDEWEB)
Andrus, Jason [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pope, Chad [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-09-01
Nonreactor nuclear facilities operated under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine if potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines or exceed the guidelines for members of the public or workers, merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable and simple to use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA was developed using the MATLAB coding framework. The software application has a graphical user input. SODA can be installed on both Windows and Mac computers and does not require MATLAB to function. SODA provides improved risk understanding leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection. It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC, rather it is viewed as an easy to use supplemental tool to help improve risk understanding and support better informed decisions. The work was
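SODA's sampling approach can be sketched with a five-factor source-term style dose calculation in which each uncertain input is drawn from a user-chosen distribution (or held at a point value when unknown); every distribution and coefficient below is an illustrative assumption, not SODA's actual input set.

```python
import math
import random
import statistics

def sample_dose(rng):
    """One Monte Carlo draw of an inhalation dose. All distributions and
    coefficient values here are hypothetical placeholders."""
    mar = rng.uniform(8.0, 12.0)                     # material at risk, g
    dr = 1.0                                         # damage ratio (point value)
    arf = rng.triangular(1e-4, 1e-2, 1e-3)           # airborne release fraction
    rf = rng.uniform(0.5, 1.0)                       # respirable fraction
    lpf = 1.0                                        # leak path factor (unmitigated)
    chi_q = rng.lognormvariate(math.log(1e-4), 0.5)  # dispersion factor, s/m^3
    br = 3.33e-4                                     # breathing rate, m^3/s
    dcf = 1.0e-2                                     # dose per gram inhaled, Sv/g
    return mar * dr * arf * rf * lpf * chi_q * br * dcf

def dose_distribution(n=2000, seed=42):
    """Sample the full dose distribution rather than a point estimate."""
    rng = random.Random(seed)
    return [sample_dose(rng) for _ in range(n)]
```

Summaries such as the mean or an upper percentile of `dose_distribution()` then support the kind of risk-informed comparison against evaluation guidelines described above.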
Threshold Dynamics of a Stochastic Chemostat Model with Two Nutrients and One Microorganism
Directory of Open Access Journals (Sweden)
Jian Zhang
2017-01-01
A new stochastic chemostat model with two substitutable nutrients and one microorganism is proposed and investigated. Firstly, for the corresponding deterministic model, the threshold for extinction and permanence of the microorganism is obtained by analyzing the stability of the equilibria. Then, for the stochastic model, the threshold of the stochastic chemostat for extinction and permanence of the microorganism is explored. The difference between the thresholds of the deterministic and stochastic models shows that a large stochastic disturbance can affect the persistence of the microorganism and is harmful to its cultivation. To illustrate this phenomenon, we give computer simulations with different intensities of stochastic noise disturbance.
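A chemostat of this type can be explored numerically with an Euler-Maruyama scheme; the Monod uptake form, multiplicative noise on the biomass, and all parameter values below are illustrative assumptions rather than the paper's equations.

```python
import math
import random

def simulate_chemostat(d=0.3, s_in=(1.0, 1.0), mu_max=(0.8, 0.6),
                       k=(0.5, 0.5), sigma=0.1, dt=0.01, n=5000, seed=3):
    """Euler-Maruyama sketch: two substitutable nutrients s1, s2 feed one
    biomass x; dilution at rate d, multiplicative noise of intensity sigma."""
    rng = random.Random(seed)
    s1, s2, x = s_in[0], s_in[1], 0.1
    traj = []
    for _ in range(n):
        g1 = mu_max[0] * s1 / (k[0] + s1)      # Monod uptake of nutrient 1
        g2 = mu_max[1] * s2 / (k[1] + s2)      # Monod uptake of nutrient 2
        dw = rng.gauss(0.0, math.sqrt(dt))     # Brownian increment
        s1 += (d * (s_in[0] - s1) - g1 * x) * dt
        s2 += (d * (s_in[1] - s2) - g2 * x) * dt
        x += ((g1 + g2) * x - d * x) * dt + sigma * x * dw
        # clip to keep the state physically nonnegative
        s1, s2, x = max(s1, 0.0), max(s2, 0.0), max(x, 0.0)
        traj.append((s1, s2, x))
    return traj
```

Raising `sigma` in such a sketch is the numerical analogue of the "large stochastic disturbance" whose effect on persistence the paper analyzes.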
Response spectrum analysis of a stochastic seismic model
International Nuclear Information System (INIS)
Kimura, Koji; Sakata, Masaru; Takemoto, Shinichiro.
1990-01-01
The stochastic response spectrum approach is presented for predicting the dynamic behavior of structures under earthquake excitation expressed by a random process, one of whose sample functions can be regarded as a recorded strong-motion earthquake accelerogram. The approach consists of modeling recorded ground motion by a random process and root-mean-square (rms) response analysis of a single-degree-of-freedom system using the moment equations method. The stochastic response spectrum is obtained as a plot of the maximum rms response versus the natural period of the system and is compared with the conventional response spectrum. (author)
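For the special case of stationary white-noise excitation, the rms displacement of a lightly damped single-degree-of-freedom oscillator has the classical closed form σ² = πS₀/(2ζωₙ³), which yields a simple spectrum sketch; the PSD level and damping ratio below are assumed values, and the full moment-equations treatment of the paper is not reproduced.

```python
import math

def rms_response(period_s, damping_ratio=0.05, s0=0.01):
    """Stationary rms displacement of an SDOF oscillator under white-noise
    base acceleration with two-sided PSD s0: sigma^2 = pi*s0 / (2*zeta*wn^3)."""
    wn = 2.0 * math.pi / period_s            # natural circular frequency
    return math.sqrt(math.pi * s0 / (2.0 * damping_ratio * wn ** 3))

def stochastic_spectrum(periods, **kw):
    """Spectrum as (natural period, rms response) pairs."""
    return [(t, rms_response(t, **kw)) for t in periods]
```

As expected from the ωₙ^(-3/2) dependence, the rms response grows monotonically with natural period for fixed damping.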
Stochastic geometry, spatial statistics and random fields models and algorithms
2015-01-01
Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.
The threshold of a stochastic SIQS epidemic model
Zhang, Xiao-Bing; Huo, Hai-Feng; Xiang, Hong; Shi, Qihong; Li, Dungang
2017-09-01
In this paper, we present the threshold of a stochastic SIQS epidemic model which determines the extinction and persistence of the disease. Furthermore, we find that noise can suppress the disease outbreak. Numerical simulations are also carried out to confirm the analytical results.
Stochasticity thresholds in the Fermi-Pasta-Ulam model
International Nuclear Information System (INIS)
Callegari, B.; Galgani, L.; Milan Univ.
1979-01-01
The authors consider the celebrated model of Fermi, Pasta and Ulam and give a numerical estimate for its thresholds of stochasticity, thus determining a critical energy as a function of the frequency of the corresponding oscillators. The results turn out to be qualitatively similar to those already obtained for a chain of particles with nearest-neighbour Lennard-Jones interaction potential. (author)
A stochastic large deformation model for computational anatomy
DEFF Research Database (Denmark)
Arnaudon, Alexis; Holm, Darryl D.; Pai, Akshay Sadananda Uppinakudru
2017-01-01
In the study of shapes of human organs using computational anatomy, variations are found to arise from inter-subject anatomical differences, disease-specific effects, and measurement noise. This paper introduces a stochastic model for incorporating random variations into the Large Deformation...
Stochasticity thresholds in the Fermi-Pasta-Ulam model
Energy Technology Data Exchange (ETDEWEB)
Callegari, B [Ferrara Univ. (Italy). Ist. di Matematica]; Carotta, M C [Ferrara Univ. (Italy). Ist. di Fisica]; Ferrario, C [Ferrara Univ. (Italy). Ist. di Fisica]; Lo Vecchio, G [Ferrara Univ. (Italy). Ist. di Fisica; Gruppo Nazionale di Struttura della Materia, Ferrara (Italy)]; Galgani, L [Milan Univ. (Italy). Ist. di Fisica; Milan Univ. (Italy). Ist. di Matematica]
1979-12-11
The authors consider the celebrated model of Fermi, Pasta and Ulam and give a numerical estimate for its thresholds of stochasticity, thus determining a critical energy as a function of the frequency of the corresponding oscillators. The results turn out to be qualitatively similar to those already obtained for a chain of particles with nearest-neighbour Lennard-Jones interaction potential.
Stochastic Online Learning in Dynamic Networks under Unknown Models
2016-08-02
The key is to develop online learning strategies at each individual node. Specifically, through local information exchange with its neighbors, each... infinitely repeated game with incomplete information and developed a dynamic pricing strategy referred to as Competitive and Cooperative Demand Learning... This research aims to develop fundamental theories and practical algorithms for
Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models
S. Peiris (Shelton); M. Asai (Manabu); M.J. McAleer (Michael)
2016-01-01
In recent years fractionally differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. This paper considers a class of models generated by Gegenbauer polynomials, incorporating the long memory in stochastic volatility
CSL model checking of deterministic and stochastic Petri nets
Martinez Verdugo, J.M.; Haverkort, Boudewijn R.H.M.; German, R.; Heindl, A.
2006-01-01
Deterministic and Stochastic Petri Nets (DSPNs) are a widely used high-level formalism for modeling discrete-event systems where events may occur either without consuming time, after a deterministic time, or after an exponentially distributed time. The underlying process defined by DSPNs, under
Model tracking dual stochastic controller design under irregular internal noises
International Nuclear Information System (INIS)
Lee, Jong Bok; Heo, Hoon; Cho, Yun Hyun; Ji, Tae Young
2006-01-01
Although many methods for the control of irregular external noise have been introduced and implemented, it is still necessary to design controllers that exclude various noises more effectively and efficiently. Accumulation of errors due to model tracking, internal noises (thermal noise, shot noise and 1/f noise) that come from elements such as resistors, diodes and transistors in the circuit system, and numerical errors due to digital processing often destabilize the system and reduce its performance. A new stochastic controller is adopted to remove those noises while operating alongside a conventional controller. A design method for a model tracking dual controller is proposed to improve the stability of the system while removing external and internal noises. The study introduces the design process of the model tracking dual stochastic controller, which improves system performance and guarantees robustness under irregular internal noises that can be created internally. The model tracking dual stochastic controller utilizing the F-P-K stochastic control technique developed earlier is implemented to reveal its performance via simulation
Hydrological modeling using a multi-site stochastic weather generator
Weather data is usually required at several locations over a large watershed, especially when using distributed models for hydrological simulations. In many applications, spatially correlated weather data can be provided by a multi-site stochastic weather generator which considers the spatial correl...
Optimal Tax Reduction by Depreciation : A Stochastic Model
Berg, M.; De Waegenaere, A.M.B.; Wielhouwer, J.L.
1996-01-01
This paper focuses on the choice of a depreciation method, when trying to minimize the expected value of the present value of future tax payments. In a quite general model that allows for stochastic future cash-flows and a tax structure with tax brackets, we determine the optimal choice between the
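The underlying trade-off can be illustrated deterministically: with a flat tax rate and positive discounting, front-loaded depreciation lowers the present value of tax payments. The schedules and figures below are a simplified sketch, not the paper's stochastic model with tax brackets.

```python
def pv_tax(depreciation, income=100.0, tax_rate=0.3, r=0.08):
    """Present value of tax payments for a given depreciation schedule,
    assuming a constant pre-depreciation income and a flat tax rate."""
    return sum(tax_rate * (income - d) / (1.0 + r) ** (t + 1)
               for t, d in enumerate(depreciation))

def straight_line(cost, n):
    """Equal depreciation in each of n periods."""
    return [cost / n] * n

def sum_of_years_digits(cost, n):
    """Accelerated (front-loaded) depreciation over n periods."""
    s = n * (n + 1) / 2
    return [cost * (n - t) / s for t in range(n)]
```

Both schedules write off the same total cost, but the accelerated one shifts deductions earlier and so yields a strictly lower present value of taxes whenever r > 0.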
A cavitation model based on Eulerian stochastic fields
Magagnato, F.; Dumond, J.
2013-12-01
Non-linear phenomena can often be described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and in particular to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. Firstly, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.
Stochastic models in risk theory and management accounting
Brekelmans, R.C.M.
2000-01-01
This thesis deals with stochastic models in two fields: risk theory and management accounting. Firstly, two extensions of the classical risk process are analyzed. A method is developed that computes bounds of the probability of ruin for the classical risk process extended with a constant interest
A stochastic model for forecast consumption in master scheduling
Weeda, P.J.; Weeda, P.J.
1994-01-01
This paper describes a stochastic model for the reduction of the initial forecast in the Master Schedule (MS) of an MRP system during progress of time by the acceptance of customer orders. Results are given for the expectation and variance of the number of yet unknown deliveries as a function of
Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach
Maxwell, R.M.; Welty, C.; Harvey, R.W.
2007-01-01
Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of hydraulic conductivity (ln K) field to achieve best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model and data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured
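The coupling to colloid filtration theory typically enters through a first-order attachment rate of the form k_att = 3(1 − θ)αη₀v / (2d_c); a small sketch with assumed parameter values follows (the collision efficiency α and single collector efficiency η₀ would come from a correlation such as Tufenkji-Elimelech or Rajagopalan-Tien, which is not reimplemented here).

```python
def attachment_rate(porosity, grain_diameter_m, alpha, eta0, velocity_m_s):
    """First-order attachment rate (1/s) from colloid filtration theory:
    k_att = 3*(1 - porosity) / (2*d_c) * alpha * eta0 * v,
    where alpha is the collision (sticking) efficiency and eta0 the
    single collector contact efficiency."""
    return (1.5 * (1.0 - porosity) / grain_diameter_m
            * alpha * eta0 * velocity_m_s)
```

Because k_att depends on seepage velocity and on grain properties correlated with K, each conductivity realization yields its own local attachment rate, which is the mechanism the stochastic model exploits.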
Haberlandt, U.; Radtke, I.
2013-08-01
Derived flood frequency analysis allows design floods to be estimated with hydrological modelling for poorly observed basins, considering change and taking into account flood protection measures. There are several possible choices regarding precipitation input, discharge output and, consequently, the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets. Event-based and continuous observed hourly rainfall data, as well as disaggregated daily rainfall and stochastically generated hourly rainfall data, are used as input for the model. As output, short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the obtained model parameter sets for continuous simulation of discharge in an independent validation period and by comparing the model-derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in Northern Germany with the hydrological model HEC-HMS. The results show that: (i) the same type of precipitation input data should be used for calibration and application of the hydrological model, (ii) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series of moderate length, but not vice versa, and (iii) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests calibrating a hydrological model directly on probability distributions of observed peak flows, using stochastic rainfall as input, if its purpose is application for derived flood frequency analysis.
Stochastic population oscillations in spatial predator-prey models
International Nuclear Information System (INIS)
Taeuber, Uwe C
2011-01-01
It is well-established that including spatial structure and stochastic noise in models for predator-prey interactions invalidates the classical deterministic Lotka-Volterra picture of neutral population cycles. In contrast, stochastic models yield long-lived, but ultimately decaying erratic population oscillations, which can be understood through a resonant amplification mechanism for density fluctuations. In Monte Carlo simulations of spatial stochastic predator-prey systems, one observes striking complex spatio-temporal structures. These spreading activity fronts induce persistent correlations between predators and prey. In the presence of local particle density restrictions (finite prey carrying capacity), there exists an extinction threshold for the predator population. The accompanying continuous non-equilibrium phase transition is governed by the directed-percolation universality class. We employ field-theoretic methods based on the Doi-Peliti representation of the master equation for stochastic particle interaction models to (i) map the ensuing action in the vicinity of the absorbing state phase transition to Reggeon field theory, and (ii) to quantitatively address fluctuation-induced renormalizations of the population oscillation frequency, damping, and diffusion coefficients in the species coexistence phase.
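An individual-based (non-spatial) version of such a stochastic predator-prey model can be simulated exactly with Gillespie's algorithm; the three reactions and the rates below are the textbook Lotka-Volterra set, chosen for illustration, not the lattice model with carrying capacity analyzed in the abstract.

```python
import random

def gillespie_lv(prey=100, pred=100, b=1.0, p=0.01, d=1.0,
                 t_end=5.0, max_steps=200_000, seed=11):
    """Exact stochastic simulation of Lotka-Volterra reactions:
    prey birth (rate b*prey), predation (p*prey*pred, converts one prey
    into one predator), predator death (d*pred)."""
    rng = random.Random(seed)
    t = 0.0
    for _ in range(max_steps):
        if t >= t_end or prey == 0 or pred == 0:
            break                      # absorbing extinction states
        r_birth = b * prey
        r_pred = p * prey * pred
        r_death = d * pred
        r_tot = r_birth + r_pred + r_death
        t += rng.expovariate(r_tot)    # exponential waiting time
        u = rng.random() * r_tot
        if u < r_birth:
            prey += 1
        elif u < r_birth + r_pred:
            prey -= 1
            pred += 1
        else:
            pred -= 1
    return prey, pred
```

In this discrete setting the oscillations are erratic and can terminate in extinction, in contrast to the neutral cycles of the deterministic Lotka-Volterra equations, exactly the qualitative point made above.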
Stochastic volatility and stochastic leverage
DEFF Research Database (Denmark)
Veraart, Almut; Veraart, Luitgard A. M.
This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic...... treatment of stochastic leverage and propose to model the stochastic leverage effect explicitly, e.g. by means of a linear transformation of a Jacobi process. Such models are both analytically tractable and allow for a direct economic interpretation. In particular, we propose two new stochastic volatility...... models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk neutral world by focusing on implied volatilities generated by option prices derived from our new...
Stochastic and simulation models of maritime intercept operations capabilities
Sato, Hiroyuki
2005-01-01
The research formulates and exercises stochastic and simulation models to assess the Maritime Intercept Operations (MIO) capabilities. The models focus on the surveillance operations of the Maritime Patrol Aircraft (MPA). The analysis using the models estimates the probability with which a terrorist vessel (Red) is detected, correctly classified, and escorted for intensive investigation and neutralization before it leaves an area of interest (AOI). The difficulty of obtaining adequate int...
Extinction threshold of a population in spatial and stochastic model
Soroka, Yevheniia; Rublyov, Bogdan
2016-01-01
In this study, spatial stochastic and logistic model (SSLM) describing dynamics of a population of a certain species was analysed. The behaviour of the extinction threshold as a function of model parameters was studied. More specifically, we studied how the critical values for the model parameters that separate the cases of extinction and persistence depend on the spatial scales of the competition and dispersal kernels. We compared the simulations and analytical results to examine if and how ...
Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik
2009-06-01
The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling, although they violate the assumptions of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood, which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
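A minimal sketch of the ODE-to-SDE extension discussed here (not the R package's API): Euler-Maruyama simulation of one-compartment drug elimination with a Wiener "system noise" term on the log-concentration, which keeps every path positive. All parameter values are illustrative.

```python
import math, random

def simulate_paths(c0=10.0, k=0.5, sigma=0.1, dt=0.01, t_end=2.0,
                   n_paths=500, seed=42):
    """Euler-Maruyama on the log-concentration: d(log C) = -k dt + sigma dW.
    The Wiener term is the system noise that a pure ODE model lacks;
    setting sigma = 0 recovers the deterministic elimination C0*exp(-k*t)."""
    rng = random.Random(seed)
    n_steps = int(t_end / dt)
    finals = []
    for _ in range(n_paths):
        x = math.log(c0)
        for _ in range(n_steps):
            x += -k * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        finals.append(math.exp(x))
    return finals

finals = simulate_paths()
mean_final = sum(finals) / len(finals)   # near c0*exp(-k*t_end) for small sigma
```

In the mixed-effects setting of the paper, k and sigma would additionally vary across individuals and be estimated rather than fixed.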
Davis, Brynmor; Kim, Edward; Piepmeier, Jeffrey; Hildebrand, Peter H. (Technical Monitor)
2001-01-01
Many new Earth remote-sensing instruments are embracing both the advantages and added complexity that result from interferometric or fully polarimetric operation. To increase instrument understanding and functionality a model of the signals these instruments measure is presented. A stochastic model is used as it recognizes the non-deterministic nature of any real-world measurements while also providing a tractable mathematical framework. A stationary, Gaussian-distributed model structure is proposed. Temporal and spectral correlation measures provide a statistical description of the physical properties of coherence and polarization-state. From this relationship the model is mathematically defined. The model is shown to be unique for any set of physical parameters. A method of realizing the model (necessary for applications such as synthetic calibration-signal generation) is given and computer simulation results are presented. The signals are constructed using the output of a multi-input multi-output linear filter system, driven with white noise.
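A single-channel sketch of the realization method described above, filtering white Gaussian noise to impose a prescribed (here exponential) temporal correlation. The full model is multi-input multi-output, so this is only the simplest instance, and the filter choice is an assumption for illustration:

```python
import random

def correlated_gaussian(n, rho, seed=7):
    """Pass white Gaussian noise through a first-order recursive filter:
    x[t] = rho * x[t-1] + sqrt(1 - rho**2) * w[t].
    The scaling keeps unit variance, and the autocorrelation at lag L
    is rho**L, i.e. exponential decorrelation."""
    rng = random.Random(seed)
    scale = (1.0 - rho * rho) ** 0.5
    x = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(rho * x[-1] + scale * rng.gauss(0.0, 1.0))
    return x

x = correlated_gaussian(50000, 0.8)
mean = sum(x) / len(x)
lag1 = sum(a * b for a, b in zip(x, x[1:])) / (len(x) - 1)  # ~ 0.8
```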
Joint Pricing of VIX and SPX Options with Stochastic Volatility and Jump models
DEFF Research Database (Denmark)
Kokholm, Thomas; Stisen, Martin
2015-01-01
to existing literature, we derive numerically simpler VIX option and futures pricing formulas in the case of the SVJ model. Moreover, the paper is the first to study the pricing performance of three widely used models to SPX options and VIX derivatives.......With the existence of active markets for volatility derivatives and options on the underlying instrument, the need for models that are able to price these markets consistently has increased. Although pricing formulas for VIX and vanilla options are now available for commonly employed models...... and variance (SVJJ) are jointly calibrated to market quotes on SPX and VIX options together with VIX futures. The full flexibility of having jumps in both returns and volatility added to a stochastic volatility model is essential. Moreover, we find that the SVJJ model with the Feller condition imposed...
Application of Stochastic Partial Differential Equations to Reservoir Property Modelling
Potsepaev, R.
2010-09-06
Existing algorithms of geostatistics for stochastic modelling of reservoir parameters require a mapping (the 'uvt-transform') into the parametric space and reconstruction of a stratigraphic co-ordinate system. The parametric space can be considered to represent a pre-deformed and pre-faulted depositional environment. Existing approximations of this mapping in many cases cause significant distortions to the correlation distances. In this work we propose a coordinate-free approach for modelling stochastic textures through the application of stochastic partial differential equations. By avoiding the construction of a uvt-transform and stratigraphic coordinates, one can generate realizations directly in the physical space in the presence of deformations and faults. In particular, the solution of the modified Helmholtz equation driven by Gaussian white noise is a zero-mean Gaussian stationary random field with exponential correlation function (in 3-D). This equation can be used to generate realizations in parametric space. In order to sample in physical space we introduce a stochastic elliptic PDE with tensor coefficients, where the tensor is related to correlation anisotropy and its variation in physical space.
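A hedged one-dimensional sketch of the core idea (the paper works in 3-D with tensor coefficients): solving the modified Helmholtz equation driven by white noise on a periodic grid by dividing each Fourier mode by (kappa^2 + k^2), which yields a smooth stationary Gaussian field. Grid size and kappa are arbitrary, and a naive DFT is used to keep the sketch dependency-free.

```python
import cmath, math, random

def spde_field_1d(n=256, kappa=5.0, seed=3):
    """Solve (kappa^2 - d^2/dx^2) u = white noise on a periodic unit interval
    via the DFT: each Fourier mode is divided by (kappa^2 + k^2)."""
    rng = random.Random(seed)
    w = [rng.gauss(0.0, 1.0) for _ in range(n)]
    # forward DFT of the white noise (naive O(n^2) version)
    w_hat = [sum(w[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
             for k in range(n)]
    # apply the inverse of the modified Helmholtz operator in Fourier space
    u_hat = [w_hat[k] / (kappa ** 2 + (2 * math.pi * min(k, n - k)) ** 2)
             for k in range(n)]
    # inverse DFT back to the grid; the field is real by spectral symmetry
    u = [sum(u_hat[k] * cmath.exp(2j * math.pi * k * j / n)
             for k in range(n)).real / n
         for j in range(n)]
    return u

u = spde_field_1d()
```

The damping of high wavenumbers is what produces the correlated (rather than white) texture, the 1-D analogue of the exponential correlation quoted for 3-D.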
DEFF Research Database (Denmark)
Duun-Henriksen, Anne Katrine; Schmidt, S.; Nørgaard, K.
2013-01-01
extension incorporating exercise effects on insulin and glucose dynamics. Our model is constructed as a stochastic state space model consisting of a set of stochastic differential equations (SDEs). In a stochastic state space model, the residual error is split into random measurement error...
Predicting population extinction or disease outbreaks with stochastic models
Directory of Open Access Journals (Sweden)
Linda J. S. Allen
2017-01-01
Models of exponential growth, logistic growth and epidemics are common applications in undergraduate differential equation courses. The corresponding stochastic models are not part of these courses, although when population sizes are small their behaviour is often more realistic and distinctly different from that of deterministic models. For example, the randomness associated with births and deaths may lead to population extinction even in an exponentially growing population. Some background in continuous-time Markov chains and applications to populations, epidemics and cancer are presented, with the goal of introducing this topic into the undergraduate mathematics curriculum and encouraging further investigation into problems on conservation, infectious diseases and cancer therapy. MATLAB programs for graphing sample paths of stochastic models are provided in the Appendix.
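The extinction effect described above can be checked numerically: for a linear birth-death chain starting from n0 individuals, the eventual extinction probability is (d/b)^n0 even when the birth rate b exceeds the death rate d. A simple simulation of the embedded jump chain reproduces this; the population cap that counts large populations as "survived" is an approximation introduced here.

```python
import random

def extinction_prob(b, d, n0, runs=2000, cap=100, seed=11):
    """Simulate the embedded jump chain of a linear birth-death process.
    At each event the next change is a birth with probability b/(b+d)
    (rates b*n and d*n share the factor n). A run counts as 'extinct'
    if the population hits 0 before reaching the cap."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(runs):
        n = n0
        while 0 < n < cap:
            if rng.random() < b / (b + d):
                n += 1
            else:
                n -= 1
        if n == 0:
            extinct += 1
    return extinct / runs

p_hat = extinction_prob(b=0.2, d=0.1, n0=1)   # theory: (d/b)**n0 = 0.5
```

The deterministic model with b > d predicts unbounded growth, yet half of the stochastic runs from a single individual die out.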
Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes
Helbing, Dirk
2010-01-01
This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...
Quantitative sociodynamics stochastic methods and models of social interaction processes
Helbing, Dirk
1995-01-01
Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...
Stochastic Differential Equation-Based Flexible Software Reliability Growth Model
Directory of Open Access Journals (Sweden)
P. K. Kapur
2009-01-01
Several software reliability growth models (SRGMs) have been developed by software developers for tracking and measuring the growth of reliability. When the software system is large and the number of faults detected during the testing phase becomes large, the change in the number of faults detected and removed through each debugging becomes small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model based on an Itô-type stochastic differential equation. We consider an SDE-based generalized Erlang model with a logistic error detection function. The model is estimated and validated on real-life data sets cited in the literature to show its flexibility. The proposed SDE-based model performs comparatively better than the existing NHPP-based models.
Development of stochastic indicator models of lithology, Yucca Mountain, Nevada
International Nuclear Information System (INIS)
Rautman, C.A.; Robey, T.H.
1994-01-01
Indicator geostatistical techniques have been used to produce a number of fully three-dimensional stochastic simulations of large-scale lithologic categories at the Yucca Mountain site. Each realization reproduces the available drill hole data used to condition the simulation. Information is propagated away from each point of observation in accordance with a mathematical model of spatial continuity inferred through soft data taken from published geologic cross sections. Variations among the simulated models collectively represent uncertainty in the lithology at unsampled locations. These stochastic models succeed in capturing many major features of the welded-nonwelded lithologic framework of Yucca Mountain. However, contacts between welded and nonwelded rock types for individual simulations appear more complex than suggested by field observation, and a number of probable numerical artifacts exist in these models. Many of the apparent discrepancies between the simulated models and the general geology of Yucca Mountain represent characterization uncertainty, and can be traced to the sparse site data used to condition the simulations. Several vertical stratigraphic columns have been extracted from the three-dimensional stochastic models for use in simplified total-system performance assessment exercises. Simple, manual adjustments are required to eliminate the more obvious simulation artifacts and to impose a secondary set of deterministic geologic features on the overall stratigraphic framework provided by the indicator models.
STOCHASTIC MODELING OF INFLATION IN NIGERIA
African Journals Online (AJOL)
2006-04-04
In this paper, we adopt a time series approach in modeling inflation in ... KEYWORDS: Buys-Ballot table; quadratic trend; seasonal multiplicative model ...
Stochastic population dynamic models as probability networks
Borsuk, M.E.; Lee, D.C.
2009-01-01
The dynamics of a population and its response to environmental change depend on the balance of birth, death and age-at-maturity, and there have been many attempts to mathematically model populations based on these characteristics. Historically, most of these models were deterministic, meaning that the results were strictly determined by the equations of the model and...
Stochastic programming framework for Lithuanian pension payout modelling
Directory of Open Access Journals (Sweden)
Audrius Kabašinskas
2014-12-01
The paper provides a scientific approach to the problem of selecting a pension fund by taking into account some specific characteristics of the Lithuanian Republic (LR) pension accumulation system. A decision-making model is presented that can be used to plan the long-term pension accrual of LR citizens in an optimal way. This model focuses on factors that influence the sustainability of the pension system selection under macroeconomic, social and demographic uncertainty. The model is formalized as a single-stage stochastic optimization problem in which the long-term optimal strategy can be obtained based on the possible scenarios generated for a particular participant. Stochastic programming methods allow the pension fund rebalancing moment and direction of investment to be included, and take into account possible changes of personal income, society and the global financial market. The collection of methods used to generate scenario trees was found useful for solving strategic planning problems.
Stochastic resonance in a generalized Von Foerster population growth model
Energy Technology Data Exchange (ETDEWEB)
Lumi, N.; Mankin, R. [Institute of Mathematics and Natural Sciences, Tallinn University, 25 Narva Road, 10120 Tallinn (Estonia)
2014-11-12
The stochastic dynamics of a population growth model, similar to the Von Foerster model for human population, is studied. The influence of fluctuating environment on the carrying capacity is modeled as a multiplicative dichotomous noise. It is established that an interplay between nonlinearity and environmental fluctuations can cause single unidirectional discontinuous transitions of the mean population size versus the noise amplitude, i.e., an increase of noise amplitude can induce a jump from a state with a moderate number of individuals to that with a very large number, while by decreasing the noise amplitude an opposite transition cannot be effected. An analytical expression of the mean escape time for such transitions is found. Particularly, it is shown that the mean transition time exhibits a strong minimum at intermediate values of noise correlation time, i.e., the phenomenon of stochastic resonance occurs. Applications of the results in ecology are also discussed.
Stochastic modeling of central apnea events in preterm infants
International Nuclear Information System (INIS)
Clark, Matthew T; Lake, Douglas E; Randall Moorman, J; Delos, John B; Lee, Hoshik; Fairchild, Karen D; Kattwinkel, John
2016-01-01
A near-ubiquitous pathology in very low birth weight infants is neonatal apnea, breathing pauses with slowing of the heart and falling blood oxygen. Events of substantial duration occasionally occur after an infant is discharged from the neonatal intensive care unit (NICU). It is not known whether apneas result from a predictable process or from a stochastic process, but the observation that they occur in seemingly random clusters justifies the use of stochastic models. We use a hidden-Markov model to analyze the distribution of durations of apneas and the distribution of times between apneas. The model suggests the presence of four breathing states, ranging from very stable (with an average lifetime of 12 h) to very unstable (with an average lifetime of 10 s). Although the states themselves are not visible, the mathematical analysis gives estimates of the transition rates among these states. We have obtained these transition rates, and shown how they change with post-menstrual age; as expected, the residence time in the more stable breathing states increases with age. We also extrapolated the model to predict the frequency of very prolonged apnea during the first year of life. This paradigm—stochastic modeling of cardiorespiratory control in neonatal infants to estimate risk for severe clinical events—may be a first step toward personalized risk assessment for life threatening apnea events after NICU discharge. (paper)
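A simplified sketch of the dwell-time idea behind the hidden-Markov analysis: with only two states and the state sequence taken as observed (unlike the paper, where four states are hidden and transition rates must be inferred), mean residence times are recovered directly from simulated sojourn durations. The rates below are illustrative, loosely mimicking one stable and one unstable breathing state.

```python
import random

def simulate_ctmc(rates, t_end, seed=5):
    """Alternating two-state continuous-time Markov chain; rates[s] is the
    rate of leaving state s, so sojourn times in s are Exp(rates[s])."""
    rng = random.Random(seed)
    t, state, sojourns = 0.0, 0, []
    while t < t_end:
        dwell = rng.expovariate(rates[state])
        sojourns.append((state, dwell))
        t += dwell
        state = 1 - state
    return sojourns

rates = [0.1, 2.0]        # 'stable' state left slowly, 'unstable' state left fast
sojourns = simulate_ctmc(rates, 20000.0)
stable = [d for s, d in sojourns if s == 0]
unstable = [d for s, d in sojourns if s == 1]
mean_stable = sum(stable) / len(stable)          # estimates 1/rates[0] = 10
mean_unstable = sum(unstable) / len(unstable)    # estimates 1/rates[1] = 0.5
```

Tracking how such estimated rates change with post-menstrual age is the analogue of the trend reported in the abstract.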
A data driven nonlinear stochastic model for blood glucose dynamics.
Zhang, Yan; Holt, Tim A; Khovanova, Natalia
2016-03-01
The development of adequate mathematical models for blood glucose dynamics may improve early diagnosis and control of diabetes mellitus (DM). We have developed a stochastic nonlinear second order differential equation to describe the response of blood glucose concentration to food intake using continuous glucose monitoring (CGM) data. A variational Bayesian learning scheme was applied to define the number and values of the system's parameters by iterative optimisation of free energy. The model has the minimal order and number of parameters to successfully describe blood glucose dynamics in people with and without DM. The model accounts for the nonlinearity and stochasticity of the underlying glucose-insulin dynamic process. Being data-driven, it takes full advantage of available CGM data and, at the same time, reflects the intrinsic characteristics of the glucose-insulin system without detailed knowledge of the physiological mechanisms. We have shown that the dynamics of some postprandial blood glucose excursions can be described by a reduced (linear) model, previously seen in the literature. A comprehensive analysis demonstrates that deterministic system parameters belong to different ranges for diabetes and controls. Implications for clinical practice are discussed. This is the first study introducing a continuous data-driven nonlinear stochastic model capable of describing both DM and non-DM profiles. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
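A hedged sketch of a stochastic second-order response of the kind described above (not the authors' variational Bayesian scheme): Euler-Maruyama integration of a damped oscillator with additive noise, with a meal modelled crudely as an impulse on the rate of change of glucose. All parameter values and the impulse representation are assumptions for illustration.

```python
import math, random

def glucose_response(x0=5.0, impulse=4.0, omega=0.02, zeta=0.7,
                     sigma=0.02, dt=1.0, t_end=600, seed=9):
    """Euler-Maruyama for a damped second-order system with additive noise:
    x'' + 2*zeta*omega*x' + omega**2 * (x - x0) = noise,
    where x is glucose concentration and x0 the fasting baseline."""
    rng = random.Random(seed)
    x, v = x0, impulse * omega      # the meal kicks the rate of change
    path = [x]
    for _ in range(int(t_end / dt)):
        a = -2.0 * zeta * omega * v - omega ** 2 * (x - x0)
        v += a * dt + sigma * omega * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x += v * dt
        path.append(x)
    return path

path = glucose_response()   # rises to a postprandial peak, relaxes to baseline
```

With sigma set to 0 this collapses to the reduced (linear, deterministic) model mentioned in the abstract.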
Spatial stochastic regression modelling of urban land use
International Nuclear Information System (INIS)
Arshad, S H M; Jaafar, J; Abiden, M Z Z; Latif, Z A; Rasam, A R A
2014-01-01
Urbanization is very closely linked to industrialization, commercialization and overall economic growth and development. It brings innumerable benefits to the quantity and quality of the urban environment and lifestyle, but on the other hand contributes to unbounded development, urban sprawl, overcrowding and a decreasing standard of living. Regulation and observation of urban development activities is crucial. An understanding of the urban systems that promote urban growth is also essential for policy making, formulating development strategies and preparing development plans. This study aims to compare two different stochastic regression modeling techniques for spatial structure models of urban growth in the same study area. Both techniques utilize the same datasets and their results are analyzed. The work starts by producing an urban growth model using two stochastic regression modeling techniques, namely Ordinary Least Squares (OLS) and Geographically Weighted Regression (GWR). When the two techniques are compared, GWR proves to be the more significant stochastic regression model: it gives a smaller AICc (corrected Akaike Information Criterion) value and its output is more spatially explainable.
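The OLS half of the comparison, together with the AICc used for model selection, can be sketched without any libraries (GWR adds spatially varying weights and is omitted here; the data are synthetic and noiseless so the fit is exact):

```python
import math

def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved with Gaussian elimination and partial pivoting."""
    k = len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(k)]
         for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

def aicc(rss, n, k):
    """Corrected Akaike Information Criterion for a least-squares fit
    with n observations, k parameters and residual sum of squares rss."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2.0 * k * (k + 1) / (n - k - 1)

# synthetic data: y = 1 + 2*x, so OLS should recover the coefficients [1, 2]
X = [[1.0, float(i)] for i in range(10)]
y = [1.0 + 2.0 * i for i in range(10)]
beta = ols_fit(X, y)
```

In the study's comparison, the model (OLS or GWR) with the smaller AICc value is preferred.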
Performance modeling, stochastic networks, and statistical multiplexing
Mazumdar, Ravi R
2013-01-01
This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks.The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importan
Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models
Directory of Open Access Journals (Sweden)
Shelton Peiris
2017-12-01
This paper considers a flexible class of time series models generated by Gegenbauer polynomials incorporating long memory in the stochastic volatility (SV) components in order to develop the General Long Memory SV (GLMSV) model. We examine the corresponding statistical properties of this model, discuss spectral likelihood estimation and investigate the finite sample properties via Monte Carlo experiments. We provide empirical evidence by applying the GLMSV model to three exchange rate return series and conjecture that the results of out-of-sample forecasts adequately support the use of the GLMSV model in certain financial applications.
Stochastic quantization of a topological quantum mechanical model
International Nuclear Information System (INIS)
Antunes, Sergio; Krein, Gastao; Menezes, Gabriel; Svaiter, Nami Fux
2011-01-01
Stochastic quantization of complex actions has been extensively studied in the literature. In these models, a Markovian Langevin equation is used in order to study the quantization of such systems. The advantages of the Markovian stochastic quantization method have been explored and exposed, but many drawbacks have also been pointed out, such as instability of the simulations, with absence of convergence or sometimes convergence to the wrong limit. Indeed, although several alternative methods have been proposed to deal with interesting physical systems where the action is complex, these approaches do not suggest any general way of solving the particular difficulties that arise in each situation. Here, we wish to contribute to the program of stochastic quantization of theories with imaginary action by investigating the consequences of a non-Markovian stochastic quantization in a particular situation, namely a quantum mechanical topological action. We analyze the Markovian stochastic quantization for a topological quantum mechanical action which is analogous to a Maxwell-Chern-Simons action in the Weyl gauge. Afterwards we consider a Langevin equation with a memory kernel and Einstein's relations with colored noise. We show that convergence towards equilibrium is achieved in both regimes. We also sketch a simple numerical analysis to investigate the possible advantages of the non-Markovian procedure over the usual Markovian quantization. Retarded Green's functions for the diffusion problem are considered in this analysis. We show that, although the results indicate that the effect of the memory kernel, as usually expected, is to delay convergence to equilibrium, non-Markovian systems exhibit a faster decay and smoother convergence to equilibrium compared to Markovian ones.
Introduction to stochastic models in biology
DEFF Research Database (Denmark)
Ditlevsen, Susanne; Samson, Adeline
2013-01-01
This chapter is concerned with continuous time processes, which are often modeled as a system of ordinary differential equations (ODEs). These models assume that the observed dynamics are driven exclusively by internal, deterministic mechanisms. However, real biological systems will always be exp...
Time-Weighted Balanced Stochastic Model Reduction
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
2011-01-01
A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently...
Stochastic modelling in design of mechanical properties of nanometals
International Nuclear Information System (INIS)
Tengen, T.B.; Wejrzanowski, T.; Iwankiewicz, R.; Kurzydlowski, K.J.
2010-01-01
Polycrystalline nanometals are being fabricated through different processing routes and conditions. As a consequence, nanometals with the same mean grain size may have different grain size dispersions and, hence, different material properties. This has often led to conflicting reports, from both theoretical and experimental findings, about the evolution of the mechanical properties of nanomaterials. The present paper employs stochastic modelling to study the impact of microstructure evolution during grain growth on the mechanical properties of polycrystalline nanometals. A stochastic model for grain growth and a stochastic model for changes in the mechanical properties of nanomaterials are proposed. The model for the mechanical properties is tested on aluminium samples. Many salient features of the mechanical properties of the aluminium samples are revealed. The results show that different mechanisms of grain growth impart different kinds of response to the material's mechanical properties. The conventional, homologous and anomalous temperature dependences of the yield stress are also revealed to be due to different kinds of interaction between the microstructures during evolution.
Bayesian inference for hybrid discrete-continuous stochastic kinetic models
International Nuclear Information System (INIS)
Sherlock, Chris; Golightly, Andrew; Gillespie, Colin S
2014-01-01
We consider the problem of efficiently performing simulation and inference for stochastic kinetic models. Whilst it is possible to work directly with the resulting Markov jump process (MJP), computational cost can be prohibitive for networks of realistic size and complexity. In this paper, we consider an inference scheme based on a novel hybrid simulator that classifies reactions as either ‘fast’ or ‘slow’ with fast reactions evolving as a continuous Markov process whilst the remaining slow reaction occurrences are modelled through a MJP with time-dependent hazards. A linear noise approximation (LNA) of fast reaction dynamics is employed and slow reaction events are captured by exploiting the ability to solve the stochastic differential equation driving the LNA. This simulation procedure is used as a proposal mechanism inside a particle MCMC scheme, thus allowing Bayesian inference for the model parameters. We apply the scheme to a simple application and compare the output with an existing hybrid approach and also a scheme for performing inference for the underlying discrete stochastic model. (paper)
TIME-DEPENDENT STOCHASTIC ACCELERATION MODEL FOR FERMI BUBBLES
Energy Technology Data Exchange (ETDEWEB)
Sasaki, Kento; Asano, Katsuaki; Terasawa, Toshio, E-mail: kentos@icrr.u-tokyo.ac.jp, E-mail: asanok@icrr.u-tokyo.ac.jp, E-mail: terasawa@icrr.u-tokyo.ac.jp [Institute for Cosmic Ray Research, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8582 (Japan)
2015-12-01
We study stochastic acceleration models for the Fermi bubbles. Turbulence is excited just behind the shock front via Kelvin–Helmholtz, Rayleigh–Taylor, or Richtmyer–Meshkov instabilities, and plasma particles are continuously accelerated by the interaction with the turbulence. The turbulence gradually decays as it goes away from the shock fronts. Adopting a phenomenological model for the stochastic acceleration, we explicitly solve the temporal evolution of the particle energy distribution in the turbulence. Our results show that the spatial distribution of high-energy particles is different from that for a steady solution. We also show that the contribution of electrons that escaped from the acceleration regions significantly softens the photon spectrum. The photon spectrum and surface brightness profile are reproduced by our models. If the escape efficiency is very high, the radio flux from the escaped low-energy electrons can be comparable to that of the WMAP haze. We also demonstrate hadronic models with stochastic acceleration, but they are unlikely from the viewpoint of the energy budget.
Stochastic models of solute transport in highly heterogeneous geologic media
Energy Technology Data Exchange (ETDEWEB)
Semenov, V.N.; Korotkin, I.A.; Pruess, K.; Goloviznin, V.M.; Sorokovikova, O.S.
2009-09-15
A stochastic model of anomalous diffusion was developed in which transport occurs by random motion of Brownian particles, described by distribution functions of random displacements with heavy (power-law) tails. One variant of an effective algorithm for random function generation with a power-law asymptotic and arbitrary factor of asymmetry is proposed that is based on the Gnedenko-Lévy limit theorem and makes it possible to reproduce all known Lévy α-stable fractal processes. A two-dimensional stochastic random walk algorithm has been developed that approximates anomalous diffusion with streamline-dependent and space-dependent parameters. The motivation for introducing such a type of dispersion model is the observed fact that tracers in natural aquifers spread at different super-Fickian rates in different directions. For this and other important cases, stochastic random walk models are the only known way to solve the so-called multiscaling fractional order diffusion equation with space-dependent parameters. Some comparisons of model results and field experiments are presented.
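The heavy-tailed displacements the abstract describes can be sampled with the Chambers-Mallows-Stuck method; the sketch below covers only the symmetric (zero-skew) case, whereas the paper's algorithm also handles an arbitrary asymmetry factor.

```python
import math
import random

def symmetric_stable(alpha, rng):
    """Chambers-Mallows-Stuck sampler for a symmetric alpha-stable variate
    (skew beta = 0; the asymmetric case adds extra angle/shift terms)."""
    V = rng.uniform(-math.pi / 2, math.pi / 2)
    W = rng.expovariate(1.0)
    return (math.sin(alpha * V) / math.cos(V) ** (1.0 / alpha)
            * (math.cos(V - alpha * V) / W) ** ((1.0 - alpha) / alpha))

def levy_flight(alpha, n, seed=2):
    """Random walk whose step lengths have power-law tails for alpha < 2,
    the basic mechanism behind super-Fickian (anomalous) spreading."""
    rng = random.Random(seed)
    pos, path = 0.0, [0.0]
    for _ in range(n):
        pos += symmetric_stable(alpha, rng)
        path.append(pos)
    return path

walk = levy_flight(alpha=1.4, n=5000)
```

For alpha = 2 the sampler reduces to a Gaussian step and ordinary Fickian diffusion is recovered; smaller alpha produces the occasional very large jumps characteristic of Lévy flights.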
A Stochastic Operational Planning Model for Smart Power Systems
Directory of Open Access Journals (Sweden)
Sh. Jadid
2014-12-01
Full Text Available Smart grids are the result of utilizing novel technologies, such as distributed energy resources and communication technologies, in the power system to compensate for some of its defects. Various power resources provide benefits for the operation domain; however, the power system operator should use a powerful methodology to manage them. Renewable resources and load add uncertainty to the problem, so the independent system operator should use a stochastic method to manage them. A stochastic unit commitment is presented in this paper to schedule various power resources such as distributed generation units, conventional thermal generation units, wind and PV farms, and demand response resources. Demand response resources, interruptible loads, distributed generation units, and conventional thermal generation units are used to provide the reserve required to compensate for the stochastic nature of various resources and loads. In the presented model, resources connected to the distribution network can participate in the wholesale market through aggregators. Moreover, a novel three-program model which can be used by aggregators is presented in this article. Loads and distributed generation can contract with aggregators through these programs. A three-bus test system and the IEEE RTS are used to illustrate the usefulness of the presented model. The results show that the ISO can manage the system effectively by using this model.
Stochastic population and epidemic models persistence and extinction
Allen, Linda J S
2015-01-01
This monograph provides a summary of the basic theory of branching processes for single-type and multi-type processes. Classic examples of population and epidemic models illustrate the probability of population or epidemic extinction obtained from the theory of branching processes. The first chapter develops the branching process theory, while in the second chapter two applications to population and epidemic processes of single-type branching process theory are explored. The last two chapters present multi-type branching process applications to epidemic models, and then continuous-time and continuous-state branching processes with applications. In addition, several MATLAB programs for simulating stochastic sample paths are provided in an Appendix. These notes originated as part of a lecture series on Stochastics in Biological Systems at the Mathematical Biosciences Institute in Ohio, USA. Professor Linda Allen is a Paul Whitfield Horn Professor of Mathematics in the Department of Mathematics and Statistics ...
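The central quantity of branching-process theory mentioned here, the probability of population or epidemic extinction, is the smallest fixed point of the offspring probability generating function; a minimal sketch with an assumed three-point offspring distribution (not an example from the monograph):

```python
def extinction_probability(pgf, tol=1e-12, max_iter=100000):
    """Smallest root of q = f(q) in [0, 1], found by fixed-point iteration
    from q = 0 (the standard branching-process extinction result)."""
    q = 0.0
    for _ in range(max_iter):
        q_next = pgf(q)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

# Assumed offspring distribution: 0, 1 or 2 offspring with probs p0, p1, p2.
# Mean m = p1 + 2*p2; extinction is certain iff m <= 1.
p0, p1, p2 = 0.2, 0.3, 0.5            # m = 1.3 > 1: supercritical
f = lambda s: p0 + p1 * s + p2 * s * s
q = extinction_probability(f)         # here the quadratic gives q = 0.4
```

For this quadratic generating function the fixed points are s = 0.4 and s = 1, and the iteration converges to the smaller root, so a single initial individual founds a line that dies out with probability 0.4.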
Modeling and stochastic analysis of dynamic mechanisms of the perception
Pisarchik, A.; Bashkirtseva, I.; Ryashko, L.
2017-10-01
Modern studies in physiology and cognitive neuroscience consider a noise as an important constructive factor of the brain functionality. Under the adequate noise, the brain can rapidly access different ordered states, and provide decision-making by preventing deadlocks. Bistable dynamic models are often used for the study of the underlying mechanisms of the visual perception. In the present paper, we consider a bistable energy model subject to both additive and parametric noise. Using the catastrophe theory formalism and stochastic sensitivity functions technique, we analyze a response of the equilibria to noise, and study noise-induced transitions between equilibria. We demonstrate and analyse the effect of hysteresis squeezing when the intensity of noise is increased. Stochastic bifurcations connected with the suppression of oscillations by parametric noises are discussed.
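Noise-induced transitions between the equilibria of a bistable model can be illustrated with an Euler-Maruyama simulation of the canonical double-well system dx = (x - x³) dt + σ dW; the potential and all parameters below are illustrative stand-ins, not the paper's energy model.

```python
import math
import random

def simulate_bistable(sigma, t_max=2000.0, dt=0.01, seed=3):
    """Euler-Maruyama for dx = (x - x^3) dt + sigma dW, a double-well
    system with stable equilibria at x = -1 and x = +1. Counts the
    noise-induced transitions between the two wells."""
    rng = random.Random(seed)
    x = 1.0                              # start in the right-hand well
    well, switches = 1, 0
    for _ in range(int(t_max / dt)):
        x += (x - x ** 3) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if well == 1 and x < -0.5:       # crossed into the left well
            well, switches = -1, switches + 1
        elif well == -1 and x > 0.5:     # crossed back
            well, switches = 1, switches + 1
    return switches

few = simulate_bistable(sigma=0.3)       # weak noise: rare crossings
many = simulate_bistable(sigma=0.8)      # strong noise: frequent crossings
```

The half-way thresholds at ±0.5 give the counter a hysteresis band so small fluctuations near a well bottom are not counted as perceptual switches; increasing σ sharply raises the switching rate, the effect the abstract analyzes via stochastic sensitivity functions.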
Stochastic modeling of reinforced concrete structures exposed to chloride attack
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Frier, Christian
2004-01-01
For many reinforced concrete structures corrosion of reinforcement is an important problem since it can result in expensive maintenance and repair actions. Further, a significant reduction of the load-bearing capacity can occur. One mode of corrosion initiation is that the chloride content around...... concentration and reinforcement cover depth are modeled by stochastic fields. The paper contains a description of the parameters to be included in a stochastic model and a proposal for the information needed to obtain values for the parameters in order to be able to perform reliability investigations....... The distribution of the time to initiation of corrosion is estimated by simulation. As an example a bridge pier in a marine environment is considered....
Stochastic Modeling of Reinforced Concrete Structures Exposed to Chloride Attack
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Frier, Christian
2003-01-01
For many reinforced concrete structures corrosion of reinforcement is an important problem since it can result in expensive maintenance and repair actions. Further, a significant reduction of the load-bearing capacity can occur. One mode of corrosion initiation is that the chloride content around...... concentration and reinforcement cover depth are modeled by stochastic fields. The paper contains a description of the parameters to be included in a stochastic model and a proposal for the information needed to obtain values for the parameters in order to be able to perform reliability investigations....... The distribution of the time to initiation of corrosion is estimated by simulation. As an example a bridge pier in a marine environment is considered....
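The initiation mechanism described in both records can be sketched by Monte Carlo sampling of the closed-form solution of Fick's second law with a constant surface concentration; the distributions for cover depth and surface concentration below are assumed for illustration and are not the papers' stochastic fields.

```python
import math
import random
import statistics

def erfinv(y):
    """Inverse error function via the standard normal quantile:
    erf^-1(y) = Phi^-1((y + 1) / 2) / sqrt(2)."""
    return statistics.NormalDist().inv_cdf((y + 1.0) / 2.0) / math.sqrt(2.0)

def initiation_time(cover_mm, D_mm2_per_yr, c_s, c_cr):
    """Years until chloride at the rebar reaches the critical level, from
    C(x, t) = C_s * erfc(x / (2 sqrt(D t)))  (Fick's second law)."""
    z = erfinv(1.0 - c_cr / c_s)         # solves erfc(z) = c_cr / c_s
    return (cover_mm / (2.0 * z * math.sqrt(D_mm2_per_yr))) ** 2

def mc_initiation(n=20000, seed=4):
    """Monte Carlo over random cover depth and surface concentration
    (illustrative truncated-normal distributions)."""
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        cover = max(10.0, rng.gauss(40.0, 8.0))   # mm
        c_s = max(0.2, rng.gauss(0.6, 0.1))       # % of binder weight
        times.append(initiation_time(cover, 30.0, c_s, 0.1))
    return times

times = mc_initiation()
```

The empirical distribution of `times` is the simulated distribution of the time to corrosion initiation that the papers estimate for the bridge pier example.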
Stochastic Model for Population Exposed to Low Level Risk
International Nuclear Information System (INIS)
Merkle, J.M.
1996-01-01
In this paper the stochastic model for population size, i.e. calculation of the number of deaths due to lethal stochastic health effects caused by exposure to low-level ionising radiation, is presented. The model is defined for a subpopulation with parameters (a, b) being fixed. Using the corresponding density function, it is possible to find all the quantities of interest by averaging over all possible values of (a, b). All processes are at first defined for one radionuclide, exposure pathway and health effect under consideration. The results obtained in this paper are the basic quantities in risk assessment, loss of life expectancy, etc. The results presented in this paper are also applicable to other sources of low-level risk, not only radiation risk.
Stochastic modeling of the hypothalamic pulse generator activity.
Camproux, A C; Thalabard, J C; Thomas, G
1994-11-01
Luteinizing hormone (LH) is released by the pituitary in discrete pulses. In the monkey, the appearance of LH pulses in the plasma is invariably associated with sharp increases (i.e., volleys) in the frequency of the hypothalamic pulse generator electrical activity, so that continuous monitoring of this activity by telemetry provides a unique means to study the temporal structure of the mechanism generating the pulses. To assess whether the times of occurrence and durations of previous volleys exert significant influence on the timing of the next volley, we used a class of periodic counting process models that specify the stochastic intensity of the process as the product of two factors: 1) a periodic baseline intensity and 2) a stochastic regression function with covariates representing the influence of the past. This approach allows the characterization of circadian modulation and memory range of the process underlying hypothalamic pulse generator activity, as illustrated by fitting the model to experimental data from two ovariectomized rhesus monkeys.
On a Versatile Stochastic Growth Model
Directory of Open Access Journals (Sweden)
Samiur Arif
2012-06-01
Full Text Available Growth phenomena are ubiquitous and pervasive not only in biology and the medical sciences, but also in economics, marketing and the computer and social sciences. We introduce a three-parameter version of the classic pure-birth process growth model that, when suitably instantiated, can be used to model growth phenomena in many seemingly unrelated application domains. We point out that the model is computationally attractive since it admits conceptually simple, closed-form solutions for the time-dependent probabilities.
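For the classic one-parameter pure-birth (Yule) process that the three-parameter model generalizes, the time-dependent probabilities are geometric in closed form; a sketch comparing them with direct simulation (parameters are illustrative):

```python
import math
import random

def yule_size(lam, t, rng):
    """Simulate a pure-birth (Yule) process from one individual: with n
    individuals alive, the next birth arrives after an Exp(lam * n) time."""
    n, elapsed = 1, 0.0
    while True:
        elapsed += rng.expovariate(lam * n)
        if elapsed > t:
            return n
        n += 1

def yule_pmf(lam, t, n):
    """Closed-form P(N(t) = n | N(0) = 1): geometric with p = exp(-lam*t)."""
    p = math.exp(-lam * t)
    return p * (1.0 - p) ** (n - 1)

rng = random.Random(5)
lam, t, trials = 0.5, 2.0, 20000
counts = {}
for _ in range(trials):
    n = yule_size(lam, t, rng)
    counts[n] = counts.get(n, 0) + 1
empirical_p1 = counts.get(1, 0) / trials   # should approach exp(-lam*t)
```

This is the "conceptually simple, closed-form" structure the abstract refers to: for the Yule special case the whole time-dependent distribution is available without solving any differential equations.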
An extension of Clarke's model with stochastic amplitude flip processes
Hoel, Hakon
2014-07-01
Stochastic modeling is an essential tool for studying statistical properties of wireless channels. In multipath fading channel (MFC) models, the signal reception is modeled by a sum of wave path contributions, and Clarke's model is an important example of such which has been widely accepted in many wireless applications. However, since Clarke's model is temporally deterministic, Feng and Field noted that it does not model real wireless channels with time-varying randomness well. Here, we extend Clarke's model to a novel time-varying stochastic MFC model with scatterers randomly flipping on and off. Statistical properties of the MFC model are analyzed and shown to fit well with real signal measurements, and a limit Gaussian process is derived from the model when the number of active wave paths tends to infinity. A second focus of this work is a comparison study of the error and computational cost of generating signal realizations from the MFC model and from its limit Gaussian process. By rigorous analysis and numerical studies, we show that in many settings, signal realizations are generated more efficiently by Gaussian process algorithms than by the MFC model's algorithm. Numerical examples that strengthen these observations are also presented. © 2014 IEEE.
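A minimal sum-of-wave-paths realization in the spirit of the classic (temporally deterministic) Clarke model, without the on/off flipping the paper adds, can be sketched as follows; path count, Doppler frequency and sampling are illustrative.

```python
import cmath
import math
import random

def clarke_channel(n_paths, f_d, times, rng):
    """Classic Clarke-style baseband channel gain: a normalized sum of
    equal-power wave paths with random angles of arrival and phases, each
    Doppler-shifted by f_d * cos(angle). Once the angles and phases are
    drawn, the time evolution is deterministic."""
    angles = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_paths)]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_paths)]
    norm = 1.0 / math.sqrt(n_paths)
    return [norm * sum(cmath.exp(1j * (2.0 * math.pi * f_d * math.cos(a) * t + p))
                       for a, p in zip(angles, phases))
            for t in times]

rng = random.Random(6)
times = [k * 1e-3 for k in range(2000)]           # 2 s at 1 kHz sampling
gains = clarke_channel(n_paths=64, f_d=30.0, times=times, rng=rng)
mean_power = sum(abs(g) ** 2 for g in gains) / len(gains)   # about 1
```

As the number of paths grows the gain tends to a complex Gaussian process (Rayleigh-fading envelope), which is the limit process the paper exploits for efficient generation.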
Stochastic Models of Molecule Formation on Dust
Charnley, Steven; Wirstroem, Eva
2011-01-01
We will present new theoretical models for the formation of molecules on dust. The growth of ice mantles and their layered structure is accounted for and compared directly to observations through simulation of the expected ice absorption spectra
Deterministic geologic processes and stochastic modeling
International Nuclear Information System (INIS)
Rautman, C.A.; Flint, A.L.
1992-01-01
This paper reports that recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. consideration of the spatial variability indicates that her are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. Because the geologic processes responsible for formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling
Stochastic Multiscale Modeling of Polycrystalline Materials
2013-01-01
The single-grid strategy is adopted. The crystal visco-plastic constitutive model proposed in [7], along with a Voce-type hardening model described in [97], is used with γ̇₀ = 1 s⁻¹ and m = 0.1. The parameters in the Voce-type hardening law are selected according to [97]: κ₀ = 47.0 MPa, κ₁ = 86.0 MPa
Inference of a Nonlinear Stochastic Model of the Cardiorespiratory Interaction
Smelyanskiy, V. N.; Luchinsky, D. G.; Stefanovska, A.; McClintock, P. V.
2005-03-01
We reconstruct a nonlinear stochastic model of the cardiorespiratory interaction in terms of a set of polynomial basis functions representing the nonlinear force governing system oscillations. The strength and direction of coupling and noise intensity are simultaneously inferred from a univariate blood pressure signal. Our new inference technique does not require extensive global optimization, and it is applicable to a wide range of complex dynamical systems subject to noise.
Stochastic Model Predictive Control with Applications in Smart Energy Systems
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Edlund, Kristian; Mølbak, Tommy
2012-01-01
to cover more than 50% of the total consumption by 2050. Energy systems based on significant amounts of renewable energy sources are subject to uncertainties. To accommodate the need for model predictive control (MPC) of such systems, the effect of the stochastic effects on the constraints must...... study, we consider a system consisting of fuel-fired thermal power plants, wind farms and electric vehicles....
Stochastic model of Rayleigh-Taylor turbulent mixing
International Nuclear Information System (INIS)
Abarzhi, S.I.; Cadjan, M.; Fedotov, S.
2007-01-01
We propose a stochastic model to describe the random character of the dissipation process in Rayleigh-Taylor turbulent mixing. The parameter alpha, used conventionally to characterize the mixing growth-rate, is not a universal constant and is very sensitive to the statistical properties of the dissipation. The ratio between the rates of momentum loss and momentum gain is a statistical invariant and a robust parameter for diagnostics, with or without turbulent diffusion accounted for.
Dynamic analysis of a stochastic delayed rumor propagation model
Jia, Fangju; Lv, Guangying; Wang, Shuangfeng; Zou, Guang-an
2018-02-01
The rapid development of the Internet, especially the emergence of the social networks, has led rumor propagation into a new media era. In this paper, we are concerned with a stochastic delayed rumor propagation model. Firstly, we obtain the existence of the global solution. Secondly, sufficient conditions for extinction of the rumor are established. Lastly, the boundedness of solution is proved and some simulations are given to verify our results.
Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes
DEFF Research Database (Denmark)
Starke, Jens; Reichert, Christian; Eiswirth, Markus
2007-01-01
Three levels of modeling, microscopic, mesoscopic and macroscopic are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic while the model on the macroscopic level is deterministic. It can......, such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations while for intermediate pressures phenomena...
Five challenges for stochastic epidemic models involving global transmission
Directory of Open Access Journals (Sweden)
Tom Britton
2015-03-01
Full Text Available The most basic stochastic epidemic models are those involving global transmission, meaning that infection rates depend only on the type and state of the individuals involved, and not on their location in the population. Simple as they are, there are still several open problems for such models. For example, when will such an epidemic go extinct, and with what probability (questions whose answers depend on the population being fixed, changing or growing)? How can a model be defined explaining the sometimes observed scenario of frequent mid-sized epidemic outbreaks? How can evolution of the infectious agent transmission rates be modelled and fitted to data in a robust way?
Extinction in neutrally stable stochastic Lotka-Volterra models
Dobrinevski, Alexander; Frey, Erwin
2012-05-01
Populations of competing biological species exhibit a fascinating interplay between the nonlinear dynamics of evolutionary selection forces and random fluctuations arising from the stochastic nature of the interactions. The processes leading to extinction of species, whose understanding is a key component in the study of evolution and biodiversity, are influenced by both of these factors. Here, we investigate a class of stochastic population dynamics models based on generalized Lotka-Volterra systems. In the case of neutral stability of the underlying deterministic model, the impact of intrinsic noise on the survival of species is dramatic: It destroys coexistence of interacting species on a time scale proportional to the population size. We introduce a new method based on stochastic averaging which allows one to understand this extinction process quantitatively by reduction to a lower-dimensional effective dynamics. This is performed analytically for two highly symmetrical models and can be generalized numerically to more complex situations. The extinction probability distributions and other quantities of interest we obtain show excellent agreement with simulations.
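The extinction on a time scale proportional to population size can be observed directly in a minimal Gillespie simulation of a stochastic Lotka-Volterra reaction scheme whose deterministic limit is neutrally stable; the rates and population sizes below are illustrative, and the paper's analytical treatment uses stochastic averaging rather than brute-force simulation.

```python
import random

def lv_extinction_time(n0, rng, t_max=10000.0):
    """Gillespie simulation of a stochastic Lotka-Volterra scheme:
    prey birth A -> 2A (rate a*A), predation A+B -> 2B (rate b*A*B/N),
    predator death B -> 0 (rate d*B). The deterministic limit cycles
    neutrally around the fixed point (N, N); intrinsic noise eventually
    drives one species extinct."""
    a, b, d, N = 1.0, 1.0, 1.0, n0
    A, B, t = n0, n0, 0.0
    while t < t_max:
        if A == 0 or B == 0:
            return t                     # coexistence destroyed
        r1, r2, r3 = a * A, b * A * B / N, d * B
        total = r1 + r2 + r3
        t += rng.expovariate(total)
        u = rng.random() * total
        if u < r1:
            A += 1                       # prey birth
        elif u < r1 + r2:
            A, B = A - 1, B + 1          # predation event
        else:
            B -= 1                       # predator death
    return t_max

rng = random.Random(7)
t_small = sum(lv_extinction_time(10, rng) for _ in range(10)) / 10
t_large = sum(lv_extinction_time(50, rng) for _ in range(10)) / 10
```

Averaged over repetitions, the time to extinction grows with the system size `n0`, consistent with the time scale proportional to population size reported in the abstract.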
Dynamic-stochastic modeling of snow cover formation on the European territory of Russia
Directory of Open Access Journals (Sweden)
A. N. Gelfan
2014-01-01
Full Text Available A dynamic-stochastic model, which combines a deterministic model of snow cover formation with a stochastic weather generator, has been developed. The deterministic snow model describes temporal change of the snow depth, content of ice and liquid water, snow density, snowmelt, sublimation, re-freezing of melt water, and snow metamorphism. The model has been calibrated and validated against the long-term data of snow measurements over the territory of the European Russia. The model showed good performance in simulating time series of the snow water equivalent and snow depth. The developed weather generator (NEsted Weather Generator, NewGen includes nested generators of annual, monthly and daily time series of weather variables (namely, precipitation, air temperature, and air humidity. The parameters of the NewGen have been adjusted through calibration against the long-term meteorological data in the European Russia. A disaggregation procedure has been proposed for transforming parameters of the annual weather generator into the parameters of the monthly one and, subsequently, into the parameters of the daily generator. Multi-year time series of the simulated daily weather variables have been used as an input to the snow model. Probability properties of the snow cover, such as snow water equivalent and snow depth for return periods of 25 and 100 years, have been estimated against the observed data, showing good correlation coefficients. The described model has been applied to different landscapes of European Russia, from steppe to taiga regions, to show the robustness of the proposed technique.
On a Stochastic Model in Insurance
Indian Academy of Sciences (India)
Insurance mathematics today is considered a part of applied probability theory. Main objectives are modelling of claims that arrive in an insurance business, and deciding how premiums are to be charged to avoid ruin of the insurance company.
Influence of rainfall observation network on model calibration and application
Directory of Open Access Journals (Sweden)
A. Bárdossy
2008-01-01
Full Text Available The objective in this study is to investigate the influence of the spatial resolution of the rainfall input on the model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with the precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of the raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for the calibration as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above described precipitation field. The simulated hydrographs obtained in the above described three sets of experiments are analyzed through comparison of the computed Nash-Sutcliffe coefficient and several goodness-of-fit indexes. The results show that a model using different raingauge networks might need re-calibration of the model parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well on dense precipitation information, while a model calibrated on dense precipitation information fails on sparse precipitation information. Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
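The Nash-Sutcliffe coefficient used above to compare simulated and observed hydrographs has a simple closed form, 1 minus the ratio of the error sum of squares to the variance sum of the observations; a minimal sketch with made-up discharge values:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / sum((obs - mean_obs)^2).
    1.0 is a perfect fit; 0.0 means the model is no better than simply
    predicting the observed mean; negative values are worse than that."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    svar = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / svar

# Illustrative discharge series (arbitrary units, not data from the study)
obs = [1.0, 3.0, 2.0, 5.0, 4.0]
assert nash_sutcliffe(obs, obs) == 1.0        # perfect model
nse = nash_sutcliffe(obs, [1.5, 2.5, 2.0, 4.0, 4.5])
```

Because the denominator is the spread of the observations, the score is dimensionless and can be compared across calibration experiments with different raingauge densities, which is how it is used in the study.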
Ghodsi, Seyed Hamed; Kerachian, Reza; Estalaki, Siamak Malakpour; Nikoo, Mohammad Reza; Zahmatkesh, Zahra
2016-02-01
In this paper, two multilateral, multi-issue, non-cooperative bargaining methodologies, one deterministic and one stochastic, are proposed for urban runoff quality management. In the proposed methodologies, a calibrated Storm Water Management Model (SWMM) is used to simulate stormwater runoff quantity and quality for different urban stormwater runoff management scenarios, which have been defined considering several Low Impact Development (LID) techniques. In the deterministic methodology, the best management scenario, representing location and area of LID controls, is identified using the bargaining model. In the stochastic methodology, uncertainties of some key parameters of SWMM are analyzed using the info-gap theory. For each water quality management scenario, robustness and opportuneness criteria are determined based on utility functions of different stakeholders. Then, to find the best solution, the bargaining model is performed considering a combination of robustness and opportuneness criteria for each scenario based on the utility function of each stakeholder. The results of applying the proposed methodology in the Velenjak urban watershed located in the northeastern part of Tehran, the capital city of Iran, illustrate its practical utility for conflict resolution in urban water quantity and quality management. It is shown that the solution obtained using the deterministic model cannot outperform the result of the stochastic model considering the robustness and opportuneness criteria. Therefore, it can be concluded that the stochastic model, which incorporates the main uncertainties, could provide more reliable results.
Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo
2016-04-01
Urban drainage response is highly dependent on the spatial and temporal structure of rainfall. Therefore, measuring and simulating rainfall at a high spatial and temporal resolution is a fundamental step to fully assess urban drainage system reliability and related uncertainties. This is even more relevant when considering extreme rainfall events. However, the current space-time rainfall models have limitations in capturing extreme rainfall intensity statistics for short durations. Here, we use the STREAP (Space-Time Realizations of Areal Precipitation) model, which is a novel stochastic rainfall generator for simulating high-resolution rainfall fields that preserve the spatio-temporal structure of rainfall and its statistical characteristics. The model enables generation of rain fields at 10² m and minute scales in a fast and computationally efficient way, matching the requirements for hydrological analysis of urban drainage systems. The STREAP model was applied successfully in the past to generate high-resolution extreme rainfall intensities over a small domain. A sub-catchment in the city of Luzern (Switzerland) was chosen as a case study to: (i) evaluate the ability of STREAP to disaggregate extreme rainfall intensities for urban drainage applications; (ii) assess the role of stochastic climate variability of rainfall in flow response and (iii) evaluate the degree of non-linearity between extreme rainfall intensity and system response (i.e. flow) for a small urban catchment. The channel flow at the catchment outlet is simulated by means of a calibrated hydrodynamic sewer model.
Stochastic Modelling of Wireless Energy Transfer
Veilleux, Shaun; Almaghasilah, Ahmed; Abedi, Ali; Wilkerson, DeLisa
2017-01-01
This study investigates the efficiency of a new method of powering remote sensors by the means of wireless energy transfer. The increased use of sensors for data collection comes with the inherent cost of supplying power from sources such as power cables or batteries. Wireless energy transfer technology eliminates the need for power cables or periodic battery replacement. The time and cost of setting up or expanding a sensor network will be reduced while allowing sensors to be placed in areas where running power cables or battery replacement is not feasible. This paper models wireless channels for power and data separately. Smart scheduling for the data channel is proposed to avoid transmitting data on a noisy channel where the probability of data loss is high to improve power efficiency. Analytical models have been developed and verified using simulations.
Stochastic modeling of a serial killer.
Simkin, M V; Roychowdhury, V P
2014-08-21
We analyze the time pattern of the activity of a serial killer, who during 12 years had murdered 53 people. The plot of the cumulative number of murders as a function of time is of "Devil's staircase" type. The distribution of the intervals between murders (step length) follows a power law with the exponent of 1.4. We propose a model according to which the serial killer commits murders when neuronal excitation in his brain exceeds certain threshold. We model this neural activity as a branching process, which in turn is approximated by a random walk. As the distribution of the random walk return times is a power law with the exponent 1.5, the distribution of the inter-murder intervals is thus explained. We illustrate analytical results by numerical simulation. Time pattern activity data from two other serial killers further substantiate our analysis.
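The mechanism invoked here, first-return times of a random walk following a power law with exponent 3/2, is easy to reproduce; a sketch with a simple ±1 walk standing in for the neuronal excitation model (step count and seed are arbitrary):

```python
import random

def return_times(n_steps, seed=8):
    """Gaps between successive returns to the origin of a +/-1 random walk.
    These first-return times have a power-law tail with exponent 3/2, the
    heavy-tailed mechanism proposed for the inter-event intervals."""
    rng = random.Random(seed)
    x, last, gaps = 0, 0, []
    for step in range(1, n_steps + 1):
        x += 1 if rng.random() < 0.5 else -1
        if x == 0:                       # walk is back at the threshold
            gaps.append(step - last)
            last = step
    return gaps

gaps = return_times(200000)
# For a simple symmetric walk, P(first return = 2) = 1/2, so about half
# of all gaps equal the minimum length 2; the rest form the heavy tail.
frac_short = sum(1 for g in gaps if g == 2) / len(gaps)
```

The long gaps in the heavy tail are what produce the "Devil's staircase" appearance of the cumulative event count.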
Neural network connectivity and response latency modelled by stochastic processes
DEFF Research Database (Denmark)
Tamborrino, Massimiliano
Stochastic processes and their first passage times have been widely used to describe the membrane potential dynamics of single neurons and to reproduce neuronal spikes, respectively. However, the cerebral cortex in human brains is estimated to contain 10-20 billion neurons, and each of them is connected to thousands of other neurons. The first question is: how to model neural networks through stochastic processes? A multivariate Ornstein-Uhlenbeck process, obtained as a diffusion approximation of a jump process, is the proposed answer. Obviously, dependencies between neurons imply dependencies between their spike times. Therefore, the second question is: how to detect neural network connectivity from simultaneously recorded spike trains? Answering this question corresponds to investigating the joint distribution of sequences of first passage times. A non-parametric method based on copulas...
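The spike latency such models produce is the first-passage time of an Ornstein-Uhlenbeck process through a firing threshold; a minimal univariate Euler-Maruyama sketch with illustrative parameters (not taken from the thesis):

```python
import math
import random

def ou_first_passage(theta, mu, sigma, x0, barrier, dt=1e-3, seed=9, t_max=50.0):
    """Euler-Maruyama first-passage time of the Ornstein-Uhlenbeck process
    dX = theta*(mu - X) dt + sigma dW through a threshold: the classic
    diffusion model of a neuron's membrane potential and spike latency."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while x < barrier and t < t_max:
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return t

# Suprathreshold regime (mu above the barrier): firing is fast and reliable;
# noiselessly the crossing would occur at t = ln(2) for these parameters.
latencies = [ou_first_passage(1.0, 2.0, 0.3, 0.0, 1.0, seed=s) for s in range(50)]
mean_latency = sum(latencies) / len(latencies)
```

In the multivariate setting the thesis studies, correlated noise terms couple the components, and the joint distribution of such passage times across neurons is what carries the connectivity information.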
Environmental versus demographic variability in stochastic predator–prey models
International Nuclear Information System (INIS)
Dobramysl, U; Täuber, U C
2013-01-01
In contrast to the neutral population cycles of the deterministic mean-field Lotka–Volterra rate equations, including spatial structure and stochastic noise in models for predator–prey interactions yields complex spatio-temporal structures associated with long-lived erratic population oscillations. Environmental variability in the form of quenched spatial randomness in the predation rates results in more localized activity patches. Our previous study showed that population fluctuations in rare favorable regions in turn cause a remarkable increase in the asymptotic densities of both predators and prey. Very intriguing features are found when variable interaction rates are affixed to individual particles rather than lattice sites. Stochastic dynamics with demographic variability in conjunction with inheritable predation efficiencies generate non-trivial time evolution for the predation rate distributions, yet with overall essentially neutral optimization. (paper)
Classical and quantum stochastic models of resistive and memristive circuits
Gough, John E.; Zhang, Guofeng
2017-07-01
The purpose of this paper is to examine stochastic Markovian models for circuits in phase space for which the drift term is equivalent to the standard circuit equations. In particular, we include dissipative components corresponding to both a resistor and a memristor in series. We obtain a dilation of the problem which is canonical in the sense that the underlying Poisson bracket structure is preserved under the stochastic flow. We do this first of all for standard Wiener noise but also treat the problem using a new concept of symplectic noise, where the Poisson structure is extended to the noise as well as the circuit variables, and in particular where we have canonically conjugate noises. Finally, we construct a dilation which describes the quantum mechanical analogue.
Modeling nanoparticle uptake and intracellular distribution using stochastic process algebras
Energy Technology Data Exchange (ETDEWEB)
Dobay, M. P. D., E-mail: maria.pamela.david@physik.uni-muenchen.de; Alberola, A. Piera; Mendoza, E. R.; Raedler, J. O., E-mail: joachim.raedler@physik.uni-muenchen.de [Ludwig-Maximilians University, Faculty of Physics, Center for NanoScience (Germany)
2012-03-15
Computational modeling is increasingly important to help understand the interaction and movement of nanoparticles (NPs) within living cells, and to come to terms with the wealth of data that microscopy imaging yields. A quantitative description of the spatio-temporal distribution of NPs inside cells, however, is challenging due to the complexity of multiple compartments such as endosomes and nuclei, which themselves are dynamic and can undergo fusion and fission and exchange their content. Here, we show that stochastic pi calculus, a widely-used process algebra, is well suited for mapping surface and intracellular NP interactions and distributions. In stochastic pi calculus, each NP is represented as a process, which can adopt various states such as bound or aggregated, as well as be passed between processes representing location, as a function of predefined stochastic channels. We created a pi calculus model of gold NP uptake and intracellular movement and compared the evolution of surface-bound, cytosolic, endosomal, and nuclear NP densities with electron microscopy data. We demonstrate that the computational approach can be extended to include specific molecular binding and potential interaction with signaling cascades as characteristic for NP-cell interactions in a wide range of applications such as nanotoxicity, viral infection, and drug delivery.
Uncertainty quantification and stochastic modeling with Matlab
Souza de Cursi, Eduardo
2015-01-01
Uncertainty Quantification (UQ) is a relatively new research area which describes the methods and approaches used to supply quantitative descriptions of the effects of uncertainty, variability and errors in simulation problems and models. It is rapidly becoming a field of increasing importance, with many real-world applications within statistics, mathematics, probability and engineering, but also within the natural sciences. Literature on the topic has up until now been largely based on polynomial chaos, which raises difficulties when considering different types of approximation and does no
Stochastic Parametrisations and Regime Behaviour of Atmospheric Models
Arnold, Hannah; Moroz, Irene; Palmer, Tim
2013-04-01
The presence of regimes is a characteristic of non-linear, chaotic systems (Lorenz, 2006). In the atmosphere, regimes emerge as familiar circulation patterns such as the El Niño Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO) and Scandinavian Blocking events. In recent years there has been much interest in the problem of identifying and studying atmospheric regimes (Solomon et al, 2007). In particular, how do these regimes respond to an external forcing such as anthropogenic greenhouse gas emissions? The importance of regimes in observed trends over the past 50-100 years indicates that in order to predict anthropogenic climate change, our climate models must be able to accurately represent natural circulation regimes, their statistics and variability. It is well established that representing model uncertainty as well as initial condition uncertainty is important for reliable weather forecasts (Palmer, 2001). In particular, stochastic parametrisation schemes have been shown to improve the skill of weather forecast models (e.g. Berner et al., 2009; Frenkel et al., 2012; Palmer et al., 2009). It is possible that including stochastic physics as a representation of model uncertainty could also be beneficial in climate modelling, enabling the simulator to explore larger regions of the climate attractor including other flow regimes. An alternative representation of model uncertainty is a perturbed parameter scheme, whereby physical parameters in subgrid parametrisation schemes are perturbed about their optimal value. Perturbing parameters gives a greater control over the ensemble than multi-model or multiparametrisation ensembles, and has been used as a representation of model uncertainty in climate prediction (Stainforth et al., 2005; Rougier et al., 2009). We investigate the effect of including representations of model uncertainty on the regime behaviour of a simulator. A simple chaotic model of the atmosphere, the Lorenz '96 system, is used to study
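The Lorenz '96 testbed named in the abstract is compact enough to sketch directly. Below is a forward-Euler integration with an optional additive noise term standing in for a stochastic parametrisation; the step size, forcing F = 8 and noise amplitude are the usual illustrative choices, not values from this study.

```python
import random

# Lorenz '96: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
# with an optional additive stochastic term (sigma > 0) standing in
# for a stochastic parametrisation scheme.
def l96_step(x, dt=0.01, F=8.0, sigma=0.0, rng=random):
    n = len(x)
    dx = [(x[(i + 1) % n] - x[i - 2]) * x[i - 1] - x[i] + F for i in range(n)]
    return [xi + dt * di + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            for xi, di in zip(x, dx)]

x = [8.0] * 40          # start at the unstable equilibrium x_i = F
x[0] += 0.01            # small perturbation to trigger chaotic behaviour
for _ in range(1000):   # integrate 10 time units
    x = l96_step(x)
```

Running ensembles of this system with and without the noise term is the standard way regime statistics of stochastic schemes are compared.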
On the abelianity of the stochastic sandpile model
Nunzi, François
2016-01-01
We consider a stochastic variant of the Abelian Sandpile Model (ASM) on a finite graph, introduced by Chan, Marckert and Selig. Even though it is a more general model, some nice properties still hold. We show that on a certain probability space, even if we lose the group structure due to topplings not being deterministic, some operators still commute. As a corollary, we show that the stationary distribution still does not depend on how sand grains are added onto the graph in our model, answer...
A single model procedure for estimating tank calibration equations
International Nuclear Information System (INIS)
Liebetrau, A.M.
1997-10-01
A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes
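The segment-extension idea lends itself to a short sketch: per-segment basis functions (here simple hinge terms) are extended over the whole height domain, so all coefficients are estimated in one least-squares fit rather than segment by segment. The knots and synthetic height-volume data are illustrative, not TANCS inputs or outputs.

```python
import numpy as np

# One unified fit across tank segments: each segment contributes a hinge
# basis function defined on the whole height domain, and the calibration
# function is the sum of these extended polynomials.
knots = [0.0, 40.0, 70.0]                  # segment boundaries (height units)

def design(h):
    h = np.asarray(h, dtype=float)
    cols = [np.ones_like(h)] + [np.maximum(h - k, 0.0) for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
heights = np.linspace(0.0, 100.0, 60)
true_vol = 2.0 * heights + 0.5 * np.maximum(heights - 40.0, 0.0)
volumes = true_vol + rng.normal(0.0, 0.5, heights.size)   # noisy calibration runs

coef, *_ = np.linalg.lstsq(design(heights), volumes, rcond=None)
fitted = design(heights) @ coef
```

Because all segments share one parameter vector, between-segment correlations fall out of a single covariance matrix instead of being patched together afterwards.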
SWAT Model Configuration, Calibration and Validation for Lake Champlain Basin
The Soil and Water Assessment Tool (SWAT) model was used to develop phosphorus loading estimates for sources in the Lake Champlain Basin. This document describes the model setup and parameterization, and presents calibration results.
Test models for improving filtering with model errors through stochastic parameter estimation
International Nuclear Information System (INIS)
Gershgorin, B.; Harlim, J.; Majda, A.J.
2010-01-01
The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
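The additive-bias correction described above can be illustrated with a small filter: an imperfect scalar model is augmented with an unknown forcing bias b, and a Kalman filter (linear here, hence exact) learns b from noisy observations. This is a generic stand-in for the idea, not the exact SPEKF test-model formulas.

```python
import numpy as np

# Stochastic parameter estimation sketch: filter the augmented state [u, b]
# for du = (-gamma u + b) dt + noise with unknown constant bias b.
# All numbers are illustrative.
rng = np.random.default_rng(6)
dt, gamma, b_true = 0.1, 0.5, 1.5
u, obs = 0.0, []
for _ in range(300):                         # simulate "nature" with the bias
    u += (-gamma * u + b_true) * dt + 0.1 * np.sqrt(dt) * rng.normal()
    obs.append(u + 0.2 * rng.normal())

A = np.array([[1.0 - gamma * dt, dt], [0.0, 1.0]])   # state transition
Q = np.diag([0.1 ** 2 * dt, 1e-6])                   # model/parameter noise
H = np.array([[1.0, 0.0]])                           # observe u only
R = np.array([[0.2 ** 2]])
m, P = np.zeros(2), np.eye(2)
for y in obs:
    m, P = A @ m, A @ P @ A.T + Q                    # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    m = m + (K @ (np.array([y]) - H @ m)).ravel()    # correct
    P = (np.eye(2) - K @ H) @ P

b_est = m[1]
```

The filter's bias estimate converges toward the true value even though the forecast model without the augmented parameter is systematically wrong.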
Two new algorithms to combine kriging with stochastic modelling
Venema, Victor; Lindau, Ralf; Varnai, Tamas; Simmer, Clemens
2010-05-01
Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value for every point as well as possible. In case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process is simulated driven by such a kriged field. Stochastic modelling aims at reproducing the statistical structure of the data in space and time. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a certain data set. While stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered. This requires the use of so-called constrained stochastic models. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately. In addition, the correlations within the cloud field are important, especially because of horizontal photon transport. This explains the success of surrogate cloud fields for use in 3D radiative transfer studies. Up to now, however, we could only achieve good results for the radiative properties averaged over the field, but not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. This algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field. The nudging strength is gradually
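The modified IAAFT loop can be sketched in one dimension: iterate between (a) imposing the data's amplitude spectrum, (b) imposing its value distribution by rank mapping, and (c) the extra step of nudging the surrogate towards a kriged field. A moving average stands in for kriging, and the nudging schedule is illustrative.

```python
import numpy as np

# IAAFT surrogate with an added nudging step towards a "kriged" field.
rng = np.random.default_rng(2)
data = np.cumsum(rng.normal(size=256))                   # "measured" 1-D field
kriged = np.convolve(data, np.ones(9) / 9, mode="same")  # stand-in for kriging

target_amp = np.abs(np.fft.rfft(data))                   # spectrum to impose
sorted_vals = np.sort(data)                              # distribution to impose
surrogate = rng.permutation(data)
for it in range(50):
    # (a) impose the amplitude spectrum, keeping the surrogate's phases
    spec = np.fft.rfft(surrogate)
    phases = np.exp(1j * np.angle(spec))
    surrogate = np.fft.irfft(target_amp * phases, n=data.size)
    # (b) impose the value distribution by rank mapping
    ranks = np.argsort(np.argsort(surrogate))
    surrogate = sorted_vals[ranks]
    # (c) nudge towards the kriged field, more strongly as iterations proceed
    w = 0.02 * it / 50
    surrogate = (1 - w) * surrogate + w * kriged
```

The result keeps the statistical structure of the data while being pulled towards the measurement-conditioned field at the observed locations.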
STOCHASTIC MODEL OF THE SPIN DISTRIBUTION OF DARK MATTER HALOS
Energy Technology Data Exchange (ETDEWEB)
Kim, Juhan [Center for Advanced Computation, Korea Institute for Advanced Study, Heogiro 85, Seoul 130-722 (Korea, Republic of); Choi, Yun-Young [Department of Astronomy and Space Science, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of); Kim, Sungsoo S.; Lee, Jeong-Eun [School of Space Research, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of)
2015-09-15
We employ a stochastic approach to probing the origin of the log-normal distributions of halo spin in N-body simulations. After analyzing spin evolution in halo merging trees, it was found that a spin change can be characterized by a stochastic random walk of angular momentum. Also, spin distributions generated by random walks are fairly consistent with those directly obtained from N-body simulations. We derived a stochastic differential equation from a widely used spin definition and measured the probability distributions of the derived angular momentum change from a massive set of halo merging trees. The roles of major merging and accretion are also statistically analyzed in evolving spin distributions. Several factors (local environment, halo mass, merging mass ratio, and redshift) are found to influence the angular momentum change. The spin distributions generated in the mean-field or void regions tend to shift slightly to a higher spin value compared with simulated spin distributions, which seems to be caused by the correlated random walks. We verified the assumption of randomness in the angular momentum change observed in the N-body simulation and detected several degrees of correlation between walks, which may provide a clue for the discrepancies between the simulated and generated spin distributions in the voids. However, the generated spin distributions in the group and cluster regions successfully match the simulated spin distribution. We also demonstrated that the log-normality of the spin distribution is a natural consequence of the stochastic differential equation of the halo spin, which is well described by the Geometric Brownian Motion model.
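The closing claim, that log-normality follows from a geometric-Brownian-motion model of the spin, can be checked directly with the exact GBM solution: log(spin) is then exactly Gaussian. Drift, volatility and the initial spin value below are illustrative, not fitted to the merging trees.

```python
import math
import random

# Exact GBM solution: s_T = s_0 exp((mu - sigma^2/2) T + sigma sqrt(T) Z),
# so log(s_T) is normal and s_T itself is log-normal.
rng = random.Random(3)
mu, sigma, T, n = 0.0, 0.5, 1.0, 4000
spins = []
for _ in range(n):
    z = rng.gauss(0.0, 1.0)
    spins.append(0.04 * math.exp((mu - 0.5 * sigma ** 2) * T
                                 + sigma * math.sqrt(T) * z))

logs = [math.log(s) for s in spins]
mean_log = sum(logs) / n     # should approach log(0.04) + (mu - sigma^2/2) T
```

The sample mean of log(spin) matches the analytic value, which is the log-normality mechanism the abstract attributes to the stochastic differential equation of halo spin.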
The stochastic resonance for the incidence function model of metapopulation
Li, Jiang-Cheng; Dong, Zhi-Wei; Zhou, Ruo-Wei; Li, Yun-Xian; Qian, Zhen-Wei
2017-06-01
A stochastic model with endogenous and exogenous periodicities is proposed in this paper on the basis of metapopulation dynamics to model the crop yield losses due to pests and diseases. The rationale is that crop yield losses occur because the physiology of the growing crop is negatively affected by pests and diseases in a dynamic way over time as crop both grows and develops. Metapopulation dynamics can thus be used to model the resultant crop yield losses. The stochastic metapopulation process is described by using the Simplified Incidence Function model (IFM). Compared to the original IFMs, endogenous and exogenous periodicities are considered in the proposed model to handle the cyclical patterns observed in pest infestations, diseases epidemics, and exogenous affecting factors such as temperature and rainfalls. Agricultural loss data in China are used to fit the proposed model. Experimental results demonstrate that: (1) Model with endogenous and exogenous periodicities is a better fit; (2) When the internal system fluctuations and external environmental fluctuations are negatively correlated, EIL or the cost of loss is monotonically increasing; when the internal system fluctuations and external environmental fluctuations are positively correlated, an outbreak of pests and diseases might occur; (3) If the internal system fluctuations and external environmental fluctuations are positively correlated, an optimal patch size can be identified which will greatly weaken the effects of external environmental influence and hence inhibit pest infestations and disease epidemics.
Optimizing ZigBee Security using Stochastic Model Checking
DEFF Research Database (Denmark)
Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming
ZigBee is a fairly new but promising wireless sensor network standard that offers the advantages of simple and low resource communication. Nevertheless, security is of great concern to ZigBee, and enhancements are prescribed in the latest ZigBee specification: ZigBee-2007. In this technical report, we identify an important gap in the specification on key updates, and present a methodology for determining optimal key update policies and security parameters. We exploit the stochastic model checking approach using the probabilistic model checker PRISM, and assess the security needs for realistic…
Green function simulation of Hamiltonian lattice models with stochastic reconfiguration
International Nuclear Information System (INIS)
Beccaria, M.
2000-01-01
We apply a recently proposed Green function Monte Carlo procedure to the study of Hamiltonian lattice gauge theories. This class of algorithms computes quantum vacuum expectation values by averaging over a set of suitably weighted random walkers. By means of a procedure called stochastic reconfiguration, the long-standing problem of keeping the walker population fixed without a priori knowledge of the ground state is completely solved. In the U(1)₂ model, which we choose as our theoretical laboratory, we evaluate the mean plaquette and the vacuum energy per plaquette. We find good agreement with previous works using model-dependent guiding functions for the random walkers. (orig.)
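The stochastic-reconfiguration step can be sketched in a few lines: a population of weighted walkers is resampled, with probability proportional to weight, back to a fixed population size, so no trial ground-state estimate is needed to control the walker number. The toy "walk" and weight function here are illustrative, not the lattice gauge dynamics.

```python
import math
import random

# Weighted random walkers with a stochastic-reconfiguration step that keeps
# the population size fixed. The Gaussian weight factor is a toy stand-in.
rng = random.Random(9)
walkers = [(0.0, 1.0)] * 100                 # (position, weight)

def step(walkers):
    moved = [(x + rng.gauss(0.0, 0.1), w * math.exp(-0.005 * x * x))
             for x, w in walkers]
    total = sum(w for _, w in moved)
    # reconfiguration: resample N walkers with probability proportional to weight
    positions = rng.choices([x for x, _ in moved],
                            weights=[w for _, w in moved], k=len(moved))
    mean_w = total / len(moved)              # mean weight carries the estimator
    return [(x, 1.0) for x in positions], mean_w

for _ in range(50):
    walkers, mean_w = step(walkers)
```

After each reconfiguration the weights are reset to one, while the mean weight per step is retained as the quantity from which expectation values are accumulated.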
Stochastic series expansion simulation of the t -V model
Wang, Lei; Liu, Ye-Hua; Troyer, Matthias
2016-04-01
We present an algorithm for the efficient simulation of the half-filled spinless t -V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from the time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t -V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.
ARMA modelling of neutron stochastic processes with large measurement noise
International Nuclear Information System (INIS)
Zavaljevski, N.; Kostic, Lj.; Pesic, M.
1994-01-01
An autoregressive moving average (ARMA) model of the neutron fluctuations with large measurement noise is derived from Langevin stochastic equations and validated using time series data obtained during prompt neutron decay constant measurements at the zero power reactor RB in Vinca. Model parameters are estimated using the maximum likelihood (ML) off-line algorithm and an adaptive pole estimation algorithm based on the recursive prediction error method (RPE). The results show that subcriticality can be determined from real data with high measurement noise using a much shorter statistical sample than in standard methods. (author)
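The robustness to measurement noise has a simple illustration: for an ARMA(1,1) signal observed with added white noise, the autocovariances at lags ≥ 1 are unaffected by the noise, so the AR pole a is still consistently estimated by the ratio c(2)/c(1). This moment estimator is a simple stand-in for the ML/RPE algorithms of the paper, and all parameter values are illustrative.

```python
import random

# ARMA(1,1): x_t = a x_{t-1} + e_t + b e_{t-1}, observed with large white
# measurement noise. White noise only inflates the lag-0 autocovariance,
# so a_hat = c(2)/c(1) still recovers the AR pole.
rng = random.Random(4)
a, b = 0.85, 0.3
y, e_prev, x = [], 0.0, 0.0
for _ in range(20000):
    e = rng.gauss(0.0, 1.0)
    x = a * x + e + b * e_prev
    e_prev = e
    y.append(x + rng.gauss(0.0, 2.0))    # measurement noise, sd 2

m = sum(y) / len(y)
def acov(lag):
    return sum((y[i] - m) * (y[i + lag] - m)
               for i in range(len(y) - lag)) / len(y)

a_hat = acov(2) / acov(1)
```

The estimate stays close to the true pole even with measurement noise twice the signal scale, which is the effect the abstract exploits for subcriticality determination.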
Dynamic analysis of a stochastic rumor propagation model
Jia, Fangju; Lv, Guangying
2018-01-01
The rapid development of the Internet, especially the emergence of social networks, has led rumor propagation into a new media era. In this paper, we are concerned with a stochastic rumor propagation model. Sufficient conditions for extinction and persistence in the mean of the rumor are established. The threshold between persistence in the mean and extinction of the rumor is obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R0 of the deterministic system.
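The noise-lowered threshold can be demonstrated with an Euler-Maruyama simulation of a simple logistic rumor model standing in for the paper's system: with di = i(1-i)β dt - δi dt + σi dW, the effective persistence condition is roughly β - δ - σ²/2 > 0, so a rumor that persists deterministically (β/δ > 1) can die out under strong noise. Parameter values are illustrative.

```python
import math
import random

# Euler-Maruyama for a stochastic logistic rumor model: white noise on the
# spreader fraction lowers the persistence threshold below R0 = beta/delta.
def run(sigma, rng, beta=0.6, delta=0.4, dt=0.01, steps=40000):
    i = 0.2
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        i += (beta * i * (1 - i) - delta * i) * dt + sigma * i * dw
        i = min(max(i, 0.0), 1.0)     # keep the fraction in [0, 1]
    return i

rng = random.Random(5)
persistent = run(sigma=0.05, rng=rng)  # weak noise: beta - delta - sigma^2/2 > 0
extinct = run(sigma=1.0, rng=rng)      # strong noise: effective threshold crossed
```

With β/δ = 1.5 > 1 in both runs, only the noise amplitude decides whether the rumor settles near its endemic level or goes extinct.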
Calibration of the Site-Scale Saturated Zone Flow Model
International Nuclear Information System (INIS)
Zyvoloski, G. A.
2001-01-01
The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include geologic and hydrologic data that are used to form the hydrogeologic framework model; also, it would include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities. Especially important are those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan ''Calibration of the Site-Scale Saturated Zone Flow Model'' (CRWMS M and O 1999a)
Model Calibration of Exciter and PSS Using Extended Kalman Filter
Energy Technology Data Exchange (ETDEWEB)
Kalsi, Karanjit; Du, Pengwei; Huang, Zhenyu
2012-07-26
Power system modeling and controls continue to become more complex with the advent of smart grid technologies and large-scale deployment of renewable energy resources. As demonstrated in recent studies, inaccurate system models could lead to large-scale blackouts, thereby motivating the need for model calibration. Current methods of model calibration rely on manual tuning based on engineering experience, are time consuming and could yield inaccurate parameter estimates. In this paper, the Extended Kalman Filter (EKF) is used as a tool to calibrate exciter and Power System Stabilizer (PSS) models of a particular type of machine in the Western Electricity Coordinating Council (WECC). The EKF-based parameter estimation is a recursive prediction-correction process which uses the mismatch between simulation and measurement to adjust the model parameters at every time step. Numerical simulations using actual field test data demonstrate the effectiveness of the proposed approach in calibrating the parameters.
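The recursive prediction-correction idea can be sketched on a generic first-order exciter-like model dE/dt = (K(Vref - V) - E)/T with unknown gain K: the state is augmented to [E, K] and an EKF adjusts K at each step from the simulation-measurement mismatch. The model and all numbers are illustrative stand-ins, not the WECC machine models or field-test data.

```python
import numpy as np

# EKF calibration of an unknown exciter gain K from noisy output measurements.
rng = np.random.default_rng(7)
dt, T, K_true, Vref = 0.02, 0.5, 50.0, 1.0
V = lambda k: 0.95 + 0.02 * np.sin(0.05 * k)        # measured terminal voltage
E, obs = 0.0, []
for k in range(500):                                 # "field test" data
    E += dt * (K_true * (Vref - V(k)) - E) / T
    obs.append(E + 0.05 * rng.normal())

m, P = np.array([0.0, 20.0]), np.diag([1.0, 400.0])  # initial K guess: 20
Q, R = np.diag([1e-4, 1e-2]), 0.05 ** 2
for k, y in enumerate(obs):
    err = Vref - V(k)
    # predict: E <- E + dt (K err - E)/T, K persists (random walk)
    F = np.array([[1.0 - dt / T, dt * err / T], [0.0, 1.0]])  # Jacobian
    m = np.array([m[0] + dt * (m[1] * err - m[0]) / T, m[1]])
    P = F @ P @ F.T + Q
    # correct with the measured output y = E + noise
    S = P[0, 0] + R
    K_gain = P[:, 0] / S
    m = m + K_gain * (y - m[0])
    P = P - np.outer(K_gain, P[0, :])

K_est = m[1]
```

Each measurement nudges the gain estimate toward the value that reconciles simulation and data, which is the mismatch-driven adjustment the abstract describes.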
Hand-eye calibration using a target registration error model.
Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M
2017-10-01
Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) are recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.
Fermentation process tracking through enhanced spectral calibration modeling.
Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah
2007-06-15
The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), where windows of wavelengths are automatically selected which are subsequently used as the basis of the calibration model. However, due to the non-uniqueness of the windows selected when the algorithm is executed repeatedly, multiple models are constructed and these are then combined using stacking thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
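The window-plus-stacking idea can be sketched with synthetic spectra: fit a separate linear calibration on each candidate wavelength window, then stack the window models with weights inversely proportional to their validation error. Plain least squares stands in for PLS, the windows are fixed rather than automatically selected, and the data are illustrative.

```python
import numpy as np

# Spectral window selection + stacking sketch: per-window calibrations are
# combined with weights proportional to inverse validation MSE.
rng = np.random.default_rng(8)
n, p = 80, 60
X = rng.normal(size=(n, p))                              # spectra (rows = samples)
y = X[:, 10:20].sum(axis=1) + 0.1 * rng.normal(size=n)   # "concentration"

windows = [(0, 20), (20, 40), (40, 60)]                  # candidate windows
tr, va = slice(0, 60), slice(60, 80)                     # train / validation split
preds, weights = [], []
for lo, hi in windows:
    coef, *_ = np.linalg.lstsq(X[tr, lo:hi], y[tr], rcond=None)
    p_va = X[va, lo:hi] @ coef
    mse = float(np.mean((p_va - y[va]) ** 2))
    preds.append(p_va)
    weights.append(1.0 / (mse + 1e-12))

w = np.array(weights) / sum(weights)
stacked = sum(wi * pi for wi, pi in zip(w, preds))       # stacked prediction
```

The stacking weights concentrate on the informative window, so the combined model inherits its accuracy while remaining robust to which windows happened to be proposed.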
A stochastic phase-field model determined from molecular dynamics
von Schwerin, Erik; Szepessy, Anders
2010-01-01
The dynamics of dendritic growth of a crystal in an undercooled melt is determined by macroscopic diffusion-convection of heat and by capillary forces acting on the nanometer scale of the solid-liquid interface width. Its modelling is useful for instance in processing techniques based on casting. The phase-field method is widely used to study evolution of such microstructural phase transformations on a continuum level; it couples the energy equation to a phenomenological Allen-Cahn/Ginzburg-Landau equation modelling the dynamics of an order parameter determining the solid and liquid phases, including also stochastic fluctuations to obtain the qualitatively correct result of dendritic side branching. This work presents a method to determine stochastic phase-field models from atomistic formulations by coarse-graining molecular dynamics. It has three steps: (1) a precise quantitative atomistic definition of the phase-field variable, based on the local potential energy; (2) derivation of its coarse-grained dynamics model, from microscopic Smoluchowski molecular dynamics (that is Brownian or over damped Langevin dynamics); and (3) numerical computation of the coarse-grained model functions. The coarse-grained model approximates Gibbs ensemble averages of the atomistic phase-field, by choosing coarse-grained drift and diffusion functions that minimize the approximation error of observables in this ensemble average. © EDP Sciences, SMAI, 2010.
Stochastic modeling for river pollution of Sungai Perlis
Energy Technology Data Exchange (ETDEWEB)
Yunus, Nurul Izzaty Mohd.; Rahman, Haliza Abd. [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia,81310 Johor Bahru, Johor (Malaysia); Bahar, Arifah [UTM-Centre of Industrial and Applied Mathematics (UTM-CIAM) Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor (Malaysia)
2015-02-03
River pollution has been recognized as a contributor to a wide range of health problems and disorders in humans. It can pose health dangers to humans who come into contact with it, either directly or indirectly. It is therefore important to measure the concentration of Biochemical Oxygen Demand (BOD) as a water quality parameter, since the parameter has long been the basic means for determining the degree of water pollution in rivers. In this study, BOD is used as a parameter to estimate the water quality at Sungai Perlis. It has been observed that Sungai Perlis is polluted due to lack of management and improper use of resources. Therefore, it is of importance to model the Sungai Perlis water quality in order to describe and predict the water quality systems. The BOD concentration secondary data set used was extracted from the Drainage and Irrigation Department Perlis State website. The first order differential equation from the Streeter-Phelps model was utilized as a deterministic model. Then, the model was developed into a stochastic model. Results from this study show that the stochastic model is more adequate to describe and predict the BOD concentration and the water quality systems in Sungai Perlis by having a smaller value of mean squared error (MSE).
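The deterministic starting point named in the abstract is the classical Streeter-Phelps solution: BOD decays as L(t) = L0 exp(-kd t), and the dissolved-oxygen deficit D(t) follows the sag curve with a maximum at the critical time. The rate constants below are textbook-style illustrative values, not those fitted for Sungai Perlis.

```python
import math

# Streeter-Phelps oxygen-sag model: first-order BOD decay plus reaeration.
L0, D0 = 10.0, 1.0      # initial BOD and DO deficit (mg/L)
kd, ka = 0.3, 0.6       # deoxygenation and reaeration rates (1/day)

def bod(t):
    """BOD remaining at time t (days)."""
    return L0 * math.exp(-kd * t)

def deficit(t):
    """Dissolved-oxygen deficit at time t (days)."""
    return ((kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t))
            + D0 * math.exp(-ka * t))

# critical time of maximum deficit (classical formula)
tc = math.log((ka / kd) * (1 - D0 * (ka - kd) / (kd * L0))) / (ka - kd)
```

The stochastic extension in the paper perturbs this deterministic skeleton; comparing its mean squared error against the ODE solution above is what motivates the stochastic model's adoption.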
Stochastic modelling of avascular tumour growth and therapy
International Nuclear Information System (INIS)
Sahoo, S; Sahoo, A; Shearer, S F C
2011-01-01
In this paper, a generalized stochastic model for the growth of avascular tumours is presented. This model captures the dynamical evolution of avascular tumour cell subpopulations by incorporating Gaussian white noise into the growth rate of the mitotic function. This work generalizes the deterministic model proposed by Sherratt and Chaplain (2001 J. Math. Biol. 43 291) where they formulated a tumour model in an in vivo setting, in terms of continuum densities of proliferating, quiescent and necrotic cells. Detailed simulations of our model show that the inclusion of Gaussian noise in the original model of Sherratt and Chaplain substantially distorts the overall structure of the density profiles in addition to reducing the speed of tumour growth. Within this stochastic carcinogenesis framework the action of therapy is also investigated by replacing Gaussian white noise with a therapy term. We compare a constant therapy protocol with a logarithmic time-dependent protocol. Our results predict that a logarithmic therapy is more effective than the constant therapy protocol.
A stochastic surplus production model in continuous time
DEFF Research Database (Denmark)
Pedersen, Martin Wæver; Berg, Casper Willestofte
2017-01-01
Surplus production modelling has a long history as a method for managing data-limited fish stocks. Recent advancements have cast surplus production models as state-space models that separate random variability of stock dynamics from error in observed indices of biomass. We present a stochastic surplus production model in continuous time (SPiCT), which in addition to stock dynamics also models the dynamics of the fisheries. This enables error in the catch process to be reflected in the uncertainty of estimated model parameters and management quantities. Benefits of the continuous-time state-space model formulation include the ability to provide estimates of exploitable biomass and fishing mortality at any point in time from data sampled at arbitrary and possibly irregular intervals. We show in a simulation that the ability to analyse subannual data can increase the effective sample size…
Cosmic CARNage I: on the calibration of galaxy formation models
Knebe, Alexander; Pearce, Frazer R.; Gonzalez-Perez, Violeta; Thomas, Peter A.; Benson, Andrew; Asquith, Rachel; Blaizot, Jeremy; Bower, Richard; Carretero, Jorge; Castander, Francisco J.; Cattaneo, Andrea; Cora, Sofía A.; Croton, Darren J.; Cui, Weiguang; Cunnama, Daniel; Devriendt, Julien E.; Elahi, Pascal J.; Font, Andreea; Fontanot, Fabio; Gargiulo, Ignacio D.; Helly, John; Henriques, Bruno; Lee, Jaehyun; Mamon, Gary A.; Onions, Julian; Padilla, Nelson D.; Power, Chris; Pujol, Arnau; Ruiz, Andrés N.; Srisawat, Chaichalit; Stevens, Adam R. H.; Tollet, Edouard; Vega-Martínez, Cristian A.; Yi, Sukyoung K.
2018-04-01
We present a comparison of nine galaxy formation models, eight semi-analytical and one halo occupation distribution model, run on the same underlying cold dark matter simulation (cosmological box of comoving width 125 h⁻¹ Mpc, with a dark-matter particle mass of 1.24 × 10⁹ h⁻¹ M⊙) and the same merger trees. While their free parameters have been calibrated to the same observational data sets using two approaches, they nevertheless retain some `memory' of any previous calibration that served as the starting point (especially for the manually tuned models). For the first calibration, models reproduce the observed z = 0 galaxy stellar mass function (SMF) within 3σ. The second calibration extended the observational data to include the z = 2 SMF alongside the z ≈ 0 star formation rate function, cold gas mass, and the black hole-bulge mass relation. Encapsulating the observed evolution of the SMF from z = 2 to 0 is found to be very hard within the context of the physics currently included in the models. We finally use our calibrated models to study the evolution of the stellar-to-halo mass (SHM) ratio. For all models, we find that the peak value of the SHM relation decreases with redshift. However, the trends seen for the evolution of the peak position as well as the mean scatter in the SHM relation are rather weak and strongly model dependent. Both the calibration data sets and model results are publicly available.
Cumulative error models for the tank calibration problem
International Nuclear Information System (INIS)
Goldman, A.; Anderson, L.G.; Weber, J.
1983-01-01
The purpose of a tank calibration equation is to obtain an estimate of the liquid volume that corresponds to a liquid level measurement. Calibration experimental errors occur in both liquid level and liquid volume measurements. If one of the errors is relatively small, the calibration equation can be determined from well-known regression and calibration methods. If both variables are assumed to be in error, then for linear cases a prototype model should be considered. Many investigators are not familiar with this model or do not have computing facilities capable of obtaining numerical solutions. This paper discusses and compares three linear models that approximate the prototype model and have the advantage of much simpler computations. Comparisons among the four models and recommendations of suitability are made from simulations and from analyses of six sets of experimental data
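When both liquid level and liquid volume carry measurement error, one simple linear errors-in-both-variables fit of the kind this abstract discusses is Deming regression. The sketch below assumes a known error-variance ratio `delta`; the data are synthetic, not from the paper's six experimental sets, and Deming regression stands in for (rather than reproduces) the three approximate models compared there.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression for y = a + b*x with errors in both variables.

    delta is the assumed ratio var(error in y) / var(error in x).
    """
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).mean()
    syy = ((y - my) ** 2).mean()
    sxy = ((x - mx) * (y - my)).mean()
    b = (syy - delta * sxx
         + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    a = my - b * mx
    return a, b

rng = np.random.default_rng(1)
level = np.linspace(0.0, 10.0, 50)
true_vol = 2.0 + 3.0 * level                    # 'true' calibration line
x_obs = level + rng.normal(0.0, 0.05, 50)       # level measurement error
y_obs = true_vol + rng.normal(0.0, 0.05, 50)    # volume measurement error
a, b = deming_fit(x_obs, y_obs, delta=1.0)
```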
Modelling biochemical reaction systems by stochastic differential equations with reflection.
Niu, Yuanling; Burrage, Kevin; Chen, Luonan
2016-05-07
In this paper, we give a new framework for modelling and simulating biochemical reaction systems by stochastic differential equations with reflection, not in a heuristic way but in a mathematical way. The model is computationally efficient compared with the discrete-state Markov chain approach, and it ensures that both analytic and numerical solutions remain in a biologically plausible region. Specifically, our model mathematically ensures that species numbers lie in the domain D, which is a physical constraint for biochemical reactions, in contrast to the previous models. The domain D is actually obtained according to the structure of the corresponding chemical Langevin equations, i.e., the boundary is inherent in the biochemical reaction system. A variant of the projection method was employed to solve the reflected stochastic differential equation model. It comprises three simple steps: first the Euler-Maruyama method is applied to the equations, then the resulting point is checked for membership in the domain D, and if it lies outside, an orthogonal projection is performed. It is found that the projection onto the closure D̄ is the solution to a convex quadratic programming problem. Thus, existing methods for the convex quadratic programming problem can be employed for the orthogonal projection map. Numerical tests on several important problems in biological systems confirmed the efficiency and accuracy of this approach.
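The three-step scheme described in this abstract can be sketched for a scalar equation with the box domain D = [0, xmax], where the orthogonal projection onto the closure of D reduces to clipping (in the general case it requires solving the convex quadratic program). The drift and diffusion terms below are illustrative, not taken from the paper.

```python
import numpy as np

def reflected_em(x0, drift, diffusion, t_end, dt, xmax, rng):
    """Reflected Euler-Maruyama on the box domain D = [0, xmax]."""
    n = round(t_end / dt)
    x = x0
    for _ in range(n):
        # step 1: plain Euler-Maruyama step
        dW = rng.normal(0.0, np.sqrt(dt))
        x_new = x + drift(x) * dt + diffusion(x) * dW
        # steps 2-3: check membership in D; if outside, project onto D
        # (for a box, the orthogonal projection is clipping)
        x = min(max(x_new, 0.0), xmax)
    return x

rng = np.random.default_rng(2)
# illustrative birth-death-like dynamics: drift toward 50, CLE-style
# square-root diffusion, guarded against a negative argument
x_final = reflected_em(
    x0=5.0,
    drift=lambda x: 10.0 - 0.2 * x,
    diffusion=lambda x: np.sqrt(max(10.0 + 0.2 * x, 0.0)),
    t_end=20.0, dt=0.01, xmax=200.0, rng=rng)
```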
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
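A minimal Python analogue (the paper's own code is MATLAB) of the two frameworks, for a single degradation reaction A → ∅ with rate c: the reaction-rate ODE da/dt = -ca for the concentration, and one Gillespie realisation of the corresponding chemical master equation for the molecule count. Parameters are illustrative.

```python
import numpy as np

def ode_solution(a0, c, t):
    """Deterministic framework: closed-form solution of da/dt = -c*a."""
    return a0 * np.exp(-c * t)

def gillespie_degradation(n0, c, t_end, rng):
    """Stochastic framework: one CME realisation of A -> 0 (Gillespie)."""
    t, n = 0.0, n0
    while n > 0:
        propensity = c * n
        tau = rng.exponential(1.0 / propensity)  # time to next reaction
        if t + tau > t_end:
            break
        t += tau
        n -= 1
    return n

rng = np.random.default_rng(3)
mean_count = np.mean([gillespie_degradation(100, 0.5, 2.0, rng)
                      for _ in range(500)])
ode_value = ode_solution(100.0, 0.5, 2.0)
```

Averaged over many realisations, the stochastic counts track the deterministic concentration, which is the comparison such toolboxes are built to make.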
Sampling from stochastic reservoir models constrained by production data
Energy Technology Data Exchange (ETDEWEB)
Hegstad, Bjoern Kaare
1997-12-31
When a petroleum reservoir is evaluated, it is important to forecast future production of oil and gas and to assess forecast uncertainty. This is done by defining a stochastic model for the reservoir characteristics, generating realizations from this model and applying a fluid flow simulator to the realizations. The reservoir characteristics define the geometry of the reservoir, initial saturation, petrophysical properties etc. This thesis discusses how to generate realizations constrained by production data, that is to say, the realizations should reproduce the observed production history of the petroleum reservoir within the uncertainty of these data. The topics discussed are: (1) Theoretical framework, (2) History matching, forecasting and forecasting uncertainty, (3) A three-dimensional test case, (4) Modelling transmissibility multipliers by Markov random fields, (5) Upscaling, (6) The link between model parameters, well observations and production history in a simple test case, (7) Sampling the posterior using optimization in a hierarchical model, (8) A comparison of Rejection Sampling and the Metropolis-Hastings algorithm, (9) Stochastic simulation and conditioning by annealing in reservoir description, and (10) Uncertainty assessment in history matching and forecasting. 139 refs., 85 figs., 1 tab.
Evapotranspiration Estimates for a Stochastic Soil-Moisture Model
Chaleeraktrakoon, Chavalit; Somsakun, Somrit
2009-03-01
Potential evapotranspiration is information that is necessary for applying a widely used stochastic model of soil moisture (I. Rodriguez Iturbe, A. Porporato, L. Ridolfi, V. Isham and D. R. Cox, Probabilistic modelling of water balance at a point: The role of climate, soil and vegetation, Proc. Roy. Soc. London A455 (1999) 3789-3805). An objective of the present paper is thus to find a proper estimate of the evapotranspiration for the stochastic model. This estimate is obtained by comparing the calculated soil-moisture distributions resulting from various techniques, such as Thornthwaite, Makkink, Jensen-Haise, FAO Modified Penman, and Blaney-Criddle, with an observed one. The comparison results, using five sequences of daily soil moisture for a dry season from November 2003 to April 2004 (Udornthani Province, Thailand), indicate that all methods can be used if the required weather information is available, because their soil-moisture distributions are alike. In addition, the model is shown to approximately describe the phenomenon at a weekly or biweekly time scale, which is desirable for agricultural engineering applications.
Clustering network layers with the strata multilayer stochastic block model.
Stanley, Natalie; Shai, Saray; Taylor, Dane; Mucha, Peter J
2016-01-01
Multilayer networks are a useful data structure for simultaneously capturing multiple types of relationships between a set of nodes. In such networks, each relational definition gives rise to a layer. While each layer provides its own set of information, community structure across layers can be collectively utilized to discover and quantify underlying relational patterns between nodes. To concisely extract information from a multilayer network, we propose to identify and combine sets of layers with meaningful similarities in community structure. In this paper, we describe the "strata multilayer stochastic block model" (sMLSBM), a probabilistic model for multilayer community structure. The central extension of the model is that there exist groups of layers, called "strata", which are defined such that all layers in a given stratum have community structure described by a common stochastic block model (SBM). That is, layers in a stratum exhibit similar node-to-community assignments and SBM probability parameters. Fitting the sMLSBM to a multilayer network provides a joint clustering that yields node-to-community and layer-to-stratum assignments, which cooperatively aid one another during inference. We describe an algorithm for separating layers into their appropriate strata and an inference technique for estimating the SBM parameters for each stratum. We demonstrate our method using synthetic networks and a multilayer network inferred from data collected in the Human Microbiome Project.
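The generative side of the model described in this abstract can be sketched as follows: layers within one stratum are sampled from a common SBM (shared node-to-community assignments and connection probabilities). This illustrates the strata assumption only, not the sMLSBM inference algorithm; community sizes and probabilities are illustrative.

```python
import numpy as np

def sample_sbm(labels, p_in, p_out, rng):
    """Sample an undirected SBM adjacency matrix for given community labels.

    Edges occur with probability p_in within a community, p_out between.
    """
    n = len(labels)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if labels[i] == labels[j] else p_out
            if rng.random() < p:
                A[i, j] = A[j, i] = 1
    return A

rng = np.random.default_rng(4)
# one stratum: two communities of 20 nodes each; all layers in the stratum
# share the same labels and the same SBM probability parameters
labels_stratum1 = np.array([0] * 20 + [1] * 20)
layers = [sample_sbm(labels_stratum1, 0.6, 0.05, rng) for _ in range(3)]
```

Fitting the sMLSBM would run in the opposite direction: given such layers, infer which layers share a stratum and estimate each stratum's SBM parameters.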
Testing of a one dimensional model for Field II calibration
DEFF Research Database (Denmark)
Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten
2008-01-01
Field II is a program for simulating ultrasound transducer fields. It is capable of calculating the emitted and pulse-echoed fields for both pulsed and continuous wave transducers. To make it fully calibrated, a model of the transducer's electro-mechanical impulse response must be included. We examine an adapted one-dimensional transducer model originally proposed by Willatzen [9] to calibrate Field II. This model is modified to calculate the required impulse responses needed by Field II for a calibrated field pressure and external circuit current calculation. The testing has been performed … to the calibrated Field II program for 1, 4, and 10 cycle excitations. Two parameter sets were applied for modeling: one real-valued Pz27 parameter set, manufacturer supplied, and one complex-valued parameter set found in the literature, Algueró et al. [11]. The latter implicitly accounts for attenuation. Results show…
Balance between calibration objectives in a conceptual hydrological model
Booij, Martijn J.; Krol, Martinus S.
2010-01-01
Three different measures to determine the optimum balance between calibration objectives are compared: the combined rank method, parameter identifiability and model validation. Four objectives (water balance, hydrograph shape, high flows, low flows) are included in each measure. The contributions of
Directory of Open Access Journals (Sweden)
Xiaona Leng
2017-06-01
This paper proposes a new nonlinear stochastic SIVS epidemic model with a double epidemic hypothesis and Lévy jumps. The main purpose of this paper is to investigate the threshold dynamics of the stochastic SIVS epidemic model. By using a series of stochastic inequalities, we obtain sufficient conditions for the persistence in mean and extinction of the stochastic system, and the threshold which governs the extinction and the spread of the epidemic diseases. Finally, this paper describes the results of numerical simulations investigating the dynamical effects of stochastic disturbance. Our results significantly improve and generalize the corresponding results in the recent literature. The developed theoretical methods and stochastic inequality techniques can be used to investigate high-dimensional nonlinear stochastic differential systems.
Stochastic radiative transfer model for mixture of discontinuous vegetation canopies
International Nuclear Information System (INIS)
Shabanov, Nikolay V.; Huang, D.; Knjazikhin, Y.; Dickinson, R.E.; Myneni, Ranga B.
2007-01-01
Modeling of the radiation regime of a mixture of vegetation species is a fundamental problem of the Earth's land remote sensing and climate applications. The major existing approaches, including the linear mixture model and the turbid medium (TM) mixture radiative transfer model, provide only an approximate solution to this problem. In this study, we developed the stochastic mixture radiative transfer (SMRT) model, a mathematically exact tool to evaluate the radiation regime in a natural canopy with spatially varying optical properties, that is, a canopy which exhibits a structured mixture of vegetation species and gaps. The model solves for the radiation quantities that are direct inputs to remote sensing/climate applications: mean radiation fluxes over the whole mixture and over individual species. The canopy structure is parameterized in the SMRT model in terms of two stochastic moments: the probability of finding species and the conditional pair-correlation of species. The second moment is responsible for the 3D radiation effects, namely, radiation streaming through gaps without interaction with vegetation and variation of the radiation fluxes between different species. We performed analytical and numerical analysis of the radiation effects, simulated with the SMRT model for three cases of canopy structure: (a) non-ordered mixture of species and gaps (TM); (b) ordered mixture of species without gaps; and (c) ordered mixture of species with gaps. The analysis indicates that the variation of radiation fluxes between different species is proportional to the variation of species optical properties (leaf albedo, density of foliage, etc.). Gaps introduce significant disturbance to the radiation regime in the canopy as their optical properties constitute a major contrast to those of any vegetation species. The SMRT model resolves deficiencies of the major existing mixture models: ignorance of species radiation coupling via multiple scattering of photons (the linear mixture model
Stochastic Modelling and Optimization of Complex Infrastructure Systems
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
In this paper it is shown that recent progress in stochastic modelling and optimization in combination with advanced computer systems has now made it possible to improve the design and the maintenance strategies for infrastructure systems. The paper concentrates on highway networks and single large bridges. The United States has perhaps the largest highway network in the world, with more than 0.5 million highway bridges; see Chase, S.B. 1999. About 40% of these bridges are considered deficient and more than $50 billion is estimated needed to correct the deficiencies; see Roberts, J.E. 2001…
Elementary amplitudes from full QCD and the stochastic vacuum model
International Nuclear Information System (INIS)
Martini, A.F.; Menon, M.J.
2002-01-01
In a previous work, making use of the gluon gauge-invariant two-point correlation function determined from lattice QCD in the quenched approximation and the stochastic vacuum model, we determined the elementary (parton-parton) scattering amplitude in the momentum transfer space. In this communication we compute the elementary amplitude from new lattice QCD calculations that include the effects of dynamical fermions (full QCD). The main conclusion is that the inclusion of dynamical fermions leads to a normalized elementary amplitude that decreases more quickly with the momentum transfer than that in the quenched approximation. (author)
Investment timing decisions in a stochastic duopoly model
Energy Technology Data Exchange (ETDEWEB)
Marseguerra, Giovanni [Istituto di Econometria e CRANEC, Universita Cattolica del Sacro Cuore di Milan (Italy)]. E-mail: giovanni.marseguerra@unicatt.it; Cortelezzi, Flavia [Dipartimento di Diritto ed Economia delle Persone e delle Imprese, Universita dell' Insubria (Italy)]. E-mail: flavia.cortelezzi@uninsubria.it; Dominioni, Armando [CORE-Catholique de Louvain la Neuve (Belgium)]. E-mail: dominioni@core.ucl.ac.be
2006-08-15
We investigate the role of strategic considerations on the optimal timing of investment when firms compete for a new market (e.g., the provision of an innovative product) under demand uncertainty. Within a continuous time model of stochastic oligopoly, we show that strategic considerations are likely to be of limited impact when the new product is radically innovative whilst the fear of a rival's entry may deeply affect firms' decisions whenever innovation is to some extent limited. The welfare analysis shows surprisingly that the desirability of the different market structures considered does not depend on the fixed entry cost.
Stochastic modeling of compressive strength of phosphorus slag content cement
Directory of Open Access Journals (Sweden)
Ali Allahverdi
2016-07-01
One of the common methods for quick determination of compressive strength, one of the most important properties for assessment of cement quality, is to apply various modeling approaches. This study is aimed at finding a model for estimating the compressive strength of phosphorus slag content cements. For this purpose, the compressive strengths of chemically activated high phosphorus slag content cement prepared from phosphorus slag (80 wt.%), Portland cement (14 wt.%) and a compound chemical activator containing sodium sulfate and anhydrite (6 wt.%) were measured at various Blaine finenesses and curing times. Based on the obtained results, a primary stochastic model in terms of curing time and Blaine fineness was developed. Then, another dataset was used to incorporate composition variables, including the weight fractions of phosphorus slag, cement, and activator, in the model. This model can be effectively used to predict the compressive strength of phosphorus slag content cements at various Blaine finenesses, curing times, and compositions.
Algorithmic detectability threshold of the stochastic block model
Kawamoto, Tatsuro
2018-03-01
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
A stochastic pocket model for aluminum agglomeration in solid propellants
Energy Technology Data Exchange (ETDEWEB)
Gallier, Stany [SNPE Materiaux Energetiques, Vert le Petit (France)
2009-04-15
A new model is derived to estimate the size and fraction of aluminum agglomerates at the surface of a burning propellant. The basic idea relies on well-known pocket models in which aluminum is supposed to aggregate and melt within pocket volumes imposed by the largest oxidizer particles. The proposed model essentially relaxes the simple assumptions of previous pocket models on propellant structure by accounting for an actual microstructure obtained by packing. The use of statistical tools from stochastic geometry makes it possible to determine a statistical pocket size volume and hence the agglomerate diameter and agglomeration fraction. Application to several AP/Al propellants gives encouraging results that are shown to be superior to former pocket models.
Stochastic Modeling and Performance Analysis of Multimedia SoCs
DEFF Research Database (Denmark)
Raman, Balaji; Nouri, Ayoub; Gangadharan, Deepak
2013-01-01
Reliability and flexibility are among the key required features of a framework used to model a system. Existing approaches to design resource-constrained, soft real-time systems either provide guarantees for output quality or account for loss in the system, but not both. We propose two independent solutions where each modeling technique has both of the above-mentioned characteristics: a probabilistic analytical framework and a statistical model checking approach to design system-on-chips for low-cost multimedia systems. We apply the modeling techniques to size the output buffer in a video decoder. The results show that, for our stochastic design metric, the analytical framework gives upper bounds (and is relatively accurate) compared to the statistical model checking technique. We also observed a significant reduction in resource usage (such as output buffer size) with tolerable loss in output…
Brownian motion model with stochastic parameters for asset prices
Ching, Soo Huei; Hin, Pooi Ah
2013-09-01
The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Here we consider a model in which the parameter x = (μ,σ) is such that its value x(t + Δt) at a short time Δt ahead of the present time t depends on the value of the asset price at time t + Δt, as well as on the present parameter value x(t) and m−1 other parameter values before time t, via a conditional distribution. Malaysian stock prices are used to compare the performance of the Brownian motion model with fixed parameters with that of the model with stochastic parameters.
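The fixed-parameter benchmark in this abstract is standard geometric Brownian motion; a minimal path-simulation sketch is below. The paper's stochastic-parameter model would redraw (μ, σ) at each step from a conditional distribution, whereas here they stay constant, and all numbers are illustrative.

```python
import numpy as np

def gbm_path(s0, mu, sigma, t_end, dt, rng):
    """Simulate one GBM path via the exact log-normal step:
    S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)."""
    n = round(t_end / dt)
    s = np.empty(n + 1)
    s[0] = s0
    for i in range(n):
        z = rng.normal()
        s[i + 1] = s[i] * np.exp((mu - 0.5 * sigma ** 2) * dt
                                 + sigma * np.sqrt(dt) * z)
    return s

rng = np.random.default_rng(5)
# one year of daily prices with illustrative drift and volatility
path = gbm_path(100.0, 0.08, 0.2, 1.0, 1 / 252, rng)
```

A stochastic-parameter variant would replace the constant `mu` and `sigma` inside the loop with draws conditioned on the current price and recent parameter values.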
Mapping of the stochastic Lotka-Volterra model to models of population genetics and game theory
Constable, George W. A.; McKane, Alan J.
2017-08-01
The relationship between the M -species stochastic Lotka-Volterra competition (SLVC) model and the M -allele Moran model of population genetics is explored via timescale separation arguments. When selection for species is weak and the population size is large but finite, precise conditions are determined for the stochastic dynamics of the SLVC model to be mappable to the neutral Moran model, the Moran model with frequency-independent selection, and the Moran model with frequency-dependent selection (equivalently a game-theoretic formulation of the Moran model). We demonstrate how these mappings can be used to calculate extinction probabilities and the times until a species' extinction in the SLVC model.
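The neutral Moran model referenced in this abstract can be sketched as follows: at each step one individual reproduces and one dies, both chosen uniformly at random. Under neutrality the fixation probability of an allele equals its initial frequency, which the simulation reproduces; population size and trial count are illustrative.

```python
import numpy as np

def moran_fixation(n_a, N, rng):
    """Run the neutral two-allele Moran model until allele A fixes (1)
    or is lost (0), starting from n_a copies in a population of size N."""
    while 0 < n_a < N:
        birth_is_a = rng.random() < n_a / N   # reproducing individual
        death_is_a = rng.random() < n_a / N   # dying individual
        n_a += int(birth_is_a) - int(death_is_a)
    return 1 if n_a == N else 0

rng = np.random.default_rng(6)
N = 20
fix_fraction = np.mean([moran_fixation(5, N, rng) for _ in range(2000)])
# neutral theory: fixation probability = initial frequency = 5/20 = 0.25
```

Selection enters by biasing the birth (or death) probability toward one allele, which is the frequency-dependent regime the SLVC mapping covers.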
Development of the Stochastic Lung Model for Asthma
International Nuclear Information System (INIS)
Dobos, E.; Borbely-Kiss, I.; Kertesz, Zs.; Balashazy, I.
2005-01-01
The Stochastic Lung Model is a state-of-the-art tool for the investigation of the health impact of atmospheric aerosols. This model has already been tested and applied to calculate the deposition fractions of aerosols in different regions of the human respiratory tract. The health effects of inhaled aerosols may strongly depend on the distribution of deposition within the respiratory tract. In the current study three Asthma Models have been incorporated into the Stochastic Lung Deposition Code. A common new feature of these models is that the breathing cycle may be asymmetric; the inspiration time, the expiration time and the two breath-hold times are independent. The code can also simulate mucus blockage. The main characteristics of the models are the following: a) ASTHMA MODEL I: a single input bronchial asthma factor is applied for the whole tracheobronchial region; the code multiplies all tracheobronchial diameters by this single value. b) ASTHMA MODEL II: bronchial asthma factors are given for each bronchial generation as input data (21 values); the program multiplies the diameters of the bronchi by these factors. c) ASTHMA MODEL III: only the range of bronchial asthma factors is given as input data, and the code randomly selects the exact factors in pre-described airway generations; in this case the stochastic character appears in the Asthma Model as well. As an example, Figure 1 shows the deposition fractions in the tracheobronchial and acinar regions of the human lung for healthy and asthmatic adults at sitting breathing conditions as a function of particle size, computed by Asthma Model I with a bronchial asthma factor of 30%. These models have been tested and compared for different types of asthma at various breathing conditions and over a wide range of particle sizes, and the distribution of deposition in the characteristic regions of the respiratory tract has been computed
Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model
Acquah, H. de-Graft; Onumah, E. E.
2014-01-01
Estimating the stochastic frontier model and calculating the technical efficiency of decision making units are of great importance in applied production economics work. This paper estimates technical efficiency from the stochastic frontier model using the Jondrow and the Battese and Coelli approaches. In order to compare alternative methods, simulated data with sample sizes of 60 and 200 are generated from a stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
Energy Technology Data Exchange (ETDEWEB)
Thimmisetty, Charanraj A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Zhao, Wenju [Florida State Univ., Tallahassee, FL (United States). Dept. of Scientific Computing; Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Tong, Charles H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; White, Joshua A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Atmospheric, Earth and Energy Division
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
Moon, Seulgi; Shelef, Eitan; Hilley, George E.
2015-05-01
In this study, we model postglacial surface processes and examine the evolution of the topography and denudation rates within the deglaciated Washington Cascades to understand the controls on and time scales of landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically based slope stability model coupled to a stochastic model of the generation of landslides. The model parameters of river incision and stochastic landslides are calibrated based on the rates and distribution of thousand-year-timescale denudation rates measured from cosmogenic 10Be isotopes. The probability distributions of those model parameters, calculated using a Bayesian inversion scheme, show ranges comparable to those from previous studies in similar rock types and climatic conditions. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), whereas precipitation and slopes affect the spatial variation in landslide denudation rates. Simulation results show that postglacial denudation rates decay over time and take longer than 100 kyr to reach time-invariant rates. Over time, the landslides in the model consume the steep slopes characteristic of deglaciated landscapes. This response time scale is on the order of or longer than glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may produce a significant and prolonged impact on denudation and topography.
A Method to Test Model Calibration Techniques: Preprint
Energy Technology Data Exchange (ETDEWEB)
Judkoff, Ron; Polly, Ben; Neymark, Joel
2016-09-01
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
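The surrogate-data idea can be sketched in a few lines: generate "true" utility bills from a known input model, run a candidate calibration technique on them, and score the three figures of merit. The linear degree-day model and least-squares calibrator below are illustrative stand-ins, not the BPI-2400 or ASHRAE-14 procedures.

```python
import numpy as np

# "True" building: monthly energy = base + slope * heating degree-days (HDD)
rng = np.random.default_rng(1)
hdd = rng.uniform(100.0, 800.0, size=12)       # synthetic monthly HDD
true_base, true_slope = 300.0, 0.8             # hypothetical truth
bills = true_base + true_slope * hdd           # surrogate utility-bill data

# Candidate calibration technique under test: ordinary least squares
A = np.column_stack([np.ones_like(hdd), hdd])
base_hat, slope_hat = np.linalg.lstsq(A, bills, rcond=None)[0]

# Figure of merit 1: accuracy of predicted retrofit savings (retrofit halves slope)
true_savings = 0.5 * true_slope * hdd.sum()
pred_savings = 0.5 * slope_hat * hdd.sum()
fom_savings = abs(pred_savings - true_savings) / true_savings
# Figure of merit 2: closure on the "true" input parameter values
fom_params = abs(base_hat - true_base) + abs(slope_hat - true_slope)
# Figure of merit 3: goodness of fit to the utility-bill data (RMSE)
fom_fit = np.sqrt(np.mean((A @ np.array([base_hat, slope_hat]) - bills) ** 2))
print(fom_savings, fom_params, fom_fit)
```

Because the bills are noise-free here, a sound calibrator recovers the truth exactly; adding noise or model-form error to the surrogate data is where the method becomes informative.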
Simulation of nuclear plant operation into a stochastic energy production model
International Nuclear Information System (INIS)
Pacheco, R.L.
1983-04-01
A simulation model of nuclear plant operation is developed to fit into a stochastic energy production model. In order to improve the stochastic model and to reduce the computational time added by incorporating the nuclear plant operation model, a study of tail truncation of the unsupplied-demand distribution function has been performed. (E.G.) [pt
Modeling energy price dynamics: GARCH versus stochastic volatility
International Nuclear Information System (INIS)
Chan, Joshua C.C.; Grant, Angelia L.
2016-01-01
We compare a number of GARCH and stochastic volatility (SV) models using nine series of oil, petroleum product and natural gas prices in a formal Bayesian model comparison exercise. The competing models include the standard models of GARCH(1,1) and SV with an AR(1) log-volatility process, as well as more flexible models with jumps, volatility in mean, leverage effects, and t distributed and moving average innovations. We find that: (1) SV models generally compare favorably to their GARCH counterparts; (2) the jump component and t distributed innovations substantially improve the performance of the standard GARCH, but are unimportant for the SV model; (3) the volatility feedback channel seems to be superfluous; (4) the moving average component markedly improves the fit of both GARCH and SV models; and (5) the leverage effect is important for modeling crude oil prices—West Texas Intermediate and Brent—but not for other energy prices. Overall, the SV model with moving average innovations is the best model for all nine series. - Highlights: • We compare a variety of GARCH and SV models for fitting nine series of energy prices. • We find that SV models generally compare favorably to their GARCH counterparts. • The SV model with moving average innovations is the best model for all nine series.
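For reference, the GARCH(1,1) half of such a comparison reduces to a one-line variance recursion and a Gaussian likelihood. The sketch below uses illustrative parameters and simulated data, not the paper's energy-price fits; the SV models require filtering or MCMC and are not reproduced here.

```python
import numpy as np

def garch11_loglik(params, r):
    """Gaussian GARCH(1,1) log-likelihood of a zero-mean return series r."""
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                        # initialize at sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
    return -0.5 * np.sum(np.log(2.0 * np.pi * sigma2) + r**2 / sigma2)

# Simulate a GARCH(1,1) path (illustrative parameters)
rng = np.random.default_rng(2)
omega, alpha, beta = 0.1, 0.1, 0.8
n = 2000
r = np.empty(n)
s2 = omega / (1.0 - alpha - beta)              # start at stationary variance
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t]**2 + beta * s2

ll_true = garch11_loglik((omega, alpha, beta), r)
ll_flat = garch11_loglik((r.var(), 0.0, 0.0), r)  # homoskedastic benchmark
print(ll_true > ll_flat)
```

Formal Bayesian model comparison, as in the paper, would integrate this likelihood over a prior rather than evaluate it at a point.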
Stochastic Four-State Mechanochemical Model of F1-ATPase
International Nuclear Information System (INIS)
Wu Weixia; Zhan Yong; Zhao Tongjun; Han Yingrong; Chen Yafei
2010-01-01
F1-ATPase, a part of ATP synthase, can synthesize and hydrolyze ATP molecules; its central γ-subunit rotates inside the α3β3 cylinder. A stochastic four-state mechanochemical coupling model of F1-ATPase is studied with the aid of the master equation. In this model, ATP hydrolysis and synthesis depend on the ATP, ADP, and Pi concentrations. The effects of ATP concentration, ADP concentration, and the external torque on the binding-state occupation probability, the rotation rate, and the diffusion coefficient of F1-ATPase are investigated. Moreover, the results from this model are compared with experiments. The mechanochemical mechanism of F1-ATPase is qualitatively explained by the model. (general)
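A master-equation treatment of a four-state cycle amounts to building a rate matrix and solving for the stationary occupation probabilities. The rates below are hypothetical placeholders, not fitted F1-ATPase values.

```python
import numpy as np

# Transition-rate matrix for a four-state mechanochemical cycle
# (hypothetical rates; k[i, j] = rate of jumping from state i to state j)
k = np.array([[0.0, 2.0, 0.0, 0.5],
              [0.3, 0.0, 1.5, 0.0],
              [0.0, 0.4, 0.0, 1.2],
              [1.0, 0.0, 0.2, 0.0]])

# Master equation dp/dt = p @ Q, with generator Q
Q = k - np.diag(k.sum(axis=1))

# Stationary occupation probabilities: solve p @ Q = 0 with sum(p) = 1
A = np.vstack([Q.T, np.ones(4)])
b = np.concatenate([np.zeros(4), [1.0]])
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(p, 4))
```

Concentration dependence would enter by making some rates proportional to [ATP], [ADP], or [Pi], and the mean rotation rate follows from the net probability flux around the cycle.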
Stochastic models of edge turbulent transport in the thermonuclear reactors
International Nuclear Information System (INIS)
Volchenkov, Dima
2005-01-01
A two-dimensional stochastic model of turbulent transport in the scrape-off layer (SOL) of thermonuclear reactors is considered. Convective instability arising in the system with respect to perturbations reveals itself in strong outward bursts of particle density propagating ballistically across the SOL. The criterion of stability for the fluctuations of particle density is formulated. The possibility of stabilizing the system depends upon a certain type of plasma-wave interaction and a certain scenario of turbulence. A bias of the limiter surface would provide fairly good insulation of the chamber walls except in the resonant cases. The pdf of the particle flux for large-magnitude flux events is modeled with a simple discrete-time toy model of one-dimensional random walks terminating at the boundary. The spectra of wandering times characterize the pdf of the particle flux in the model and qualitatively reproduce the experimental statistics of transport events
Stochastic persistence and stationary distribution in an SIS epidemic model with media coverage
Guo, Wenjuan; Cai, Yongli; Zhang, Qimin; Wang, Weiming
2018-02-01
This paper studies an SIS epidemic model with media coverage, extended from a general deterministic model to a stochastic differential equation with environmental fluctuation. Mathematically, we use the Markov semigroup theory to prove that the basic reproduction number R_0^S can be used to control the dynamics of the stochastic system. Epidemiologically, we show that environmental fluctuation can inhibit the occurrence of the disease: in a case where the disease persists in the deterministic model, the disease still dies out with probability one in the stochastic model. So, to a great extent, the stochastic perturbation under media coverage affects the outbreak of the disease.
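A minimal Euler-Maruyama simulation of a stochastic SIS model for the infected fraction shows the kind of multiplicative environmental-noise term involved. The drift/diffusion form and the parameters below are illustrative assumptions, not the paper's media-coverage model.

```python
import numpy as np

def simulate_sis(beta, gamma, sigma, i0, T, dt, rng):
    """Euler-Maruyama path for an illustrative stochastic SIS model of the
    infected fraction I: dI = (beta*(1-I)*I - gamma*I) dt + sigma*I dW."""
    n = int(T / dt)
    I = np.empty(n + 1)
    I[0] = i0
    for t in range(n):
        dW = np.sqrt(dt) * rng.standard_normal()
        I[t+1] = (I[t] + (beta * (1.0 - I[t]) * I[t] - gamma * I[t]) * dt
                  + sigma * I[t] * dW)
        I[t+1] = min(max(I[t+1], 0.0), 1.0)    # keep the fraction in [0, 1]
    return I

rng = np.random.default_rng(3)
# beta > gamma: deterministically persistent; sufficiently strong noise can
# still drive extinction, echoing the abstract's point
path = simulate_sis(beta=0.5, gamma=0.4, sigma=0.6, i0=0.2, T=500.0, dt=0.1, rng=rng)
print(path.min() >= 0.0 and path.max() <= 1.0)
```

With sigma^2/2 exceeding beta - gamma, sample paths of such models are typically driven toward zero even though the deterministic system persists.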
Kulasiri, Don
2002-01-01
Most natural and biological phenomena, such as solute transport in porous media, exhibit variability which cannot be modeled using deterministic approaches. There is evidence in natural phenomena to suggest that some of the observations cannot be explained by models which give deterministic solutions. Stochastic processes have a rich repository of objects which can be used to express the randomness inherent in the system and the evolution of the system over time. The attractiveness of stochastic differential equations (SDE) and stochastic partial differential equations (SPDE) comes from the fact that we can integrate the variability of the system along with the scientific knowledge pertaining to the system. One of the aims of this book is to explain some useful concepts in stochastic dynamics so that scientists and engineers with a background in undergraduate differential calculus can appreciate the applicability and appropriateness of these developments in mathematics. The ideas ...
Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process
Turner, Douglas C.; Ladde, Gangaram S.
2018-03-01
Analytical solutions, discretization schemes and simulation results are presented for the time-delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended under stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models illustrate changes in the behavior of the very recently developed stochastic model of Hazra et al.
Outbreak and Extinction Dynamics in a Stochastic Ebola Model
Nieddu, Garrett; Bianco, Simone; Billings, Lora; Forgoston, Eric; Kaufman, James
A zoonotic disease is a disease that can be passed between animals and humans. In many cases zoonotic diseases can persist in the animal population even if there are no infections in the human population. In this case we call the infected animal population the reservoir for the disease. Ebola virus disease (EVD) and SARS are both notable examples of such diseases. There is little work devoted to understanding stochastic disease extinction and reintroduction in the presence of a reservoir. Here we build a stochastic model for EVD and explicitly consider the presence of an animal reservoir. Using a master equation approach and a WKB ansatz, we determine the associated Hamiltonian of the system. Hamilton's equations are then used to numerically compute the 12-dimensional optimal path to extinction, which is then used to estimate mean extinction times. We also numerically investigate the behavior of the model for dynamic population size. Our results provide an improved understanding of outbreak and extinction dynamics in diseases like EVD.
Aggregation patterns from nonlocal interactions: Discrete stochastic and continuum modeling
Hackett-Jones, Emily J.
2012-04-17
Conservation equations governed by a nonlocal interaction potential generate aggregates from an initial uniform distribution of particles. We address the evolution and formation of these aggregating steady states when the interaction potential has both attractive and repulsive singularities. Currently, no existence theory for such potentials is available. We develop and compare two complementary solution methods, a continuous pseudoinverse method and a discrete stochastic lattice approach, and formally show a connection between the two. Interesting aggregation patterns involving multiple peaks for a simple doubly singular attractive-repulsive potential are determined. For a swarming Morse potential, characteristic slow-fast dynamics in the scaled inverse energy is observed in the evolution to steady state in both the continuous and discrete approaches. The discrete approach is found to be remarkably robust to modifications in movement rules, related to the potential function. The comparable evolution dynamics and steady states of the discrete model with the continuum model suggest that the discrete stochastic approach is a promising way of probing aggregation patterns arising from two- and three-dimensional nonlocal interaction conservation equations. © 2012 American Physical Society.
The Prandtl-Tomlinson model of friction with stochastic driving
Jagla, E. A.
2018-01-01
We consider the classical Prandtl-Tomlinson model of a particle moving on a corrugated potential, pulled by a spring. In the usual situation in which pulling acts at constant velocity γ̇, the model displays an average friction force σ that relates to γ̇ (for small γ̇) as γ̇ ~ (σ − σ_c)^β, where σ_c is a critical friction force. The possible values of β are well known in terms of the analytical properties of the corrugated potential. We study here the situation in which the pulling has, in addition to the constant velocity term, a stochastic term of mechanical origin. We analytically show how this term modifies the force-velocity dependence close to the critical force, and give the value of β in terms of the analytical properties of the corrugation potential and the scaling properties of the stochastic driving, encoded in the value of its Hurst exponent.
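The constant-velocity baseline of the Prandtl-Tomlinson model is easy to simulate: an overdamped particle in a sinusoidal corrugation dragged by a spring attached to a moving support. Units and parameters below are illustrative, and the paper's stochastic driving term is omitted.

```python
import math

def avg_friction(v, k=1.0, A=1.5, dt=2e-3, T=400.0):
    """Mean spring force on an overdamped Prandtl-Tomlinson particle pulled
    at constant velocity v through the corrugation U(x) = -A*cos(x)."""
    n = int(T / dt)
    x, total = 0.0, 0.0
    for i in range(n):
        f_spring = k * (v * i * dt - x)          # spring between support and particle
        x += dt * (-A * math.sin(x) + f_spring)  # overdamped dynamics: x' = -U'(x) + f
        total += f_spring
    return total / n

# With A > k the particle is in the stick-slip regime and the friction force
# stays finite as v -> 0, approaching the critical force from above
f_slow, f_fast = avg_friction(0.1), avg_friction(5.0)
print(0.0 < f_slow < f_fast)
```

The stochastic driving studied in the paper would add a random term to the support position; the scaling of f_slow toward σ_c as v decreases is what the exponent β characterizes.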
Stochastic Modeling and Optimization in a Microgrid: A Survey
Directory of Open Access Journals (Sweden)
Hao Liang
2014-03-01
The future smart grid is expected to be an interconnected network of small-scale and self-contained microgrids, in addition to a large-scale electric power backbone. By utilizing microsources, such as renewable energy sources and combined heat and power plants, microgrids can supply electrical and heat loads in local areas in an economic and environmentally friendly way. To better accommodate intermittent and weather-dependent renewable power generation, energy storage devices, such as batteries, heat buffers and plug-in electric vehicles (PEVs) with vehicle-to-grid systems, can be integrated in microgrids. However, significant technical challenges arise in the planning, operation and control of microgrids, due to the randomness in renewable power generation, the buffering effect of energy storage devices, and the high mobility of PEVs. The two-way communication functionalities of the future smart grid provide an opportunity to address these challenges, by offering the communication links for microgrid status information collection. However, how to utilize stochastic modeling and optimization tools for efficient, reliable and economic planning, operation and control of microgrids remains an open issue. In this paper, we investigate the key features of microgrids and provide a comprehensive literature survey on the stochastic modeling and optimization tools for a microgrid. Future research directions are also identified.
Maximum likelihood approach for several stochastic volatility models
International Nuclear Information System (INIS)
Camprodon, Jordi; Perelló, Josep
2012-01-01
Volatility measures the amplitude of price fluctuations. Despite being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process where volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, the OU and the Heston stochastic volatility models and we study their performance in terms of the log-price probability, the volatility probability, and the volatility's mean first-passage time. The approach has some predictive power on the future returns amplitude by only knowing the current volatility. The assumed models do not consider long-range volatility autocorrelation and the asymmetric return-volatility cross-correlation, but the method still yields very naturally these two important stylized facts. We apply the method to different market indices, with good performance in all cases. (paper)
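As a building block, the exact Gaussian likelihood of an observed Ornstein-Uhlenbeck process (the log-volatility dynamics underlying the expOU model) has a closed form. The sketch below applies it to simulated data with assumed parameters; the full method treats volatility as latent, which this toy does not.

```python
import numpy as np

def ou_loglik(theta, x, dt):
    """Exact Gaussian log-likelihood of an observed OU process
    dx = -a*(x - m) dt + k dW sampled at spacing dt."""
    a, m, k = theta
    phi = np.exp(-a * dt)
    var = k**2 * (1.0 - phi**2) / (2.0 * a)    # exact transition variance
    resid = x[1:] - (m + phi * (x[:-1] - m))
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + resid**2 / var)

# Simulate an OU path with assumed parameters and compare likelihoods
rng = np.random.default_rng(4)
a, m, k, dt = 2.0, 0.0, 1.0, 0.05
phi = np.exp(-a * dt)
sd = np.sqrt(k**2 * (1.0 - phi**2) / (2.0 * a))
x = np.empty(5000)
x[0] = m
for t in range(len(x) - 1):
    x[t+1] = m + phi * (x[t] - m) + sd * rng.standard_normal()

ll_true = ou_loglik((a, m, k), x, dt)
ll_wrong = ou_loglik((a, m, 2.0 * k), x, dt)   # doubled diffusion coefficient
print(ll_true > ll_wrong)
```

Maximizing `ou_loglik` over `theta` recovers the parameters; in the stochastic-volatility setting the same transition density enters a likelihood over the unobserved volatility path.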
A Stochastic Fractional Dynamics Model of Rainfall Statistics
Kundu, Prasun; Travis, James
2013-04-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous, isotropic, and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment. Some data sets containing periods of non-stationary behavior that involve occasional anomalously correlated rain events present a challenge for the model.
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
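The core loop of such an active-learning calibration — evaluate a batch, fit a cheap surrogate, then query the combinations the surrogate ranks most promising — can be sketched as follows. The 2-D parameter space, the acceptance test, and the nearest-positive surrogate are all illustrative stand-ins for the UWBCS setup, not its actual calibration targets.

```python
import numpy as np

rng = np.random.default_rng(5)
pool = rng.uniform(0.0, 1.0, size=(2000, 2))     # candidate parameter combos

# Hypothetical "expensive simulation": a combo matches observed data if it
# lies near a target region unknown to the algorithm
def matches(p):
    return float(np.linalg.norm(p - 0.5) < 0.1)

taken = [int(i) for i in rng.choice(len(pool), size=50, replace=False)]
labels = [matches(pool[i]) for i in taken]        # initial random batch

for _ in range(10):                               # active-learning rounds
    pos = np.array([pool[i] for i, y in zip(taken, labels) if y > 0.0])
    if len(pos):
        # Cheap surrogate: score each combo by proximity to known-good combos
        d = np.linalg.norm(pool[:, None, :] - pos[None, :, :], axis=-1).min(axis=1)
        scores = -d
    else:
        scores = rng.random(len(pool))            # nothing found yet: explore
    taken_set = set(taken)
    batch = [int(i) for i in np.argsort(-scores) if int(i) not in taken_set][:50]
    labels += [matches(pool[i]) for i in batch]
    taken += batch

print(len(taken), int(sum(labels)))
```

Only 550 of the 2000 combinations are simulated, yet most of the matching region is found; the paper's version replaces the distance surrogate with a trained neural network.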
Developing a new stochastic competitive model regarding inventory and price
Rashid, Reza; Bozorgi-Amiri, Ali; Seyedhoseini, S. M.
2015-09-01
Amid the competition in today's business environment, the design of supply chains becomes more complex than before. This paper deals with the retailer's location problem when customers choose their vendors, and inventory costs have been considered for retailers. In a competitive location problem, price and location of facilities affect demands of customers; consequently, simultaneous optimization of the location and inventory system is needed. To prepare a realistic model, demand and lead time have been assumed to be stochastic parameters, and queuing theory has been used to develop a comprehensive mathematical model. Due to the complexity of the problem, a branch and bound algorithm has been developed, and its performance has been validated in several numerical examples, which indicated the effectiveness of the algorithm. Also, a real case has been presented to demonstrate the performance of the model in the real world.
Using genetic algorithms to calibrate a water quality model.
Liu, Shuming; Butler, David; Brazier, Richard; Heathwaite, Louise; Khu, Soon-Thiam
2007-03-15
With the increasing concern over the impact of diffuse pollution on water bodies, many diffuse pollution models have been developed in the last two decades. A common obstacle in using such models is how to determine the values of the model parameters. This is especially true when a model has a large number of parameters, which makes full-range calibration expensive in terms of computing time. Compared with conventional optimisation approaches, soft computing techniques often have a faster convergence speed and are more efficient for global optimum searches. This paper presents an attempt to calibrate a diffuse pollution model using a genetic algorithm (GA). Designed to simulate the export of phosphorus from diffuse sources (agricultural land) and point sources (human), the Phosphorus Indicators Tool (PIT) version 1.1, on which this paper is based, consists of 78 parameters. Previous studies have indicated the difficulty of full-range model calibration due to the number of parameters involved. In this paper, a GA was employed to carry out the model calibration in which all parameters were involved. A sensitivity analysis was also performed to investigate the impact of operators in the GA on its effectiveness in optimum searching. The calibration yielded satisfactory results and required reasonable computing time. The application of the PIT model to the Windrush catchment with optimum parameter values was demonstrated. The annual P loss was predicted as 4.4 kg P/ha/yr, which showed a good fit to the observed value.
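A minimal real-coded GA of the kind used for such calibrations can be sketched as follows; the operators and the 3-parameter toy objective are illustrative, not the PIT 78-parameter setup.

```python
import numpy as np

def ga_calibrate(loss, lo, hi, pop_size=40, gens=60, rng=None):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, with elitism (illustrative, not the PIT setup)."""
    rng = rng if rng is not None else np.random.default_rng()
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(gens):
        fit = np.array([loss(p) for p in pop])

        def tournament():
            i, j = rng.choice(pop_size, size=2, replace=False)
            return i if fit[i] < fit[j] else j

        new = [pop[fit.argmin()].copy()]             # elitism: keep the best
        while len(new) < pop_size:
            a, b = tournament(), tournament()
            w = rng.random(dim)
            child = w * pop[a] + (1.0 - w) * pop[b]  # blend crossover
            child += 0.05 * (hi - lo) * rng.standard_normal(dim)  # mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([loss(p) for p in pop])
    return pop[fit.argmin()]

# Toy "calibration": recover hypothetical parameters minimizing squared error
true = np.array([0.3, 0.7, 0.5])
best = ga_calibrate(lambda p: np.sum((p - true) ** 2),
                    np.zeros(3), np.ones(3), rng=np.random.default_rng(6))
print(best)
```

A real model calibration would replace the squared-error lambda with a run of the simulation model compared against observations, which is exactly where the GA's modest evaluation budget pays off.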
Bayesian inference method for stochastic damage accumulation modeling
International Nuclear Information System (INIS)
Jiang, Xiaomo; Yuan, Yong; Liu, Xian
2013-01-01
Damage-accumulation-based reliability models play an increasingly important role in the successful realization of condition-based maintenance for complicated engineering systems. This paper develops a Bayesian framework to establish a stochastic damage accumulation model from historical inspection data, considering data uncertainty. A proportional hazards modeling technique is developed to model the nonlinear effect of multiple influencing factors on system reliability. Different from other hazard modeling techniques such as the normal linear regression model, the approach does not require any distribution assumption for the hazard model, and can be applied for a wide variety of distribution models. A Bayesian network is created to represent the nonlinear proportional hazards models and to estimate model parameters by Bayesian inference with Markov chain Monte Carlo simulation. Both qualitative and quantitative approaches are developed to assess the validity of the established damage accumulation model. The Anderson–Darling goodness-of-fit test is employed to perform the normality test, and the Box–Cox transformation approach is utilized to convert non-normal data into a normal distribution for hypothesis testing in quantitative model validation. The methodology is illustrated with seepage data collected from real-world subway tunnels.
The Long Time Behavior of a Stochastic Logistic Model with Infinite Delay and Impulsive Perturbation
Lu, Chun; Wu, Kaining
2016-01-01
This paper considers a stochastic logistic model with infinite delay and impulsive perturbation. Firstly, with the space $C_{g}$ as phase space, the definition of a solution to a stochastic functional differential equation with infinite delay and impulsive perturbation is established. According to this definition, we show that our model has a unique global positive solution. Then we establish sufficient and necessary conditions for extinction and stochastic permanence of the...
Prediction of interest rate using CKLS model with stochastic parameters
International Nuclear Information System (INIS)
Ying, Khor Chia; Hin, Pooi Ah
2014-01-01
The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated by the j-th window, defined as the set consisting of the observed interest rates at the j′-th time points with j≤j′≤j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_{j+n} at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_{j+n+1} of the interest rate at the next time point when the value r_{j+n} of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_{j+n+d} at the next d-th (d≥2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to have better ability of covering the observed future interest rates when compared with those based on the model with fixed parameters
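For context, a CKLS short-rate path with fixed parameters can be simulated by Euler-Maruyama discretization of dr = (α + βr)dt + σ r^γ dW; the parameter values below are illustrative, and the paper's stochastic-parameter windowing is not reproduced.

```python
import numpy as np

def simulate_ckls(r0, alpha, beta, sigma, gamma, dt, n, rng):
    """Euler-Maruyama discretization of the CKLS short-rate SDE
    dr = (alpha + beta*r) dt + sigma * r**gamma dW."""
    r = np.empty(n + 1)
    r[0] = r0
    for t in range(n):
        dW = np.sqrt(dt) * rng.standard_normal()
        r[t+1] = r[t] + (alpha + beta * r[t]) * dt + sigma * r[t]**gamma * dW
        r[t+1] = max(r[t+1], 1e-8)             # crude positivity floor
    return r

rng = np.random.default_rng(7)
# Illustrative parameters: mean reversion toward alpha/(-beta) = 0.05
path = simulate_ckls(r0=0.05, alpha=0.01, beta=-0.2, sigma=0.1,
                     gamma=1.5, dt=1.0/252.0, n=2520, rng=rng)
print(round(path.mean(), 4))
```

The stochastic-parameter version of the model would redraw (α, β, σ, γ) at each window from the fitted conditional distribution before stepping the rate forward.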
The Asymptotic Behaviour of a Stochastic 3D LANS-α Model
International Nuclear Information System (INIS)
Caraballo, Tomas; Marquez-Duran, Antonio M.; Real, Jose
2006-01-01
The long-time behaviour of a stochastic 3D LANS-α model on a bounded domain is analysed. First, we reformulate the model as an abstract problem. Next, we establish sufficient conditions ensuring the existence of stationary (steady state) solutions of this abstract nonlinear stochastic evolution equation, and study the stability properties of the model. Finally, we analyse the effects produced by stochastic perturbations in the deterministic version of the system (persistence of exponential stability as well as possible stabilisation effects produced by the noise). The general results are applied to our stochastic LANS-α system throughout the paper
Insights into pre-reversal paleosecular variation from stochastic models
Directory of Open Access Journals (Sweden)
Klaudio ePeqini
2015-09-01
To provide insights on the paleosecular variation of the geomagnetic field and the mechanism of reversals, long time series of the dipolar magnetic moment are generated by two different stochastic models, known as the domino model and the inhomogeneous Lebovitz disk dynamo model, with initial values taken from paleomagnetic data. The former model considers mutual interactions of N macrospins embedded in a uniformly rotating medium, where random forcing and dissipation act on each macrospin. With an appropriate set of the model's parameter values, the series generated by this model have statistical behaviour similar to the time series of the SHA.DIF.14K model. The latter model is an extension of the classical two-disk Rikitake model, considering N dynamo elements with appropriate interactions between them. We varied the parameter sets of both models, aiming at generating suitable time series with behaviour similar to the long time series of recent secular variation (SV). Such series are then extended to the near future, obtaining reversals in both models. The analysis of the time series generated by simulating the models shows that reversals appear after a persistent period of low-intensity geomagnetic field, as is occurring at present.
Estimation of some stochastic models used in reliability engineering
International Nuclear Information System (INIS)
Huovinen, T.
1989-04-01
The work aims to study the estimation of some stochastic models used in reliability engineering. In reliability engineering, continuous probability distributions have been used as models for the lifetime of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for analysis of failure intensity. We also study a beta-binomial model for analysis of failure probability. The parameters of these three models are estimated by the method of matching moments and, in the case of the gamma-Poisson and beta-binomial models, also by the maximum likelihood method. Many mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure phenomena of a set of components with a Weibull intensity function. We use the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system. We consider a binomial failure rate (BFR) model as an application of marked point processes for modelling common cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method.
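One of the simplest estimators in this family — the MLE of an exponential failure rate under right censoring, i.e. the number of observed failures divided by the total time on test — can be checked numerically. The censoring scheme and rate below are illustrative assumptions.

```python
import numpy as np

def exp_rate_mle(times, observed):
    """MLE of an exponential failure rate with right-censored observations:
    lambda_hat = (number of observed failures) / (total time on test)."""
    return observed.sum() / times.sum()

rng = np.random.default_rng(8)
true_rate = 0.5
life = rng.exponential(1.0 / true_rate, size=10000)   # latent lifetimes
censor_time = 3.0                                     # fixed (type-I) censoring
times = np.minimum(life, censor_time)
observed = life <= censor_time                        # failure seen before censoring
rate_hat = exp_rate_mle(times, observed)
print(round(rate_hat, 3))
```

The same total-time-on-test logic generalizes to the Weibull case, where the MLE requires numerically solving the shape-parameter score equation rather than a closed form.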
Epigenetics and Evolution: Transposons and the Stochastic Epigenetic Modification Model
Directory of Open Access Journals (Sweden)
Sergio Branciamore
2015-04-01
In addition to genetic variation, epigenetic variation and transposons can greatly affect the evolutionary fitness landscape and gene expression. Previously we proposed a mathematical treatment of a general epigenetic variation model that we called the Stochastic Epigenetic Modification (SEM) model. In this study we follow up with a special case, the Transposon Silencing Model (TSM), with, once again, emphasis on quantitative treatment. We have investigated the evolutionary effects of epigenetic changes due to transposon (T) insertions; in particular, we have considered a typical gene locus A and postulated that (i) the expression level of gene A depends on the epigenetic state (active or inactive) of a cis-located transposon element T, (ii) stochastic variability in the epigenetic silencing of T occurs only in a short window of opportunity during development, (iii) the epigenetic state is then stable during further development, and (iv) the epigenetic memory is fully reset at each generation. We develop the model using two complementary approaches: a standard analytical population genetics framework (diffusion equations) and Monte Carlo simulations. Both approaches lead to similar estimates for the probability of fixation and time of fixation of locus TA with initial frequency P in a randomly mating diploid population of effective size Ne. We have ascertained the effect that ρ, the probability of transposon modification during the developmental window, has on the population (species). One of our principal conclusions is that as ρ increases, the pattern of fixation of the combined TA locus goes from "neutral" to "dominant" to "over-dominant". We observe that, under realistic values of ρ, epigenetic modifications can provide an efficient mechanism for more rapid fixation of transposons and cis-located gene alleles. The results obtained suggest that epigenetic silencing, even if strictly transient (being reset at each generation), can still have significant…
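The fixation probabilities discussed above can be estimated by straightforward Wright-Fisher Monte Carlo. The sketch below is a generic stand-in for the paper's TA-locus simulations (the selection parameterization and all values are assumptions); in the neutral case the estimate should approach the initial frequency:

```python
import random

def fixation_probability(p0=0.1, n_e=50, s=0.0, trials=1000, seed=7):
    """Monte-Carlo estimate of the fixation probability of an allele with
    initial frequency p0 in a Wright-Fisher population of 2*n_e gene
    copies; s is a selection coefficient (s=0 is the neutral case)."""
    rng = random.Random(seed)
    two_n = 2 * n_e
    fixed = 0
    for _ in range(trials):
        p = p0
        while 0.0 < p < 1.0:
            # selection biases the sampling frequency, drift does the rest
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
            k = sum(1 for _ in range(two_n) if rng.random() < p_sel)
            p = k / two_n
        fixed += int(p == 1.0)
    return fixed / trials

p_fix = fixation_probability()   # neutral theory predicts roughly p0 = 0.1
```

Raising s (mimicking a fitness effect of the modification probability ρ) shifts the estimate above the neutral value, which is the qualitative pattern the abstract describes.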
Quasi-continuous stochastic simulation framework for flood modelling
Moustakis, Yiannis; Kossieris, Panagiotis; Tsoukalas, Ioannis; Efstratiadis, Andreas
2017-04-01
Typically, flood modelling in the context of everyday engineering practice is addressed through event-based deterministic tools, e.g., the well-known SCS-CN method. A major shortcoming of such approaches is that they ignore uncertainty, which is associated with the variability of soil moisture conditions and the variability of rainfall during the storm event. In event-based modelling, the sole expression of uncertainty is the return period of the design storm, which is assumed to represent the acceptable risk of all output quantities (flood volume, peak discharge, etc.). On the other hand, the varying antecedent soil moisture conditions across the basin are represented by means of scenarios (e.g., the three AMC types by SCS), while the temporal distribution of rainfall is represented through standard deterministic patterns (e.g., the alternating blocks method). In order to address these major inconsistencies, while preserving the simplicity and parsimony of the SCS-CN method, we have developed a quasi-continuous stochastic simulation approach comprising the following steps: (1) generation of synthetic daily rainfall time series; (2) update of the potential maximum soil moisture retention, on the basis of accumulated five-day rainfall; (3) estimation of daily runoff through the SCS-CN formula, using as inputs the daily rainfall and the updated value of soil moisture retention; (4) selection of extreme events and application of the standard SCS-CN procedure for each specific event, on the basis of the synthetic rainfall. This scheme requires the use of two stochastic modelling components, namely the CastaliaR model, for the generation of synthetic daily data, and the HyetosMinute model, for the disaggregation of daily rainfall to finer temporal scales. Outcomes of this approach are a large number of synthetic flood events, allowing the design variables to be expressed in statistical terms and the flood risk to be properly evaluated.
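Steps (2) and (3) of the scheme can be sketched directly from the standard SCS-CN relations. The AMC-adjustment formulas below are the common textbook conversions between curve numbers, and the thresholds (in mm) are illustrative conventions, not necessarily the exact variant used by the authors:

```python
def scs_cn_runoff(p_mm, cn):
    """Daily runoff (mm) from the SCS-CN formula with the standard
    initial abstraction Ia = 0.2*S."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

def adjust_cn(cn2, antecedent_5day_mm, wet=53.0, dry=13.0):
    """Crude AMC adjustment: pick dry (AMC I), average (AMC II) or wet
    (AMC III) curve number from 5-day antecedent rainfall.
    Thresholds in mm are illustrative."""
    if antecedent_5day_mm < dry:
        return cn2 / (2.281 - 0.01281 * cn2)      # AMC I conversion
    if antecedent_5day_mm > wet:
        return cn2 / (0.427 + 0.00573 * cn2)      # AMC III conversion
    return cn2                                     # AMC II as-is

# quasi-continuous loop over a synthetic daily rainfall series (mm)
rain = [0, 0, 12, 0, 30, 45, 2, 0, 0, 60]
cn2 = 75.0
runoff = []
for day, p in enumerate(rain):
    antecedent = sum(rain[max(0, day - 5):day])   # accumulated 5-day rain
    runoff.append(scs_cn_runoff(p, adjust_cn(cn2, antecedent)))
```

Note how the same 60 mm storm produces more runoff after the wet spell than it would under average conditions, which is exactly the soil moisture variability the quasi-continuous scheme captures.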
A Generic Software Framework for Data Assimilation and Model Calibration
Van Velzen, N.
2010-01-01
The accuracy of dynamic simulation models can be increased by using observations in conjunction with a data assimilation or model calibration algorithm. However, implementing such algorithms usually increases the complexity of the model software significantly. By using concepts from object-oriented…
Stochastic models for spike trains of single neurons
Sampath, G
1977-01-01
Contents (abridged): 1 Some basic neurophysiology 4; 1.1 The neuron 4; 1.1.1 The axon 7; 1.1.2 The synapse 9; 1.1.3 The soma 12; 1.1.4 The dendrites 13; 1.2 Types of neurons 13; 2 Signals in the nervous system 14; 2.1 Action potentials as point events - point processes in the nervous system 15; 2.2 Spontaneous activity in neurons 18; 3 Stochastic modelling of single neuron spike trains 19; 3.1 Characteristics of a neuron spike train 19; 3.2 The mathematical neuron 23; 4 Superposition models 26; 4.1 Superposition of renewal processes 26; 4.2 Superposition of stationary point processes - limiting behaviour 34; 4.2.1 Palm functions 35; 4.2.2 Asymptotic behaviour of n stationary point processes superposed 36; 4.3 Superposition models of neuron spike trains 37; 4.3.1 Model 4.1 39; 4.3.2 Model 4.2 - a superposition model with two input channels 40; 4.3.3 Model 4.3 40; 4.4 Discussion 41; 5 Deletion models 43; 5.1 Deletion models with independent interaction of excitatory and inhibitory sequences 44; 5.1.1 Model 5.1 - the basic de…
A mathematical model for camera calibration based on straight lines
Directory of Open Access Journals (Sweden)
Antonio M. G. Tommaselli
2005-12-01
In order to facilitate the automation of the camera calibration process, a mathematical model using straight lines was developed, based on the equivalent planes mathematical model. Parameter estimation for the developed model is achieved by the Least Squares Method with Conditions and Observations. The same method of adjustment was used to implement camera calibration with bundles, which is based on points. Experiments using simulated and real data have shown that the developed model based on straight lines gives results comparable to the conventional method with points. Details concerning the mathematical development of the model and experiments with simulated and real data will be presented, and the results of both methods of camera calibration, with straight lines and with points, will be compared.
Setting development goals using stochastic dynamical system models.
Ranganathan, Shyam; Nicolis, Stamatios C; Bali Swain, Ranjula; Sumpter, David J T
2017-01-01
The Millennium Development Goals (MDG) programme was an ambitious attempt to encourage a globalised solution to important but often-overlooked development problems. The programme led to wide-ranging development, but it has also been criticised for unrealistic and arbitrary targets. In this paper, we show how country-specific development targets can be set using stochastic dynamical system models built from historical data. In particular, we show that the MDG target of a two-thirds reduction of child mortality from 1990 levels was infeasible for most countries, especially in sub-Saharan Africa. At the same time, the MDG targets were not ambitious enough for fast-developing countries such as Brazil and China. We suggest that model-based setting of country-specific targets is essential for the success of global development programmes such as the Sustainable Development Goals (SDG). This approach should provide clear, quantifiable targets for policymakers.
Modeling collective emotions: a stochastic approach based on Brownian agents
International Nuclear Information System (INIS)
Schweitzer, F.
2010-01-01
We develop an agent-based framework to model the emergence of collective emotions, which is applied to online communities. Agents' individual emotions are described by their valence and arousal. Using the concept of Brownian agents, these variables change according to stochastic dynamics, which also consider the feedback from online communication. Agents generate emotional information, which is stored and distributed in a field modeling the online medium. This field affects the emotional states of agents in a non-linear manner. We derive conditions for the emergence of collective emotions, observable in a bimodal valence distribution. Depending on a saturated or a superlinear feedback between the information field and the agents' arousal, we further identify scenarios where collective emotions appear only once or in a repeated manner. The analytical results are illustrated by agent-based computer simulations. Our framework provides testable hypotheses about the emergence of collective emotions, which can be verified by data from online communities. (author)
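A minimal sketch of the Brownian-agent dynamics might look as follows; the coupling function, coefficients, and field dynamics are illustrative assumptions, not the paper's equations:

```python
import random, math

def simulate_emotions(n_agents=100, steps=2000, dt=0.05, decay=0.5,
                      feedback=0.8, noise=0.3, seed=3):
    """Minimal sketch of a Brownian-agent emotion model.  Each agent's
    valence v_i relaxes toward zero, is driven by a shared information
    field h (the 'online medium'), and receives Gaussian kicks.  The
    field integrates the mean expressed valence and decays slowly.
    All functional forms and coefficients are illustrative."""
    rng = random.Random(seed)
    v = [rng.gauss(0.0, 0.1) for _ in range(n_agents)]
    h = 0.0
    for _ in range(steps):
        h += dt * (-0.2 * h + sum(v) / n_agents)      # field dynamics
        for i in range(n_agents):
            # saturated (tanh) feedback from the field to each agent
            drift = -decay * v[i] + feedback * math.tanh(h)
            v[i] += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return v, h

valence, field = simulate_emotions()
mean_valence = sum(valence) / len(valence)
```

Replacing the saturated tanh coupling with a superlinear one is the kind of change that, per the abstract, switches the system between one-off and recurring collective emotions.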
Stochastic motion of a particle in a model fluctuating medium
International Nuclear Information System (INIS)
Moreau, M.; Gaveau, B.; Perera, A.; Frankowicz, M.
1993-01-01
We present several models of time-fluctuating media with finite memory, consisting of one- and two-dimensional lattices whose nodes fluctuate between two internal states according to a Poisson process. A particle moves on the lattice, the diffusion by the nodes depending on their internal state. Such models can be used for the microscopic theory of reaction constants in a dense phase, or for the study of diffusion or reactivity in a complex medium. In a number of cases, the transmission probability of the medium is computed exactly; it is shown that stochastic resonances can occur, an optimal transmission being obtained for a suitable choice of parameters. In more general situations, approximate solutions are given in the case of short and moderate memory of the obstacles. Diffusion in an infinite two-dimensional lattice is studied, and the memory is shown to affect the distribution of the particles rather than the diffusion law. (author). 25 refs, 5 figs
On the stochastic dynamics of disordered spin models
International Nuclear Information System (INIS)
Semerjian, G.; Montanari, A.; Cugliandolo, L.F.
2003-09-01
In this article we discuss several aspects of the stochastic dynamics of spin models. The paper has two independent parts. Firstly, we explore a few properties of the multi-point correlations and responses of generic systems evolving in equilibrium with a thermal bath. We propose a fluctuation principle that allows us to derive fluctuation-dissipation relations for many-time correlations and linear responses. We also speculate on how these features will be modified in systems evolving slowly out of equilibrium, as finite-dimensional or dilute spin-glasses. Secondly, we present a formalism that allows one to derive a series of approximated equations that determine the dynamics of disordered spin models on random (hyper) graphs. (author)
Stochastic modeling of mode interactions via linear parabolized stability equations
Ran, Wei; Zare, Armin; Hack, M. J. Philipp; Jovanovic, Mihailo
2017-11-01
Low-complexity approximations of the Navier-Stokes equations have been widely used in the analysis of wall-bounded shear flows. In particular, the parabolized stability equations (PSE) and Floquet theory have been employed to capture the evolution of primary and secondary instabilities in spatially-evolving flows. We augment linear PSE with Floquet analysis to formally treat modal interactions and the evolution of secondary instabilities in the transitional boundary layer via a linear progression. To this end, we leverage Floquet theory by incorporating the primary instability into the base flow and accounting for different harmonics in the flow state. A stochastic forcing is introduced into the resulting linear dynamics to model the effect of nonlinear interactions on the evolution of modes. We examine the H-type transition scenario to demonstrate how our approach can be used to model nonlinear effects and capture the growth of the fundamental and subharmonic modes observed in direct numerical simulations and experiments.
GPU Computing in Bayesian Inference of Realized Stochastic Volatility Model
International Nuclear Information System (INIS)
Takaishi, Tetsuya
2015-01-01
The realized stochastic volatility (RSV) model, which utilizes the realized volatility as additional information, has been proposed to infer the volatility of financial time series. We consider Bayesian inference of the RSV model by the Hybrid Monte Carlo (HMC) algorithm. The HMC algorithm can be parallelized and thus performed on a GPU for speedup. The GPU code is developed with CUDA Fortran. We compare the computational time of the HMC algorithm on a GPU (GTX 760) and a CPU (Intel i7-4770, 3.4 GHz) and find that the GPU can be up to 17 times faster than the CPU. We also code the program with OpenACC and find that appropriate coding can achieve a speedup similar to that of CUDA Fortran.
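The HMC kernel itself is compact. A minimal one-dimensional version (standard normal target, illustrative step sizes, not the RSV posterior) shows the leapfrog-plus-Metropolis skeleton whose gradient evaluations are what a GPU implementation parallelizes:

```python
import random, math

def hmc(n_samples=3000, eps=0.2, n_leap=15, seed=11):
    """Minimal Hamiltonian (Hybrid) Monte Carlo sampler for a standard
    normal target N(0, 1).  Step size and trajectory length are
    illustrative choices."""
    log_prob = lambda x: -0.5 * x * x
    grad     = lambda x: -x
    rng = random.Random(seed)
    x, samples, accepted = 0.0, [], 0
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)            # resample momentum
        xn, pn = x, p
        pn += 0.5 * eps * grad(xn)         # leapfrog: opening half step
        for _ in range(n_leap - 1):
            xn += eps * pn
            pn += eps * grad(xn)
        xn += eps * pn
        pn += 0.5 * eps * grad(xn)         # leapfrog: closing half step
        dh = (0.5 * pn * pn - log_prob(xn)) - (0.5 * p * p - log_prob(x))
        if math.log(rng.random() + 1e-300) < -dh:   # Metropolis test
            x, accepted = xn, accepted + 1
        samples.append(x)
    return samples, accepted / n_samples

samples, acc_rate = hmc()
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, acceptance rates stay high even with long trajectories, which is what makes HMC attractive for the high-dimensional RSV posterior.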
Learning-based stochastic object models for characterizing anatomical variations
Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua
2018-03-01
It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer-simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data corresponding to a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.
Stochastic process corrosion growth models for pipeline reliability
International Nuclear Information System (INIS)
Bazán, Felipe Alexander Vargas; Beck, André Teófilo
2013-01-01
Highlights: •Novel non-linear stochastic process corrosion growth model is proposed. •Corrosion rate modeled as random Poisson pulses. •Time to corrosion initiation and inherent time-variability properly represented. •Continuous corrosion growth histories obtained. •Model is shown to precisely fit actual corrosion data at two time points. -- Abstract: Linear random variable corrosion models are extensively employed in reliability analysis of pipelines. However, linear models grossly neglect well-known characteristics of the corrosion process. Herein, a non-linear model is proposed, in which the corrosion rate is represented as a Poisson square wave process. The resulting model represents the inherent time-variability of corrosion growth, produces continuous growth histories and leads to mean growth at a less-than-one power of time. Different corrosion models are adjusted to the same set of actual corrosion data for two inspections. The proposed non-linear random process corrosion growth model leads to the best fit to the data, while better representing the problem physics.
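The proposed rate process can be sketched as a Poisson square wave: the corrosion rate holds a random level for an exponentially distributed sojourn, then jumps to a fresh independent level, while depth accumulates as the time integral of the rate. All parameter values below are illustrative:

```python
import random

def corrosion_history(t_end=25.0, jump_rate=0.5, mean_rate=0.1, seed=5):
    """Corrosion depth as the integral of a Poisson square-wave rate:
    the rate holds an exponentially distributed level for an exponential
    sojourn time (Poisson jump epochs), then renews.  Units and
    parameter values are illustrative (e.g. years and mm/year)."""
    rng = random.Random(seed)
    t, depth = 0.0, 0.0
    history = [(0.0, 0.0)]                       # (time, depth) pairs
    rate = rng.expovariate(1.0 / mean_rate)      # initial rate level
    while True:
        dt = rng.expovariate(jump_rate)          # sojourn in this level
        if t + dt >= t_end:
            depth += rate * (t_end - t)
            history.append((t_end, depth))
            break
        depth += rate * dt                       # continuous growth
        t += dt
        history.append((t, depth))
        rate = rng.expovariate(1.0 / mean_rate)  # jump to a new level
    return history

hist = corrosion_history()
depths = [d for _, d in hist]
```

Each realization is a continuous, non-decreasing growth history, unlike the linear random-variable models the abstract criticizes, which fix a single rate for all time.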
Stochastic linear hybrid systems: Modeling, estimation, and application
Seah, Chze Eng
Hybrid systems are dynamical systems which have interacting continuous state and discrete state (or mode). Accurate modeling and state estimation of hybrid systems are important in many applications. We propose a hybrid system model, known as the Stochastic Linear Hybrid System (SLHS), to describe hybrid systems with stochastic linear system dynamics in each mode and stochastic continuous-state-dependent mode transitions. We then develop a hybrid estimation algorithm, called the State-Dependent-Transition Hybrid Estimation (SDTHE) algorithm, to estimate the continuous state and discrete state of the SLHS from noisy measurements. It is shown that the SDTHE algorithm is more accurate or more computationally efficient than existing hybrid estimation algorithms. Next, we develop a performance analysis algorithm to evaluate the performance of the SDTHE algorithm in a given operating scenario. We also investigate sufficient conditions for the stability of the SDTHE algorithm. The proposed SLHS model and SDTHE algorithm are shown to be useful in several applications. In Air Traffic Control (ATC), to facilitate implementations of new efficient operational concepts, accurate modeling and estimation of aircraft trajectories are needed. In ATC, an aircraft's trajectory can be divided into a number of flight modes. Furthermore, as the aircraft is required to follow a given flight plan or clearance, its flight mode transitions are dependent on its continuous state. However, the flight mode transitions are also stochastic due to navigation uncertainties or unknown pilot intents. Thus, we develop an aircraft dynamics model in ATC based on the SLHS. The SDTHE algorithm is then used in aircraft tracking applications to estimate the positions/velocities of aircraft and their flight modes accurately. Next, we develop an aircraft conformance monitoring algorithm to detect any deviations of aircraft trajectories in ATC that might compromise safety. In this application, the SLHS…
International Nuclear Information System (INIS)
Valor, A.; Caleyo, F.; Alfonso, L.; Rivas, D.; Hallen, J.M.
2007-01-01
In this work, a new stochastic model capable of simulating pitting corrosion is developed and validated. Pitting corrosion is modeled as the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time for pit initiation is simulated as the realization of a Weibull process. In this way, the exponential and Weibull distributions can be considered as the possible distributions for pit initiation time. Pit growth is simulated using a nonhomogeneous Markov process. Extreme value statistics is used to find the distribution of maximum pit depths resulting from the combination of the initiation and growth processes for multiple pits. The proposed model is validated using several published experiments on pitting corrosion. It is capable of reproducing the experimental observations with higher quality than the stochastic models available in the literature for pitting corrosion
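A stripped-down version of the two-stage model is easy to simulate: Weibull induction times for pit initiation, followed by growth (here a deterministic power law stands in for the nonhomogeneous Markov growth process), with the maximum taken over all pits as in the extreme-value step. All parameters are illustrative:

```python
import random

def max_pit_depth(n_pits=20, t_obs=10.0, shape=2.0, scale=5.0,
                  growth_k=0.3, growth_exp=0.5, seed=9):
    """Sketch of the two-stage pitting model: each pit's induction time
    is Weibull(scale, shape); once initiated, depth grows as
    k*(t - t0)**nu, a deterministic stand-in for the paper's Markov
    growth process.  Returns the deepest pit at time t_obs (units are
    illustrative, e.g. years and mm)."""
    rng = random.Random(seed)
    deepest = 0.0
    for _ in range(n_pits):
        t0 = rng.weibullvariate(scale, shape)   # pit induction time
        if t0 < t_obs:
            depth = growth_k * (t_obs - t0) ** growth_exp
            deepest = max(deepest, depth)
    return deepest

# extreme-value flavour: the maximum depth grows with the number of pits
d_small = max_pit_depth(n_pits=20)
d_large = max_pit_depth(n_pits=2000)
```

Repeating this over many realizations yields the distribution of maximum pit depth, which is the quantity the model validates against published experiments.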
Energy Technology Data Exchange (ETDEWEB)
Valor, A. [Facultad de Fisica, Universidad de La Habana, San Lazaro y L, Vedado, 10400 Havana (Cuba); Caleyo, F. [Departamento de Ingenieria, Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico)]. E-mail: fcaleyo@gmail.com; Alfonso, L. [Departamento de Ingenieria, Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico); Rivas, D. [Departamento de Ingenieria, Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico); Hallen, J.M. [Departamento de Ingenieria, Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico)
2007-02-15
In this work, a new stochastic model capable of simulating pitting corrosion is developed and validated. Pitting corrosion is modeled as the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time for pit initiation is simulated as the realization of a Weibull process. In this way, the exponential and Weibull distributions can be considered as the possible distributions for pit initiation time. Pit growth is simulated using a nonhomogeneous Markov process. Extreme value statistics is used to find the distribution of maximum pit depths resulting from the combination of the initiation and growth processes for multiple pits. The proposed model is validated using several published experiments on pitting corrosion. It is capable of reproducing the experimental observations with higher quality than the stochastic models available in the literature for pitting corrosion.
Model calibration and beam control systems for storage rings
International Nuclear Information System (INIS)
Corbett, W.J.; Lee, M.J.; Ziemann, V.
1993-04-01
Electron beam storage rings and linear accelerators are rapidly gaining worldwide popularity as scientific devices for the production of high-brightness synchrotron radiation. Today, everybody agrees that there is a premium on calibrating the storage ring model and determining errors in the machine as soon as possible after the beam is injected. In addition, the accurate optics model enables machine operators to predictably adjust key performance parameters, and allows reliable identification of new errors that occur during operation of the machine. Since the need for model calibration and beam control systems is common to all storage rings, software packages should be made that are portable between different machines. In this paper, we report on work directed toward achieving in-situ calibration of the optics model, detection of alignment errors, and orbit control techniques, with an emphasis on developing a portable system incorporating these tools
The cost of uniqueness in groundwater model calibration
Moore, Catherine; Doherty, John
2006-04-01
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights to be gained into the loss of system detail incurred through the calibration process. A comparison of pre- and post-calibration…
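The weighted-average interpretation can be made concrete with the model resolution matrix of Tikhonov-regularized inversion: in the noise-free case the estimated field equals the resolution matrix applied to the true field, so each estimated value is a weighted average of true values and a localized anomaly is smeared and damped. A small synthetic sketch (the forward model and all values are invented for illustration):

```python
import numpy as np

# Forward model: each datum is a local Gaussian average of the parameter
# field, mimicking measurements that sense hydraulic properties near wells.
n_par, n_obs = 20, 6
centers = np.linspace(2.0, 17.0, n_obs)          # "well" locations
x = np.arange(n_par, dtype=float)
G = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / 2.0) ** 2)

m_true = np.zeros(n_par)
m_true[12] = 1.0                  # a localized conductivity anomaly
d = G @ m_true                    # noise-free synthetic data

lam = 0.1                         # Tikhonov regularization weight
GtG = G.T @ G
A = GtG + lam * np.eye(n_par)
R = np.linalg.solve(A, GtG)       # model resolution matrix
m_est = np.linalg.solve(A, G.T @ d)
```

Each row of `R` is an averaging kernel: `m_est` equals `R @ m_true` exactly here, and because every entry of `R` is below one, the recovered anomaly peak is always lower than the true one, which is precisely the "cost of uniqueness".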
Quantifying intrinsic and extrinsic variability in stochastic gene expression models.
Singh, Abhyudai; Soltani, Mohammad
2013-01-01
Genetically identical cell populations exhibit considerable intercellular variation in the level of a given protein or mRNA. Both intrinsic and extrinsic sources of noise drive this variability in gene expression. More specifically, extrinsic noise is the expression variability that arises from cell-to-cell differences in cell-specific factors such as enzyme levels, cell size and cell cycle stage. In contrast, intrinsic noise is the expression variability that is not accounted for by extrinsic noise, and typically arises from the inherent stochastic nature of biochemical processes. Two-color reporter experiments are employed to decompose expression variability into its intrinsic and extrinsic noise components. Analytical formulas for intrinsic and extrinsic noise are derived for a class of stochastic gene expression models, where variations in cell-specific factors cause fluctuations in model parameters, in particular, transcription and/or translation rate fluctuations. Assuming mRNA production occurs in random bursts, transcription rate is represented by either the burst frequency (how often the bursts occur) or the burst size (number of mRNAs produced in each burst). Our analysis shows that fluctuations in the transcription burst frequency enhance extrinsic noise but do not affect the intrinsic noise. On the contrary, fluctuations in the transcription burst size or mRNA translation rate dramatically increase both intrinsic and extrinsic noise components. Interestingly, simultaneous fluctuations in transcription and translation rates arising from randomness in ATP abundance can decrease intrinsic noise measured in a two-color reporter assay. Finally, we discuss how these formulas can be combined with single-cell gene expression data from two-color reporter experiments for estimating model parameters.
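The two-color decomposition can be applied directly to paired reporter measurements with the familiar estimators (intrinsic noise from the mean squared reporter difference, extrinsic from the normalized covariance). The sketch below runs them on synthetic cells with a shared lognormal extrinsic factor; all distributions and values are illustrative:

```python
import random

def noise_decomposition(x, y):
    """Two-color reporter decomposition: x and y are the two reporter
    levels measured in the same cells.
        eta_int^2 = <(x - y)^2> / (2 <x><y>)
        eta_ext^2 = (<xy> - <x><y>) / (<x><y>)"""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    mxy = sum(a * b for a, b in zip(x, y)) / n
    eta_int2 = sum((a - b) ** 2 for a, b in zip(x, y)) / (2.0 * n * mx * my)
    eta_ext2 = (mxy - mx * my) / (mx * my)
    return eta_int2, eta_ext2

# synthetic cells: a shared (extrinsic) factor scales both reporters,
# while independent (intrinsic) noise perturbs each reporter separately
rng = random.Random(2)
x, y = [], []
for _ in range(20000):
    extrinsic = rng.lognormvariate(0.0, 0.2)    # cell-specific factor
    x.append(100.0 * extrinsic * rng.lognormvariate(0.0, 0.1))
    y.append(100.0 * extrinsic * rng.lognormvariate(0.0, 0.1))

eta_int2, eta_ext2 = noise_decomposition(x, y)
```

With these inputs the extrinsic component dominates, mirroring the abstract's point that fluctuations in shared cell-specific factors show up in the extrinsic term while reporter-independent randomness shows up in the intrinsic term.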
A stochastic MILP energy planning model incorporating power market dynamics
International Nuclear Information System (INIS)
Koltsaklis, Nikolaos E.; Nazos, Konstantinos
2017-01-01
Highlights: •Stochastic MILP model for the optimal energy planning of a power system. •Power market dynamics (offers/bids) are incorporated in the proposed model. •Monte Carlo method for capturing the uncertainty of some key parameters. •Analytical supply cost composition per power producer and activity. •Clean dark and spark spreads are calculated for each power unit. -- Abstract: This paper presents an optimization-based methodological approach to address the problem of the optimal planning of a power system at an annual level in competitive and uncertain power markets. More specifically, a stochastic mixed integer linear programming model (MILP) has been developed, combining advanced optimization techniques with Monte Carlo method in order to deal with uncertainty issues. The main focus of the proposed framework is the dynamic formulation of the strategy followed by all market participants in volatile market conditions, as well as detailed economic assessment of the power system’s operation. The applicability of the proposed approach has been tested on a real case study of the interconnected Greek power system, quantifying in detail all the relevant technical and economic aspects of the system’s operation. The proposed work identifies in the form of probability distributions the optimal power generation mix, electricity trade at a regional level, carbon footprint, as well as detailed total supply cost composition, according to the assumed market structure. The paper demonstrates that the proposed optimization approach is able to provide important insights into the appropriate energy strategies designed by market participants, as well as on the strategic long-term decisions to be made by investors and/or policy makers at a national and/or regional level, underscoring potential risks and providing appropriate price signals on critical energy projects under real market operating conditions.
Lv, Qiming; Schneider, Manuel K; Pitchford, Jonathan W
2008-08-01
We study individual plant growth and size hierarchy formation in an experimental population of Arabidopsis thaliana, within an integrated analysis that explicitly accounts for size-dependent growth, size- and space-dependent competition, and environmental stochasticity. It is shown that a Gompertz-type stochastic differential equation (SDE) model, involving asymmetric competition kernels and a stochastic term which decreases with the logarithm of plant weight, efficiently describes individual plant growth, competition, and variability in the studied population. The model is evaluated within a Bayesian framework and compared to its deterministic counterpart, and to several simplified stochastic models, using distributional validation. We show that stochasticity is an important determinant of size hierarchy and that SDE models outperform the deterministic model if and only if structural components of competition (asymmetry; size- and space-dependence) are accounted for. Implications of these results are discussed in the context of plant ecology and in more general modelling situations.
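A Gompertz-type SDE of the kind described can be integrated with the Euler-Maruyama scheme. In the sketch below the noise amplitude shrinks as log-weight increases, loosely following the abstract's description, and every coefficient is an illustrative assumption:

```python
import random, math

def gompertz_sde_path(w0=0.01, a=1.2, log_k=0.0, steps=400, dt=0.02,
                      sigma=0.3, seed=4):
    """Euler-Maruyama path of a Gompertz-type growth SDE on log-weight,
        d(log W) = a * (log_K - log W) dt + sigma(W) dB,
    where the noise amplitude sigma(W) decreases as the plant grows
    (a loose reading of the abstract; all coefficients illustrative).
    log_k = 0 means an asymptotic weight of 1 (e.g. 1 g)."""
    rng = random.Random(seed)
    lw0 = math.log(w0)
    lw = lw0
    path = [w0]
    for _ in range(steps):
        # amplitude shrinks as log-weight rises above its initial value
        amp = sigma / max(0.2, 1.0 + (lw - lw0))
        lw += a * (log_k - lw) * dt + amp * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(math.exp(lw))
    return path

path = gompertz_sde_path()   # growth from 0.01 toward asymptotic weight 1
```

Simulating many such paths with neighbour-dependent drift terms is the ingredient that, per the abstract, reproduces the size hierarchies a purely deterministic Gompertz model misses.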
Optimal Stochastic Modeling and Control of Flexible Structures
1988-09-01
FluTE, a publicly available stochastic influenza epidemic simulation model.
Directory of Open Access Journals (Sweden)
Dennis L Chao
2010-01-01
Mathematical and computer models of epidemics have contributed to our understanding of the spread of infectious disease and the measures needed to contain or mitigate them. To help prepare for future influenza seasonal epidemics or pandemics, we developed a new stochastic model of the spread of influenza across a large population. Individuals in this model have realistic social contact networks, and transmission and infections are based on the current state of knowledge of the natural history of influenza. The model has been calibrated so that outcomes are consistent with the 1957/1958 Asian A(H2N2) and 2009 pandemic A(H1N1) influenza viruses. We present examples of how this model can be used to study the dynamics of influenza epidemics in the United States and simulate how to mitigate or delay them using pharmaceutical interventions and social distancing measures. Computer simulation models play an essential role in informing public policy and evaluating pandemic preparedness plans. We have made the source code of this model publicly available to encourage its use and further development.
FluTE, a publicly available stochastic influenza epidemic simulation model.
Chao, Dennis L; Halloran, M Elizabeth; Obenchain, Valerie J; Longini, Ira M
2010-01-29
Mathematical and computer models of epidemics have contributed to our understanding of the spread of infectious disease and the measures needed to contain or mitigate them. To help prepare for future influenza seasonal epidemics or pandemics, we developed a new stochastic model of the spread of influenza across a large population. Individuals in this model have realistic social contact networks, and transmission and infections are based on the current state of knowledge of the natural history of influenza. The model has been calibrated so that outcomes are consistent with the 1957/1958 Asian A(H2N2) and 2009 pandemic A(H1N1) influenza viruses. We present examples of how this model can be used to study the dynamics of influenza epidemics in the United States and simulate how to mitigate or delay them using pharmaceutical interventions and social distancing measures. Computer simulation models play an essential role in informing public policy and evaluating pandemic preparedness plans. We have made the source code of this model publicly available to encourage its use and further development.
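FluTE's contact-network machinery is far richer, but the stochastic backbone of such simulators can be illustrated with a chain-binomial SIR model (all rates below are invented for illustration):

```python
import random, math

def stochastic_sir(n=10000, i0=10, beta=0.3, gamma=0.1, seed=6):
    """Chain-binomial stochastic SIR epidemic: each day, every
    susceptible is infected with probability 1 - exp(-beta*I/N) and
    every infective recovers with probability 1 - exp(-gamma).
    Rates are illustrative (R0 = beta/gamma = 3)."""
    rng = random.Random(seed)
    s, i, r = n - i0, i0, 0
    daily_incidence = []
    p_rec = 1.0 - math.exp(-gamma)
    while i > 0:
        p_inf = 1.0 - math.exp(-beta * i / n)
        new_inf = sum(1 for _ in range(s) if rng.random() < p_inf)
        new_rec = sum(1 for _ in range(i) if rng.random() < p_rec)
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        daily_incidence.append(new_inf)
    return s, r, daily_incidence

s_final, r_final, incidence = stochastic_sir()
```

Interventions enter such a model by reducing `beta` (social distancing) or moving people out of the susceptible pool (vaccination), and rerunning with many seeds yields the outcome distributions that a single deterministic run cannot provide.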
A low-bias simulation scheme for the SABR stochastic volatility model
B. Chen (Bin); C.W. Oosterlee (Cornelis); J.A.M. van der Weide
2012-01-01
The Stochastic Alpha Beta Rho Stochastic Volatility (SABR-SV) model is widely used in the financial industry for the pricing of fixed income instruments. In this paper we develop a low-bias simulation scheme for the SABR-SV model, which deals efficiently with (undesired)
On cross-currency models with stochastic volatility and correlated interest rates
Grzelak, L.A.; Oosterlee, C.W.
2010-01-01
We construct multi-currency models with stochastic volatility and correlated stochastic interest rates with a full matrix of correlations. We first deal with a foreign exchange (FX) model of Heston-type, in which the domestic and foreign interest rates are generated by the short-rate process of
Oriented stochastic data envelopment models: ranking comparison to stochastic frontier approach
Czech Academy of Sciences Publication Activity Database
Brázdik, František
-, č. 271 (2005), s. 1-46 ISSN 1211-3298 Institutional research plan: CEZ:AV0Z70850503 Keywords : stochastic data envelopment analysis * linear programming * rice farm Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp271.pdf
Modeling Stochastic Route Choice Behaviors with Equivalent Impedance
Directory of Open Access Journals (Sweden)
Jun Li
2015-01-01
Full Text Available A Logit-based route choice model is proposed to address the overlapping and scaling problems in the traditional multinomial Logit model. The nonoverlapping links are defined as a subnetwork, and its equivalent impedance is explicitly calculated in order to simplify network analysis. The overlapping links are repeatedly merged into subnetworks with Logit-based equivalent travel costs. The choice set at each intersection comprises only the virtual equivalent route without overlapping. In order to capture heterogeneity in perception errors across networks of different sizes, different scale parameters are assigned to subnetworks and linked to the topological relationships to avoid estimation burden. The proposed model provides an alternative method to model stochastic route choice behaviors without the overlapping and scaling problems, and it still maintains the simple and closed-form expression of the MNL model. A link-based loading algorithm based on Dial’s algorithm is proposed to obviate route enumeration, and it is suitable for application to large-scale networks. Finally, a comparison between the proposed model and other route choice models is given by numerical examples.
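The equivalent-impedance idea in the abstract above has a compact closed form: a set of parallel non-overlapping links can be replaced by a single virtual link whose cost is the logit "logsum". A minimal sketch (illustrative only; the function names, `theta`, and the cost values are assumptions, not taken from the paper):

```python
import math

def logit_probs(costs, theta=1.0):
    """Multinomial logit choice probabilities for a set of route costs."""
    expu = [math.exp(-theta * c) for c in costs]
    total = sum(expu)
    return [e / total for e in expu]

def equivalent_impedance(costs, theta=1.0):
    """Logsum cost of parallel non-overlapping links, i.e. the single
    'virtual' link that preserves logit flow splits when merged."""
    return -math.log(sum(math.exp(-theta * c) for c in costs)) / theta

p = logit_probs([10.0, 12.0], theta=0.5)
ceq = equivalent_impedance([10.0, 12.0], theta=0.5)
```

Merging links this way bottom-up is what lets the network be collapsed into subnetworks without enumerating full routes.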
A stochastic differential equation model of diurnal cortisol patterns
Brown, E. N.; Meehan, P. M.; Dempster, A. P.
2001-01-01
Circadian modulation of episodic bursts is recognized as the normal physiological pattern of diurnal variation in plasma cortisol levels. The primary physiological factors underlying these diurnal patterns are the ultradian timing of secretory events, circadian modulation of the amplitude of secretory events, infusion of the hormone from the adrenal gland into the plasma, and clearance of the hormone from the plasma by the liver. Each measured plasma cortisol level has an error arising from the cortisol immunoassay. We demonstrate that all of these physiological principles can be succinctly summarized in a single stochastic differential equation plus measurement error model and show that physiologically consistent ranges of the model parameters can be determined from published reports. We summarize the model parameters in terms of the multivariate Gaussian probability density and establish the plausibility of the model with a series of simulation studies. Our framework makes possible a sensitivity analysis in which all model parameters are allowed to vary simultaneously. The model offers an approach for simultaneously representing cortisol's ultradian, circadian, and kinetic properties. Our modeling paradigm provides a framework for simulation studies and data analysis that should be readily adaptable to the analysis of other endocrine hormone systems.
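As an illustration of the "single stochastic differential equation plus measurement error" idea, the following is a toy Euler-Maruyama simulation of a mean-reverting process with a circadian drive and i.i.d. Gaussian assay error. The drift form, all parameter values, and the function name are assumptions for illustration, not the authors' fitted model:

```python
import math
import random

def simulate_cortisol(T=24.0, dt=0.01, lam=1.0, sigma=0.3, meas_sd=0.05, seed=1):
    """Euler-Maruyama simulation of a toy diurnal model:
    dX = lam * (m(t) - X) dt + sigma dW, with a circadian mean m(t),
    plus i.i.d. Gaussian measurement (assay) error on each observation."""
    rng = random.Random(seed)
    n = int(T / dt)
    x = 1.0
    obs = []
    for i in range(n):
        t = i * dt
        # circadian drive peaking in the morning (peak hour is an assumption)
        m = 1.0 + 0.5 * math.cos(2 * math.pi * (t - 8.0) / 24.0)
        x += lam * (m - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        obs.append(x + rng.gauss(0.0, meas_sd))  # observed level with assay error
    return obs

obs = simulate_cortisol()
```

Separating the latent SDE state `x` from the noisy observations `obs` is what makes a sensitivity analysis over physiological and assay parameters possible in one framework.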
Application of a Theorem in Stochastic Models of Elections
Directory of Open Access Journals (Sweden)
Norman Schofield
2010-01-01
Full Text Available Previous empirical research has developed stochastic electoral models for Israel, Turkey, and other polities. The work suggests that convergence to an electoral center (often predicted by electoral models) is a nongeneric phenomenon. In an attempt to explain nonconvergence, a formal model based on intrinsic valence is presented. This theory shows that there are necessary and sufficient conditions for convergence. The necessary condition is that a convergence coefficient c is bounded above by the dimension w of the policy space, while a sufficient condition is that the coefficient is bounded above by 1. This coefficient is defined in terms of the difference in exogenous valences, the “spatial coefficient”, and the electoral variance. The theoretical model is then applied to empirical analyses of elections in the United States and Britain. These empirical models include sociodemographic valence and electoral perceptions of character traits. It is shown that the model implies convergence to positions close to the electoral origin. To explain party divergence, the model is then extended to incorporate activist valences. This extension gives a first-order balance condition that allows the party to calculate the optimal marginal condition to maximize vote share. We argue that the equilibrium positions of presidential candidates in US elections and of party leaders in British elections are principally due to the influence of activists, rather than the centripetal effect of the electorate.
Estimating Stochastic Volatility Models using Prediction-based Estimating Functions
DEFF Research Database (Denmark)
Lunde, Asger; Brix, Anne Floor
In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from … to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF …
Porters versus rowers: a unified stochastic model of motor proteins.
Leibler, S; Huse, D A
1993-06-01
We present a general phenomenological theory for chemical to mechanical energy transduction by motor enzymes which is based on the classical "tight-coupling" mechanism. The associated minimal stochastic model takes explicitly into account both ATP hydrolysis and thermal noise effects. It provides expressions for the hydrolysis rate and the sliding velocity, as functions of the ATP concentration and the number of motor enzymes. It explains in a unified way many results of recent in vitro motility assays. More importantly, the theory provides a natural classification scheme for the motors: it correlates the biochemical and mechanical differences between "porters" such as cellular kinesins or dyneins, and "rowers" such as muscular myosins or flagellar dyneins.
Parameter Estimation in Stochastic Grey-Box Models
DEFF Research Database (Denmark)
Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay
2004-01-01
An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Itô stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term.
Bayesian calibration of power plant models for accurate performance prediction
International Nuclear Information System (INIS)
Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der
2014-01-01
Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard of uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.
Calibration and Confirmation in Geophysical Models
Werndl, Charlotte
2016-04-01
For policy decisions the best geophysical models are needed. To evaluate geophysical models, it is essential that the best available methods for confirmation are used. A hotly debated issue on confirmation in climate science (as well as in philosophy) is the requirement of use-novelty (i.e. that data can only confirm models if they have not already been used before). This talk investigates the issue of use-novelty and double-counting for geophysical models. We will see that the conclusions depend on the framework of confirmation, and that it is not clear that use-novelty is a valid requirement or that double-counting is illegitimate.
Applying Hierarchical Model Calibration to Automatically Generated Items.
Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.
This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…
Cloud-Based Model Calibration Using OpenStudio: Preprint
Energy Technology Data Exchange (ETDEWEB)
Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.
2014-03-01
OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Cloud service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.
Handbook of EOQ inventory problems stochastic and deterministic models and applications
Choi, Tsan-Ming
2013-01-01
This book explores deterministic and stochastic EOQ-model based problems and applications, presenting technical analyses of single-echelon EOQ model based inventory problems, and applications of the EOQ model for multi-echelon supply chain inventory analysis.
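For reference, the deterministic EOQ underlying the problems surveyed in this handbook has the classic square-root form Q* = sqrt(2DK/h) for demand rate D, fixed ordering cost K, and per-unit holding cost h. A minimal sketch (variable names and the example values are illustrative):

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classic deterministic EOQ: the lot size minimizing the sum of
    average ordering and holding costs per unit time."""
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

def total_cost(q, demand_rate, order_cost, holding_cost):
    """Average cost per unit time for lot size q:
    ordering cost D*K/q plus holding cost h*q/2."""
    return demand_rate * order_cost / q + holding_cost * q / 2.0

q = eoq(1000.0, 50.0, 2.0)  # D=1000 units/yr, K=50 per order, h=2 per unit-yr
```

The stochastic variants in the book generalize exactly this cost trade-off to random demand and lead times.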
Calibrating cellular automaton models for pedestrians walking through corners
Dias, Charitha; Lovreglio, Ruggiero
2018-05-01
Cellular Automata (CA) based pedestrian simulation models have gained remarkable popularity as they are simpler and easier to implement compared to other microscopic modeling approaches. However, incorporating traditional floor field representations in CA models to simulate pedestrian corner navigation behavior could result in unrealistic behaviors. Even though several previous studies have attempted to enhance CA models to realistically simulate pedestrian maneuvers around bends, such modifications have not been calibrated or validated against empirical data. In this study, two static floor field (SFF) representations, namely 'discrete representation' and 'continuous representation', are calibrated for CA-models to represent pedestrians' walking behavior around 90° bends. Trajectory data collected through a controlled experiment are used to calibrate these model representations. Calibration results indicate that although both floor field representations can represent pedestrians' corner navigation behavior, the 'continuous' representation fits the data better. Output of this study could be beneficial for enhancing the reliability of existing CA-based models by representing pedestrians' corner navigation behaviors more realistically.
STAVREV, A.
2013-03-01
The uncertainty of geometric imperfections in a series of nominally equal I-beams leads to a variability of corresponding buckling loads. Its analysis requires a stochastic imperfection model, which can be derived either by the simple variation of the critical eigenmode with a scalar random variable, or with the help of the more advanced theory of random fields. The present paper first provides a concise review of the two different modeling approaches, covering theoretical background, assumptions and calibration, and illustrates their integration into commercial finite element software to conduct stochastic buckling analyses with the Monte-Carlo method. The stochastic buckling behavior of an example beam is then simulated with both stochastic models, calibrated from corresponding imperfection measurements. The simulation results show that for different load cases, the response statistics of the buckling load obtained with the eigenmode-based and the random field-based models agree very well. A comparison of our simulation results with corresponding Eurocode 3 limit loads indicates that the design standard is very conservative for compression dominated load cases. © 2013 World Scientific Publishing Company.
Aspects of stochastic models for short-term hydropower scheduling and bidding
Energy Technology Data Exchange (ETDEWEB)
Belsnes, Michael Martin [Sintef Energy, Trondheim (Norway); Follestad, Turid [Sintef Energy, Trondheim (Norway); Wolfgang, Ove [Sintef Energy, Trondheim (Norway); Fosso, Olav B. [Dep. of electric power engineering NTNU, Trondheim (Norway)
2012-07-01
This report discusses challenges met when turning from deterministic to stochastic decision support models for short-term hydropower scheduling and bidding. The report describes characteristics of the short-term scheduling and bidding problem, different market and bidding strategies, and how a stochastic optimization model can be formulated. A review of approaches for stochastic short-term modelling and stochastic modelling for the input variables inflow and market prices is given. The report discusses methods for approximating the predictive distribution of uncertain variables by scenario trees. Benefits of using a stochastic over a deterministic model are illustrated by a case study, where increased profit is obtained to a varying degree depending on the reservoir filling and price structure. Finally, an approach for assessing the effect of using a size restricted scenario tree to approximate the predictive distribution for stochastic input variables is described. The report is a summary of the findings of Work package 1 of the research project “Optimal short-term scheduling of wind and hydro resources”. The project aims at developing a prototype for an operational stochastic short-term scheduling model. Based on the investigations summarized in the report, it is concluded that using a deterministic equivalent formulation of the stochastic optimization problem is convenient and sufficient for obtaining a working prototype. (author)
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
International Nuclear Information System (INIS)
Wang Zhi-Gang; Gao Rui-Mei; Fan Xiao-Ming; Han Qi-Xing
2014-01-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail: the infective persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infectives disappear and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations. (general)
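The ℛ0 threshold described above can be illustrated with a much simpler single-group SIR model with a vaccinated fraction (a deterministic toy sketch, not the paper's multi-group MSIR system; all parameter values are assumptions):

```python
def sir_vacc(beta, gamma, p_vacc, days=500, dt=0.1):
    """Forward-Euler SIR with a fraction p_vacc vaccinated (removed) at t=0.
    The effective reproduction number is roughly R0*(1-p_vacc), R0 = beta/gamma."""
    s, i, r = 1.0 - p_vacc - 1e-4, 1e-4, p_vacc
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return r - p_vacc  # attack rate beyond the initially vaccinated group

epidemic = sir_vacc(beta=0.4, gamma=0.2, p_vacc=0.0)      # R0 = 2 > 1: outbreak
no_epidemic = sir_vacc(beta=0.4, gamma=0.2, p_vacc=0.6)   # R_eff = 0.8 < 1: dies out
```

Above threshold a large fraction of the population is eventually infected; below it the introduction fizzles, which is the dichotomy the Lyapunov analysis makes rigorous for the multi-group case.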
Stochastic Model for the Vocabulary Growth in Natural Languages
Directory of Open Access Journals (Sweden)
Martin Gerlach
2013-05-01
Full Text Available We propose a stochastic model for the number of different words in a given database which incorporates the dependence on the database size and historical changes. The main feature of our model is the existence of two different classes of words: (i) a finite number of core words, which have higher frequency and do not affect the probability of a new word to be used, and (ii) the remaining virtually infinite number of noncore words, which have lower frequency and, once used, reduce the probability of a new word to be used in the future. Our model relies on a careful analysis of the Google Ngram database of books published in the last centuries, and its main consequence is the generalization of Zipf’s and Heaps’ law to two scaling regimes. We confirm that these generalizations yield the best simple description of the data among generic descriptive models and that the two free parameters depend only on the language but not on the database. From the point of view of our model, the main change on historical time scales is the composition of the specific words included in the finite list of core words, which we observe to decay exponentially in time with a rate of approximately 30 words per year for English.
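The Heaps'-law behavior referenced above (vocabulary growing sublinearly with text size) can already be reproduced by sampling tokens from a fixed Zipf distribution. This is a toy sketch, not the authors' two-class core/noncore model; the Zipf exponent, vocabulary size, and function name are assumptions:

```python
import random

def vocabulary_growth(n_tokens, zipf_s=1.0, n_types=10_000, seed=0):
    """Sample tokens from a truncated Zipf distribution over n_types words
    and record the number of distinct words seen after each token,
    i.e. an empirical Heaps' curve."""
    rng = random.Random(seed)
    weights = [1.0 / (rank + 1) ** zipf_s for rank in range(n_types)]
    seen, curve = set(), []
    for w in rng.choices(range(n_types), weights=weights, k=n_tokens):
        seen.add(w)
        curve.append(len(seen))
    return curve

curve = vocabulary_growth(20_000)
```

The curve grows quickly at first and then flattens; the paper's contribution is explaining the two distinct scaling regimes of this curve via the core/noncore word split.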
Matrix models and stochastic growth in Donaldson-Thomas theory
Energy Technology Data Exchange (ETDEWEB)
Szabo, Richard J. [Department of Mathematics, Heriot-Watt University, Colin Maclaurin Building, Riccarton, Edinburgh EH14 4AS, United Kingdom and Maxwell Institute for Mathematical Sciences, Edinburgh (United Kingdom); Tierz, Miguel [Grupo de Fisica Matematica, Complexo Interdisciplinar da Universidade de Lisboa, Av. Prof. Gama Pinto, 2, PT-1649-003 Lisboa (Portugal); Departamento de Analisis Matematico, Facultad de Ciencias Matematicas, Universidad Complutense de Madrid, Plaza de Ciencias 3, 28040 Madrid (Spain)
2012-10-15
We show that the partition functions which enumerate Donaldson-Thomas invariants of local toric Calabi-Yau threefolds without compact divisors can be expressed in terms of specializations of the Schur measure. We also discuss the relevance of the Hall-Littlewood and Jack measures in the context of BPS state counting and study the partition functions at arbitrary points of the Kaehler moduli space. This rewriting in terms of symmetric functions leads to a unitary one-matrix model representation for Donaldson-Thomas theory. We describe explicitly how this result is related to the unitary matrix model description of Chern-Simons gauge theory. This representation is used to show that the generating functions for Donaldson-Thomas invariants are related to tau-functions of the integrable Toda and Toeplitz lattice hierarchies. The matrix model also leads to an interpretation of Donaldson-Thomas theory in terms of non-intersecting paths in the lock-step model of vicious walkers. We further show that these generating functions can be interpreted as normalization constants of a corner growth/last-passage stochastic model.
Matrix models and stochastic growth in Donaldson-Thomas theory
International Nuclear Information System (INIS)
Szabo, Richard J.; Tierz, Miguel
2012-01-01
We show that the partition functions which enumerate Donaldson-Thomas invariants of local toric Calabi-Yau threefolds without compact divisors can be expressed in terms of specializations of the Schur measure. We also discuss the relevance of the Hall-Littlewood and Jack measures in the context of BPS state counting and study the partition functions at arbitrary points of the Kähler moduli space. This rewriting in terms of symmetric functions leads to a unitary one-matrix model representation for Donaldson-Thomas theory. We describe explicitly how this result is related to the unitary matrix model description of Chern-Simons gauge theory. This representation is used to show that the generating functions for Donaldson-Thomas invariants are related to tau-functions of the integrable Toda and Toeplitz lattice hierarchies. The matrix model also leads to an interpretation of Donaldson-Thomas theory in terms of non-intersecting paths in the lock-step model of vicious walkers. We further show that these generating functions can be interpreted as normalization constants of a corner growth/last-passage stochastic model.
Integral projection models for finite populations in a stochastic environment.
Vindenes, Yngvild; Engen, Steinar; Saether, Bernt-Erik
2011-05-01
Continuous types of population structure occur when continuous variables such as body size or habitat quality affect the vital parameters of individuals. These structures can give rise to complex population dynamics and interact with environmental conditions. Here we present a model for continuously structured populations with finite size, including both demographic and environmental stochasticity in the dynamics. Using recent methods developed for discrete age-structured models we derive the demographic and environmental variance of the population growth as functions of a continuous state variable. These two parameters, together with the expected population growth rate, are used to define a one-dimensional diffusion approximation of the population dynamics. Thus, a substantial reduction in complexity is achieved as the dynamics of the complex structured model can be described by only three population parameters. We provide methods for numerical calculation of the model parameters and demonstrate the accuracy of the diffusion approximation by computer simulation of specific examples. The general modeling framework makes it possible to analyze and predict future dynamics and extinction risk of populations with various types of structure, and to explore consequences of changes in demography caused by, e.g., climate change or different management decisions. Our results are especially relevant for small populations that are often of conservation concern.
Controlled Nonlinear Stochastic Delay Equations: Part I: Modeling and Approximations
International Nuclear Information System (INIS)
Kushner, Harold J.
2012-01-01
This two-part paper deals with “foundational” issues that have not been previously considered in the modeling and numerical optimization of nonlinear stochastic delay systems. There are new classes of models, such as those with nonlinear functions of several controls (such as products), each with its own delay, controlled random Poisson measure driving terms, admissions control with delayed retrials, and others. There are two basic and interconnected themes for these models. The first, dealt with in this part, concerns the definition of admissible control. The classical definition of an admissible control as a nonanticipative relaxed control is inadequate for these models and needs to be extended. This is needed for the convergence proofs of numerical approximations for optimal controls as well as to have a well-defined model. It is shown that the new classes of admissible controls do not enlarge the range of the value functions, are closed (together with the associated paths) under weak convergence, and are approximable by ordinary controls. The second theme, dealt with in Part II, concerns transportation equation representations, and their role in the development of numerical algorithms with much reduced memory and computational requirements.
A single model procedure for tank calibration function estimation
International Nuclear Information System (INIS)
York, J.C.; Liebetrau, A.M.
1995-01-01
Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages
MT3DMS: Model use, calibration, and validation
Zheng, C.; Hill, Mary C.; Cao, G.; Ma, R.
2012-01-01
MT3DMS is a three-dimensional multi-species solute transport model for solving advection, dispersion, and chemical reactions of contaminants in saturated groundwater flow systems. MT3DMS interfaces directly with the U.S. Geological Survey finite-difference groundwater flow model MODFLOW for the flow solution and supports the hydrologic and discretization features of MODFLOW. MT3DMS contains multiple transport solution techniques in one code, which can often be important, including in model calibration. Since its first release in 1990 as MT3D for single-species mass transport modeling, MT3DMS has been widely used in research projects and practical field applications. This article provides a brief introduction to MT3DMS and presents recommendations about calibration and validation procedures for field applications of MT3DMS. The examples presented suggest the need to consider alternative processes as models are calibrated and suggest opportunities and difficulties associated with using groundwater age in transport model calibration.
Effect of Using Extreme Years in Hydrologic Model Calibration Performance
Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.
2017-12-01
Hydrological models are useful in predicting and developing management strategies for controlling the system behaviour. Specifically they can be used for evaluating streamflow at ungaged catchments, effect of climate change, best management practices on water resources, or identification of pollution sources in a watershed. This study is a part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, first water resources in Ergene Watershed is studied. Streamgages found in the basin are identified and daily streamflow measurements are obtained from State Hydraulic Works of Turkey. Streamflow data is analysed using box-whisker plots, hydrographs and flow-duration curves focusing on identification of extreme periods, dry or wet. Then a hydrological model is developed for Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods including dry and wet ones and the performance of calibration is evaluated using Nash-Sutcliffe Efficiency (NSE), correlation coefficient, percent bias (PBIAS) and root mean square error. It is observed that calibration period affects the model performance, and the main purpose of the development of the hydrological model should guide calibration period selection. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
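The NSE and PBIAS metrics used in the calibration assessment above have simple closed forms. A minimal sketch of both (function names are illustrative; PBIAS follows the common convention in which positive values indicate model underestimation):

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance of observations.
    1 is a perfect fit; values <= 0 mean the model predicts no better
    than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_tot

def pbias(observed, simulated):
    """Percent bias: 100 * sum(obs - sim) / sum(obs).
    Positive values indicate the model underestimates on average."""
    return 100.0 * sum(o - s for o, s in zip(observed, simulated)) / sum(observed)
```

Evaluating these metrics separately on dry and wet calibration periods is exactly what exposes the sensitivity of model performance to the choice of calibration period.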
Crisan, Dan
2011-01-01
"Stochastic Analysis" aims to provide mathematical tools to describe and model high dimensional random systems. Such tools arise in the study of Stochastic Differential Equations and Stochastic Partial Differential Equations, Infinite Dimensional Stochastic Geometry, Random Media and Interacting Particle Systems, Super-processes, Stochastic Filtering, Mathematical Finance, etc. Stochastic Analysis has emerged as a core area of late 20th century Mathematics and is currently undergoing a rapid scientific development. The special volume "Stochastic Analysis 2010" provides a sa
Optical model and calibration of a sun tracker
International Nuclear Information System (INIS)
Volkov, Sergei N.; Samokhvalov, Ignatii V.; Cheong, Hai Du; Kim, Dukhyeon
2016-01-01
Sun trackers are widely used to investigate scattering and absorption of solar radiation in the Earth's atmosphere. We present a method for optimization of the optical altazimuth sun tracker model with output radiation direction aligned with the axis of a stationary spectrometer. The method solves the problem of stability loss in tracker pointing at the Sun near the zenith. An optimal method for tracker calibration at the measurement site is proposed in the present work. A method of moving calibration is suggested for mobile applications in the presence of large temperature differences and errors in the alignment of the optical system of the tracker. - Highlights: • We present an optimal optical sun tracker model for atmospheric spectroscopy. • The problem of loss of stability of tracker pointing at the Sun has been solved. • We propose an optimal method for tracker calibration at a measurement site. • Test results demonstrate the efficiency of the proposed optimization methods.
A stochastic discrete optimization model for designing container terminal facilities
Zukhruf, Febri; Frazila, Russ Bona; Burhani, Jzolanda Tsavalista
2017-11-01
As uncertainty essentially affects the total transportation cost, it remains an important issue in container terminals, which involve several modes and transshipment processes. This paper presents a stochastic discrete optimization model for designing a container terminal, involving decisions on facility improvement actions. The container terminal operation model is constructed by accounting for variations in demand and facility performance. In addition, to illustrate a conflicting issue that arises in practice in terminal operation, the model also takes into account the possible additional delay of facilities due to an increasing amount of equipment, especially container trucks. These variations are intended to reflect the uncertainty in container terminal operation. A Monte Carlo simulation is invoked to propagate the variations by following the observed distributions. The problem is cast as a combinatorial optimization problem for investigating the optimal facility improvement decision. A new variant of glow-worm swarm optimization (GSO), which has rarely been explored in the transportation field, is thus proposed for solving the optimization problem. The applicability of the model is tested using the actual characteristics of a container terminal.
A continuous stochastic model for non-equilibrium dense gases
Sadr, M.; Gorji, M. H.
2017-12-01
While accurate simulations of dense gas flows far from the equilibrium can be achieved by direct simulation adapted to the Enskog equation, the significant computational demand required for collisions appears as a major constraint. In order to cope with that, an efficient yet accurate solution algorithm based on the Fokker-Planck approximation of the Enskog equation is devised in this paper; the approximation is very much associated with the Fokker-Planck model derived from the Boltzmann equation by Jenny et al. ["A solution algorithm for the fluid dynamic equations based on a stochastic model for molecular motion," J. Comput. Phys. 229, 1077-1098 (2010)] and Gorji et al. ["Fokker-Planck model for computational studies of monatomic rarefied gas flows," J. Fluid Mech. 680, 574-601 (2011)]. The idea behind these Fokker-Planck descriptions is to project the dynamics of discrete collisions implied by the molecular encounters into a set of continuous Markovian processes subject to the drift and diffusion. Thereby, the evolution of particles representing the governing stochastic process becomes independent from each other and thus very efficient numerical schemes can be constructed. By close inspection of the Enskog operator, it is observed that the dense gas effects contribute further to the advection of molecular quantities. That motivates a modelling approach where the dense gas corrections can be cast in the extra advection of particles. Therefore, the corresponding Fokker-Planck approximation is derived such that the evolution in the physical space accounts for the dense effects present in the pressure, stress tensor, and heat fluxes. Hence the consistency between the devised Fokker-Planck approximation and the Enskog operator is shown for the velocity moments up to the heat fluxes. For validation studies, a homogeneous gas inside a box besides Fourier, Couette, and lid-driven cavity flow setups is considered. The results based on the Fokker-Planck model are
Kozel, Tomas; Stary, Milos
2017-12-01
The main advantage of stochastic forecasting is the fan of possible values that deterministic forecasting methods cannot provide. The future development of a random process is described better by stochastic than by deterministic forecasting, and discharge in a measurement profile can be treated as a random process. This article presents the construction and application of a forecasting model for a managed large open water reservoir with a supply function. The model is based on neural networks (NN) and zone models; it forecasts average monthly flows from input values of average monthly flow, a trained neural network and random numbers. Part of the data was assigned to one moving zone, created around the last measured average monthly flow, and the correlation matrix was assembled only from data belonging to that zone. The model was compiled to forecast 1 to 12 months ahead, using 2 to 11 backward monthly flows as neural network inputs. The data were freed of asymmetry with the help of the Box-Cox transformation (Box, Cox, 1964), with the transformation parameter r found by optimization, and were then transformed to the standard normal distribution. The data have a monthly step and the forecast is not recurring. A 90-year-long real flow series was used to compile the model: the first 75 years were used for calibration (the input-output relationship matrix) and the last 15 years only for validation. Model outputs were compared with the real flow series. For the comparison between the real flow series (100% successful forecast) and the forecasts, both were applied to the management of an artificial reservoir. The course of water reservoir management using a genetic algorithm (GA) with the real flow series was compared with a fuzzy model using forecasts from the moving zone model. During the evaluation process the best zone size was sought. The results show that the highest number of inputs did not give the best results, and the ideal zone size lies in the interval from 25 to 35, where the course of management was almost the same for
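The Box-Cox step described above (removing asymmetry, with the transformation parameter found by optimization, then standardization) can be sketched as follows; the flow values and the grid-search range are illustrative, not taken from the study:

```python
import math

def boxcox(x, lam):
    """Box-Cox transform of positive data; lam = 0 reduces to the natural logarithm."""
    if lam == 0:
        return [math.log(v) for v in x]
    return [(v ** lam - 1.0) / lam for v in x]

def boxcox_loglik(x, lam):
    """Profile log-likelihood of the Box-Cox parameter under a normality assumption."""
    y = boxcox(x, lam)
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / n
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in x)

flows = [12.0, 35.0, 8.0, 60.0, 22.0, 15.0, 90.0, 11.0]  # hypothetical monthly flows
lams = [i / 20.0 for i in range(-20, 21)]                 # grid search on [-1, 1]
best = max(lams, key=lambda l: boxcox_loglik(flows, l))   # optimized parameter

# Standardize the transformed series toward N(0, 1).
y = boxcox(flows, best)
mean = sum(y) / len(y)
std = math.sqrt(sum((v - mean) ** 2 for v in y) / len(y))
z = [(v - mean) / std for v in y]
```

A coarse grid search is used here for transparency; in practice the likelihood is smooth in the parameter, so any 1-D optimizer would do.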
Bayesian calibration of the Community Land Model using surrogates
Energy Technology Data Exchange (ETDEWEB)
Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton
2014-02-01
We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
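A minimal sketch of surrogate-based Bayesian calibration in the spirit of this approach, using a quadratic Lagrange interpolant as the surrogate and a Metropolis sampler; the toy model, prior, and noise level are invented for illustration and are unrelated to CLM:

```python
import math
import random

random.seed(0)

def expensive_model(theta):
    """Stand-in for an expensive simulator (e.g. a land-surface model run)."""
    return math.sin(theta) + 0.5 * theta

def surrogate(theta, pts):
    """Cheap quadratic Lagrange interpolant built from three model runs."""
    (x0, y0), (x1, y1), (x2, y2) = pts
    return (y0 * (theta - x1) * (theta - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (theta - x0) * (theta - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (theta - x0) * (theta - x1) / ((x2 - x0) * (x2 - x1)))

pts = [(t, expensive_model(t)) for t in (0.0, 1.0, 2.0)]  # three "expensive" runs
obs = expensive_model(1.3)   # treat one model run as the observation (noise-free here)
sigma = 0.1                  # assumed observation-error standard deviation

def log_post(theta):
    """Gaussian likelihood on the surrogate, uniform prior on [0, 2]."""
    if not 0.0 <= theta <= 2.0:
        return -math.inf
    r = obs - surrogate(theta, pts)
    return -0.5 * (r / sigma) ** 2

# Metropolis sampling of the surrogate posterior: all further model
# evaluations hit the interpolant, never the expensive simulator.
theta, samples = 1.0, []
for _ in range(5000):
    prop = theta + random.gauss(0.0, 0.2)
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

post_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

The surrogate's interpolation error biases the posterior slightly away from the true parameter (1.3), which is exactly the structural-error issue the paper's error models are designed to expose.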
Directory of Open Access Journals (Sweden)
El-Diasty, Mohammed; Pagiatakis, Spiros
2009-10-01
In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models, are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using the Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at −40 °C, −20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain an optimal navigation solution for MEMS-based INS/GPS integration.
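A first-order Gauss-Markov process is the building block of the AR-based GM models above; it can be simulated and its correlation time recovered from the lag-1 autocorrelation. All parameter values here are illustrative, not ADIS16364 estimates:

```python
import math
import random

random.seed(1)

tau = 2.0      # correlation time [s] (assumed for illustration)
dt = 0.1       # sampling interval [s]
sigma = 0.05   # driving-noise strength
phi = math.exp(-dt / tau)

# Simulate a first-order Gauss-Markov (AR(1)) sensor-error sequence:
# x[k+1] = exp(-dt/tau) * x[k] + w[k]
x, xs = 0.0, []
for _ in range(20000):
    x = phi * x + sigma * random.gauss(0.0, 1.0)
    xs.append(x)

# Recover the correlation time from the lag-1 autocorrelation coefficient.
n = len(xs)
mean = sum(xs) / n
c0 = sum((v - mean) ** 2 for v in xs) / n
c1 = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1)) / (n - 1)
rho1 = c1 / c0
tau_hat = -dt / math.log(rho1)   # should land near the true tau = 2.0
```

Repeating this fit on data collected at each chamber temperature is, in simplified form, how a temperature-dependent stochastic model can be tabulated.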
Calibration of hydrological models using flow-duration curves
Directory of Open Access Journals (Sweden)
I. K. Westerberg
2011-07-01
The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested, based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of
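The construction of an FDC and a simplified version of the volume-based selection of evaluation points might look as follows; the discharge values are made up, and the equal-volume rule is a schematic reading of the method rather than the paper's exact procedure:

```python
# Build a flow-duration curve and pick evaluation points by equal volume.
flows = [1.2, 0.4, 3.5, 0.9, 7.1, 2.2, 0.6, 5.0, 1.8, 0.3]  # made-up daily discharges

fdc = sorted(flows, reverse=True)                   # discharge vs. exceedance rank
n = len(fdc)
exceedance = [(i + 1) / (n + 1) for i in range(n)]  # Weibull plotting positions

def volume_eps(fdc, k):
    """Select k evaluation points that split the FDC into slices of equal volume."""
    total = sum(fdc)
    eps, acc = [], 0.0
    target = total / (k + 1)
    for i, q in enumerate(fdc):
        acc += q
        # Place the next EP once the accumulated volume crosses the next threshold.
        if acc >= target * (len(eps) + 1) and len(eps) < k:
            eps.append(i)
    return eps

eps = volume_eps(fdc, 3)  # indices into the sorted FDC
```

Because high flows carry most of the volume, the equal-volume rule naturally concentrates EPs toward the wet end of the curve, which matches the paper's finding that this selection calibrates slow flow and recession well without ignoring floods.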
Comparison of stochastic models in Monte Carlo simulation of coated particle fuels
International Nuclear Information System (INIS)
Yu Hui; Nam Zin Cho
2013-01-01
There is growing interest worldwide in very high temperature gas-cooled reactors as candidates for next-generation reactor systems. For the design and analysis of such reactors, whose coated particle fuels randomly distributed in graphite pebbles introduce a double heterogeneity, stochastic transport models are becoming essential. Several models have been reported in the literature, such as coarse lattice models, fine lattice stochastic (FLS) models, random sequential addition (RSA) models, and Metropolis models. The principles and performance of these stochastic models are described and compared in this paper. Compared with the usual fixed lattice methods, sub-FLS modeling allows a more realistic stochastic distribution of fuel particles and thus results in more accurate criticality calculations. Compared with the basic RSA method, sub-FLS modeling requires a simpler and more efficient overlap-checking procedure. (authors)
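The basic RSA idea, placing particles at random and rejecting any candidate that overlaps a previously placed one, can be sketched in a few lines; the dimensions and particle counts are illustrative, and this is the plain RSA check rather than the paper's sub-FLS variant:

```python
import random

random.seed(42)

def rsa_pack(n_particles, radius, box, max_tries=100000):
    """Random sequential addition: place non-overlapping spheres in a cubic box."""
    centers = []
    tries = 0
    while len(centers) < n_particles and tries < max_tries:
        tries += 1
        # Draw a candidate center kept clear of the box walls.
        c = tuple(random.uniform(radius, box - radius) for _ in range(3))
        # Reject the candidate if it overlaps any already-placed sphere.
        ok = all(sum((a - b) ** 2 for a, b in zip(c, p)) >= (2 * radius) ** 2
                 for p in centers)
        if ok:
            centers.append(c)
    return centers

centers = rsa_pack(50, 0.05, 1.0)  # 50 spheres of radius 0.05 in a unit box
```

The naive all-pairs overlap test shown here is O(n) per candidate; this is precisely the cost that lattice-based bookkeeping (as in the sub-FLS approach) is designed to reduce.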
ARIMA-Based Time Series Model of Stochastic Wind Power Generation
DEFF Research Database (Denmark)
Chen, Peiyuan; Pedersen, Troels; Bak-Jensen, Birgitte
2010-01-01
This paper proposes a stochastic wind power model based on an autoregressive integrated moving average (ARIMA) process. The model takes into account the nonstationarity and physical limits of stochastic wind power generation. The model is constructed based on wind power measurement of one year from the Nysted offshore wind farm in Denmark. The proposed limited-ARIMA (LARIMA) model introduces a limiter and characterizes the stochastic wind power generation by mean level, temporal correlation and driving noise. The model is validated against the measurement in terms of temporal correlation and probability distribution. The LARIMA model outperforms a first-order transition matrix based discrete Markov model in terms of temporal correlation, probability distribution and model parameter number. The proposed LARIMA model is further extended to include the monthly variation of the stochastic wind power...
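The limiter idea behind the LARIMA model can be illustrated with a plain AR(1) process hard-limited to the physical power range; the coefficients and the rated capacity used here are invented, not the fitted Nysted values:

```python
import random

random.seed(7)

def simulate_limited_ar1(n, phi=0.95, mean=60.0, noise_sd=8.0, cap=165.0):
    """AR(1) around a mean level, hard-limited to [0, cap] like the LARIMA limiter."""
    p, out = mean, []
    for _ in range(n):
        # Mean level + temporal correlation + driving noise...
        p = mean + phi * (p - mean) + random.gauss(0.0, noise_sd)
        # ...then the limiter enforcing the physical limits of wind power.
        p = min(max(p, 0.0), cap)
        out.append(p)
    return out

power = simulate_limited_ar1(2000)  # hourly wind power series [MW], illustrative
```

The limiter is what keeps the simulated series physically meaningful: an unbounded ARIMA process would occasionally produce negative power or exceed the farm's rated capacity.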
The Threshold of a Stochastic SIRS Model with Vertical Transmission and Saturated Incidence
Directory of Open Access Journals (Sweden)
Chunjuan Zhu
2017-01-01
The threshold of a stochastic SIRS model with vertical transmission and saturated incidence is investigated. If the noise is small, it is shown that the threshold of the stochastic system determines the extinction and persistence of the epidemic. In addition, we find that if the noise is large, the epidemic still prevails. Finally, numerical simulations are given to illustrate the results.
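The role of noise in such a model can be illustrated with a straightforward Euler-Maruyama simulation of an SIRS system whose transmission term is perturbed by Brownian noise; this sketch uses bilinear rather than saturated incidence, omits vertical transmission, and all parameter values are invented:

```python
import math
import random

random.seed(3)

# Euler-Maruyama simulation of a stochastic SIRS model (fractions of a
# closed population, so S + I + R stays equal to 1):
#   dS = (mu - beta*S*I - mu*S + delta*R) dt - sigma*S*I dW
#   dI = (beta*S*I - (mu + gamma)*I) dt    + sigma*S*I dW
#   dR = (gamma*I - (mu + delta)*R) dt
beta, gamma, mu, delta, sigma = 0.5, 0.2, 0.05, 0.1, 0.05
dt, steps = 0.01, 20000
S, I, R = 0.9, 0.1, 0.0
for _ in range(steps):
    dW = random.gauss(0.0, math.sqrt(dt))   # Brownian increment
    dS = (mu - beta * S * I - mu * S + delta * R) * dt - sigma * S * I * dW
    dI = (beta * S * I - (mu + gamma) * I) * dt + sigma * S * I * dW
    dR = (gamma * I - (mu + delta) * R) * dt
    S, I, R = S + dS, I + dI, R + dR
```

With these values the deterministic reproduction number beta/(mu + gamma) = 2 exceeds 1, so for small noise the infection persists near an endemic level rather than dying out, consistent with the threshold behaviour the paper establishes.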
van der Linden, Willem J.
1995-01-01
Dichotomous item response theory (IRT) models can be viewed as families of stochastically ordered distributions of responses to test items. This paper explores several properties of such distributions. The focus is on the conditions under which stochastic order in families of conditional
Hybrid approaches for multiple-species stochastic reaction–diffusion models
International Nuclear Information System (INIS)
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen
2015-01-01
Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on Zassenhaus formula to represent the exponential of a sum of matrices as product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to commonly used Stochastic Simulation Algorithm for equivalent accuracy.
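The Zassenhaus idea of splitting a matrix exponential of a sum into a product of exponentials, with commutator corrections, can be checked numerically in a few lines of plain Python; the matrices are arbitrary 2×2 examples, unrelated to any particular regulatory network:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(A, B, s=1.0):
    """Elementwise A + s*B."""
    return [[a + s * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(A, s):
    return [[s * a for a in row] for row in A]

def expm(A, terms=30):
    """Matrix exponential by Taylor series (adequate for small norms)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = scale(matmul(term, A), 1.0 / k)
        result = madd(result, term)
    return result

def frob_diff(A, B):
    return sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb)) ** 0.5

t = 0.1
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0, 0.0], [1.0, 0.0]]
tA, tB = scale(A, t), scale(B, t)

exact = expm(madd(tA, tB))            # e^{t(A+B)}
order1 = matmul(expm(tA), expm(tB))   # e^{tA} e^{tB}: first-order Zassenhaus split

# Second-order Zassenhaus correction: multiply by e^{-t^2/2 [A, B]}.
comm = madd(matmul(A, B), matmul(B, A), -1.0)   # [A, B] = AB - BA
order2 = matmul(order1, expm(scale(comm, -t * t / 2.0)))

err1 = frob_diff(exact, order1)
err2 = frob_diff(exact, order2)   # noticeably smaller than err1
```

The point exploited by the paper is the same one visible here: when the operators nearly commute (sparse regulatory connectivity), the split product stays accurate with few correction factors, so the exponential of a large sum never has to be formed directly.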