Schilstra, Maria J; Martin, Stephen R
2009-01-01
Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific to the particular application, and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field, applying such techniques in their own work might at first sight seem a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend not to be particularly well covered in the specialist literature, and shows that interesting information may be obtained using even very simple operations in a conventional spreadsheet.
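The general principles this abstract alludes to can indeed be captured in a few lines. The following is a minimal sketch, not the authors' spreadsheet, of Gillespie's direct method for the single decay reaction A → ∅; the rate constant and populations are illustrative assumptions:

```python
import random

def gillespie_decay(n0, k, t_end, seed=0):
    """Direct-method SSA for the decay reaction A -> 0 with rate constant k.

    Returns the jump times and the molecule counts after each jump.
    """
    rng = random.Random(seed)
    t, n = 0.0, n0
    times, counts = [0.0], [n0]
    while t < t_end and n > 0:
        a = k * n                  # total propensity (only one channel here)
        t += rng.expovariate(a)    # exponentially distributed waiting time
        if t > t_end:
            break
        n -= 1                     # the single reaction fires
        times.append(t)
        counts.append(n)
    return times, counts
```

Averaging many realizations recovers the deterministic decay n0·exp(-k·t), while individual runs show the molecular noise that dominates at small copy numbers.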
A retrodictive stochastic simulation algorithm
International Nuclear Information System (INIS)
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-01-01
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
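As a toy illustration of retrodiction, not the authors' algorithm, the initial state of a simple Markov mutation model can be inferred by rejection sampling: draw initial states from a prior, simulate forward, and keep the runs that reproduce the observed final state. The two-state alphabet and transition probabilities below are assumptions:

```python
import random

def retrodict(final_state, states, trans, prior, n_steps, n_samples=10000, seed=1):
    """Rejection-sampling retrodiction for a discrete-time Markov chain.

    Sample initial states from the prior, run the chain forward n_steps,
    and keep runs whose final state matches the observation; the surviving
    initial states approximate the posterior over ancestral states.
    """
    rng = random.Random(seed)
    kept = {s: 0 for s in states}
    for _ in range(n_samples):
        s0 = rng.choices(states, weights=[prior[x] for x in states])[0]
        s = s0
        for _ in range(n_steps):
            s = rng.choices(states, weights=[trans[s][x] for x in states])[0]
        if s == final_state:
            kept[s0] += 1
    total = sum(kept.values())
    return {s: kept[s] / total for s in states} if total else kept
```

For a sticky two-letter chain (stay probability 0.9) observed in state 'A' after one step, Bayes' rule with a uniform prior gives a posterior of 0.9 on initial 'A', which the sampler reproduces.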
Variance decomposition in stochastic simulators.
Le Maître, O P; Knio, O M; Moraes, A
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
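The reformulation described here drives each reaction channel by its own independent unit-rate Poisson process (the random-time-change representation). A minimal sketch of that representation, via Anderson's modified next-reaction method, for the birth-death model mentioned in the abstract; the propensities and parameter values are illustrative assumptions, and the variance-decomposition machinery itself is not reproduced:

```python
import math
import random

def birth_death_rtc(x0, b, d, t_end, seed=0):
    """Birth-death process X -> X+1 (rate b), X -> X-1 (rate d*X),
    simulated in the random-time-change representation: each channel is
    driven by an independent unit-rate Poisson process whose internal
    time is advanced by the integrated propensity."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    T = [0.0, 0.0]                                    # internal time consumed per channel
    P = [rng.expovariate(1.0), rng.expovariate(1.0)]  # next internal firing times
    while t < t_end:
        a = [b, d * x]                                # current propensities
        dt = [(P[k] - T[k]) / a[k] if a[k] > 0 else math.inf for k in range(2)]
        k = 0 if dt[0] <= dt[1] else 1
        if t + dt[k] > t_end:
            break
        t += dt[k]
        for j in range(2):
            T[j] += a[j] * dt[k]                      # advance all internal clocks
        x += 1 if k == 0 else -1
        P[k] += rng.expovariate(1.0)                  # next firing of channel k
    return x
```

Because each channel owns a separate random stream, individual channel realizations can be frozen or resampled independently, which is exactly what a Sobol-Hoeffding variance decomposition requires.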
International Nuclear Information System (INIS)
Garnier, Robert; Chevalier, Marcel
2000-01-01
Studying large and complex industrial sites requires more and more accuracy in modeling. In particular, when considering Spares, Maintenance and Repair/Replacement processes, determining optimal Integrated Logistic Support policies requires a high-level modeling formalism, in order to make the model as close as possible to the real processes considered. Generally, numerical methods are used to process this kind of study. In this paper, we propose an alternative way to determine optimal Integrated Logistic Support policies when dealing with large, complex and distributed multi-policy industrial sites. This method is based on the use of behavioral Monte Carlo simulation, supported by Generalized Stochastic Petri Nets. (author)
Stochastic models: theory and simulation.
Energy Technology Data Exchange (ETDEWEB)
Field, Richard V., Jr.
2008-03-01
Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
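As a minimal example of the kind of sample-generation algorithm the report discusses (a sketch, not code from the report), a stationary Gaussian process with exponential covariance can be sampled by Cholesky-factorizing its covariance matrix; the grid size and correlation length below are assumptions:

```python
import math
import random

def gaussian_process_sample(n, dt, corr_len, sigma=1.0, seed=0):
    """Sample path of a zero-mean stationary Gaussian process with
    exponential covariance C(tau) = sigma^2 * exp(-|tau|/corr_len),
    on n points spaced dt apart, via Cholesky factorization C = L L^T."""
    rng = random.Random(seed)
    C = [[sigma * sigma * math.exp(-abs(i - j) * dt / corr_len)
          for j in range(n)] for i in range(n)]
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):                     # classic Cholesky factorization
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(C[i][i] - s)
            else:
                L[i][j] = (C[i][j] - s) / L[j][j]
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]   # independent standard normals
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]
```

Each call returns one independent sample path; these paths can then serve as random inputs or boundary conditions for a deterministic simulation code, as the report describes. (Cholesky costs O(n³), so spectral or Karhunen-Loève methods are preferred for long paths.)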
Ichikawa, Kazuhisa; Suzuki, Takashi; Murata, Noboru
2010-11-30
Molecular events in biological cells occur in local subregions, where the molecules tend to be small in number. The cytoskeleton, which is important for both the structural changes of cells and their functions, is also a countable entity because of its long fibrous shape. To simulate the local environment using a computer, stochastic simulations should be run. We herein report a new method of stochastic simulation based on random walk and reaction by the collision of all molecules. The microscopic reaction rate P(r) is calculated from the macroscopic rate constant k. The formula involves only local parameters embedded for each molecule. The results of the stochastic simulations of simple second-order, polymerization, Michaelis-Menten-type and other reactions agreed quite well with those of deterministic simulations when the number of molecules was sufficiently large. An analysis of the theory indicated a relationship between variance and the number of molecules in the system, and the results of multiple stochastic simulation runs confirmed this relationship. We simulated Ca²⁺ dynamics in a cell by inward flow from a point on the cell surface and the polymerization of G-actin forming F-actin. Our results showed that this theory and method can be used to simulate spatially inhomogeneous events.
MarkoLAB: A simulator to study ionic channel's stochastic behavior.
da Silva, Robson Rodrigues; Goroso, Daniel Gustavo; Bers, Donald M; Puglisi, José Luis
2017-08-01
Mathematical models of the cardiac cell have started to include Markovian representations of the ionic channels instead of the traditional Hodgkin & Huxley formulations. There are many reasons for this: Markov models are not restricted to the idea of independent gates defining the channel, they allow a more complex description with specific transitions between open, closed or inactivated states, and, more importantly, those states can be closely related to the underlying channel structure and conformational changes. We used the LabVIEW® and MATLAB® programs to implement the simulator MarkoLAB, which allows a dynamical 3D representation of the Markovian model of the channel. Monte Carlo simulation was used to implement the stochastic transitions among states. The user can specify the voltage protocol by setting the holding potential, the step voltage and the duration of the stimuli. The most studied feature of a channel is the current flowing through it. This happens when the channel stays in the open state, but most of the time, as revealed by the low open probability values, the channel remains in the inactivated or closed states. By focusing only on when the channel enters or leaves the open state we are missing most of its activity. MarkoLAB proved to be quite useful for visualizing the whole behavior of the channel and not only when the channel produces a current. Such a dynamic representation provides more complete information about channel kinetics and will be a powerful tool to demonstrate the effect of gene mutations or drugs on channel function. MarkoLAB provides an original way of visualizing the stochastic behavior of a channel. It clarifies concepts such as recovery from inactivation, calcium- versus voltage-dependent inactivation, and tail currents. It is not restricted to ionic channels only but can be extended to other transporters, such as exchangers and pumps. This program is intended as a didactical tool to illustrate the dynamical behavior of a channel.
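Not from MarkoLAB itself, but a minimal sketch of the kind of Monte Carlo gating scheme the abstract describes, reduced to a two-state (closed ↔ open) channel; the rate constants and time step are assumed values:

```python
import random

def simulate_channel(k_open, k_close, dt, n_steps, seed=0):
    """Monte Carlo gating of a two-state channel (0 = closed, 1 = open).

    Per-step transition probabilities are rate * dt, which is valid only
    when rate * dt << 1 (otherwise use exact exponential dwell times).
    """
    rng = random.Random(seed)
    state = 0
    trace = []
    for _ in range(n_steps):
        if state == 0 and rng.random() < k_open * dt:
            state = 1
        elif state == 1 and rng.random() < k_close * dt:
            state = 0
        trace.append(state)
    return trace
```

Averaging the trace gives the open probability, which for this two-state scheme should approach k_open / (k_open + k_close) at steady state; plotting the trace itself shows the stochastic flickering between states that deterministic gating variables hide.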
AESS: Accelerated Exact Stochastic Simulation
Jenkins, David D.; Peterson, Gregory D.
2011-12-01
The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or of ensembles of simulations used for sweeping parameters or for providing statistically significant results.
Program summary. Program title: AESS. Catalogue identifier: AEJW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: University of Tennessee copyright agreement. No. of lines in distributed program, including test data, etc.: 10 861. No. of bytes in distributed program, including test data, etc.: 394 631. Distribution format: tar.gz. Programming language: C for processors, CUDA for NVIDIA GPUs. Computer: developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs; the system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators. Operating system: tested under Ubuntu Linux OS and CentOS 5.5 Linux OS. Classification: 3, 16.12. Nature of problem: simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME. Solution
Stochastic modeling analysis and simulation
Nelson, Barry L
1995-01-01
A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se
Directory of Open Access Journals (Sweden)
E Scholtz
2012-12-01
The cash management of an automated teller machine (ATM) is a multi-objective optimisation problem which aims to maximise the service level provided to customers at minimum cost. This paper focuses on improved cash management in a section of the South African retail banking industry, for which a decision support system (DSS) was developed. This DSS integrates four Operations Research (OR) methods: the vehicle routing problem (VRP), the continuous review policy for inventory management, the knapsack problem and stochastic, discrete-event simulation. The DSS was applied to an ATM network in the Eastern Cape, South Africa, to investigate 90 different scenarios. Results show that the application of a formal vehicle routing method, in conjunction with selected ATM reorder levels and a knapsack-based notes dispensing algorithm, consistently yields higher service levels at lower cost when compared to two other routing approaches. It is concluded that the use of vehicle routing methods is especially beneficial when the bank has substantial control over transportation cost.
Stochastic airspace simulation tool development
2009-10-01
Modeling and simulation are often used to study the physical world when observation may not be practical. The overall goal of a recent and ongoing simulation tool project has been to provide a documented, lifecycle-managed, multi-processor c...
Stochastic analysis for finance with simulations
Choe, Geon Ho
2016-01-01
This book is an introduction to stochastic analysis and quantitative finance; it includes both theoretical and computational methods. Topics covered are stochastic calculus, option pricing, optimal portfolio investment, and interest rate models. Also included are simulations of stochastic phenomena, numerical solutions of the Black–Scholes–Merton equation, Monte Carlo methods, and time series. Basic measure theory is used as a tool to describe probabilistic phenomena. The level of familiarity with computer programming is kept to a minimum. To make the book accessible to a wider audience, some background mathematical facts are included in the first part of the book and also in the appendices. This work attempts to bridge the gap between mathematics and finance by using diagrams, graphs and simulations in addition to rigorous theoretical exposition. Simulations are not only used as the computational method in quantitative finance, but they can also facilitate an intuitive and deeper understanding of theoret...
DEFF Research Database (Denmark)
Nielsen, Steen
2000-01-01
This paper expands the traditional product costing technique by including a stochastic form in a complex production process for product costing. The stochastic phenomenon in flexible manufacturing technologies is seen as an important phenomenon that companies try to decrease or eliminate. DFM has been used for evaluating the appropriateness of the firm's production capability. In this paper a simulation model is developed to analyze the relevant cost behaviour with respect to DFM and to develop a more streamlined process in the layout of the manufacturing process.
Directory of Open Access Journals (Sweden)
Kohei Sasaki
2012-01-01
The radiation-induced bystander effect (RIBE) has been experimentally observed for different types of radiation, cell types, and cell culture conditions. However, the behavior of signal transmission between unirradiated and irradiated cells is not well known. In this study, we have developed a new model for RIBE based on the diffusion of soluble factors in cell cultures using a Monte Carlo technique. The model involves the signal emission probability from bystander cells following Poisson statistics. Simulations with this model show that the spatial configuration of the bystander cells agrees well with that of the corresponding experiments, where the optimal emission probability is estimated through a large number of simulation runs. It is suggested that the most likely probability falls within 0.63-0.92 for mean numbers of emission signals ranging from 1.0 to 2.5.
Simulation of Stochastic Loads for Fatigue Experiments
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Brincker, Rune
1989-01-01
A simple direct simulation method for stochastic fatigue-load generation is described in this paper. The simulation method is based on the assumption that only the peaks of the load process significantly affect the fatigue life. The method requires the conditional distribution functions of load ranges given the last peak values. Analytical estimates of these distribution functions are presented in the paper and compared with estimates based on a more accurate simulation method. In the more accurate simulation method, samples at equidistant times are generated by approximating the stochastic load process by a Markov process. Two different spectra from two tubular joints in an offshore structure (one narrow banded and one wide banded) are considered in an example. The results show that the simple direct method is quite efficient and results in a simulation speed of about 3000 load cycles per second.
Monte Carlo simulation of fully Markovian stochastic geometries
International Nuclear Information System (INIS)
Lepage, Thibaut; Delaby, Lucie; Malvagi, Fausto; Mazzolo, Alain
2010-01-01
The interest in resolving the equation of transport in stochastic media has continued to increase in recent years. For binary stochastic media it is often assumed that the geometry is Markovian, which is never the case in usual environments. In the present paper, based on rigorous mathematical theorems, we construct fully two-dimensional Markovian stochastic geometries and study their main properties. In particular, we determine a percolation threshold p_c = 0.586 ± 0.0015 for such geometries. Finally, Monte Carlo simulations are performed through these geometries and the results compared to homogeneous geometries. (author)
Stochastic Simulation of Process Calculi for Biology
Directory of Open Access Journals (Sweden)
Andrew Phillips
2010-10-01
Biological systems typically involve large numbers of components with complex, highly parallel interactions and intrinsic stochasticity. To model this complexity, numerous programming languages based on process calculi have been developed, many of which are expressive enough to generate unbounded numbers of molecular species and reactions. As a result of this expressiveness, such calculi cannot rely on standard reaction-based simulation methods, which require fixed numbers of species and reactions. Rather than implementing custom stochastic simulation algorithms for each process calculus, we propose to use a generic abstract machine that can be instantiated to a range of process calculi and a range of reaction-based simulation algorithms. The abstract machine functions as a just-in-time compiler, which dynamically updates the set of possible reactions and chooses the next reaction in an iterative cycle. In this short paper we give a brief summary of the generic abstract machine, and show how it can be instantiated with the stochastic simulation algorithm known as Gillespie's Direct Method. We also discuss the wider implications of such an abstract machine, and outline how it can be used to simulate multiple calculi simultaneously within a common framework.
Directory of Open Access Journals (Sweden)
Flávia Melo Rodrigues
2007-01-01
A frequently addressed question in conservation biology is what the chance of survival is for a population over a given number of years under certain conditions of habitat loss and human activities. This can be estimated through an integrated analysis of genetic, demographic and landscape processes, which allows the prediction of more realistic and precise models of population persistence. In this study, we modeled extinction in stochastic environments under inbreeding depression for two canid species, the maned wolf (Chrysocyon brachyurus) and the crab-eating fox (Cerdocyon thous), in southwest Goiás State. Genetic parameters were obtained from six microsatellite loci (Short Tandem Repeats, STR), which allowed estimates of inbreeding levels and of the effective population size under a stepwise mutation model based on heterozygosis. The simulations included twelve alternative scenarios with varying rates of habitat loss, magnitudes of population fluctuation and initial inbreeding levels. ANOVA analyses of the simulation results showed that times to extinction were better explained by demographic parameters. Times to extinction ranged from 352 to 844 years, in the worst and best scenarios respectively, for the large-bodied maned wolf. For the small-bodied crab-eating fox, these same estimates were 422 and 974 years. Simulation results are within the expectation based on knowledge of the species' life history, genetics and demography. They suggest that populations can persist for a reasonable time (i.e., more than 200 years) even under the worst demographic scenario. Our analyses are a starting point for a more focused evaluation of persistence in these populations. Our results can be used in future research aiming at obtaining better estimates of parameters that may, in turn, be used to achieve more appropriate and realistic population viability models at a regional scale.
Quantum simulation of a quantum stochastic walk
Govia, Luke C. G.; Taketani, Bruno G.; Schuhmacher, Peter K.; Wilhelm, Frank K.
2017-03-01
The study of quantum walks has been shown to have a wide range of applications in areas such as artificial intelligence, the study of biological processes, and quantum transport. The quantum stochastic walk (QSW), which allows for incoherent movement of the walker and therefore directionality, is a generalization of the fully coherent quantum walk. While a QSW can always be described in the Lindblad formalism, this does not mean that it can be microscopically derived in the standard weak-coupling limit under the Born-Markov approximation. This restricts the class of QSWs that can be experimentally realized in a simple manner. To circumvent this restriction, we introduce a technique to simulate open-system evolution on a fully coherent quantum computer, using a quantum-trajectories-style approach. We apply this technique to a broad class of QSWs, and show that they can be simulated with minimal experimental resources. Our work opens the path towards the experimental realization of QSWs on large graphs with existing quantum technologies.
Software Tools for Stochastic Simulations of Turbulence
2015-08-28
Keywords: pure sciences; applied sciences; front tracking; large eddy simulations; mesh convergence; stochastic convergence; weak convergence.
Valent, Peter; Paquet, Emmanuel
2017-09-01
A reliable estimate of extreme flood characteristics has always been an active topic in hydrological research. Over the decades a large number of approaches and their modifications have been proposed and used, with various methods utilizing continuous simulation of catchment runoff, being the subject of the most intensive research in the last decade. In this paper a new and promising stochastic semi-continuous method is used to estimate extreme discharges in two mountainous Slovak catchments of the rivers Váh and Hron, in which snow-melt processes need to be taken into account. The SCHADEX method used, couples a precipitation probabilistic model with a rainfall-runoff model used to both continuously simulate catchment hydrological conditions and to transform generated synthetic rainfall events into corresponding discharges. The stochastic nature of the method means that a wide range of synthetic rainfall events were simulated on various historical catchment conditions, taking into account not only the saturation of soil, but also the amount of snow accumulated in the catchment. The results showed that the SCHADEX extreme discharge estimates with return periods of up to 100 years were comparable to those estimated by statistical approaches. In addition, two reconstructed historical floods with corresponding return periods of 100 and 1000 years were compared to the SCHADEX estimates. The results confirmed the usability of the method for estimating design discharges with a recurrence interval of more than 100 years and its applicability in Slovak conditions.
Improved operating strategies for uranium extraction: a stochastic simulation
International Nuclear Information System (INIS)
Broekman, B.R.
1986-01-01
Deterministic and stochastic simulations of a Western Transvaal uranium process are used in this research report to determine more profitable uranium plant operating strategies and to gauge the potential financial benefits of automatic process control. The deterministic simulation model was formulated using empirical and phenomenological process models. The model indicated that profitability increases significantly as the uranium leaching strategy becomes harsher. The stochastic simulation models use process variable distributions corresponding to manually and automatically controlled conditions to investigate the economic gains that may be obtained if a change is made from manual to automatic control of two important process variables. These lognormally distributed variables are the pachuca 1 sulphuric acid concentration and the ferric to ferrous ratio. The stochastic simulations show that automatic process control is justifiable in certain cases. Where the leaching strategy is relatively harsh, such as that in operation during January 1986, it is not possible to justify an automatic control system. Automatic control is, however, justifiable if a relatively mild leaching strategy is adopted. The stochastic and deterministic simulations represent two different approaches to uranium process modelling. This study has indicated the necessity for each approach to be applied in the correct context. It is contended that incorrect conclusions may have been drawn by other investigators in South Africa who failed to consider the two approaches separately
Numerical Simulation of the Heston Model under Stochastic Correlation
Directory of Open Access Journals (Sweden)
Long Teng
2017-12-01
Full Text Available Stochastic correlation models have become increasingly important in financial markets. In order to price vanilla options in stochastic volatility and correlation models, in this work we study the extension of the Heston model obtained by imposing stochastic correlations driven by a stochastic differential equation. We discuss efficient algorithms for the extended Heston model incorporating stochastic correlations. Our numerical experiments show that the proposed algorithms can efficiently provide highly accurate results for the extended Heston model with stochastic correlations. By investigating the effect of stochastic correlations on the implied volatility, we find that the performance of the Heston model can be improved by including stochastic correlations.
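A common way to discretize such a model is an Euler-Maruyama scheme in which the correlation is integrated alongside price and variance. The sketch below assumes an illustrative mean-reverting SDE for the correlation and made-up parameter values; it is not the paper's scheme:

```python
import math
import random

random.seed(0)

# Euler-Maruyama sketch of a Heston-type model whose price-variance
# correlation rho follows its own mean-reverting SDE. All parameter
# values are illustrative assumptions, not those of the paper.
S, v, rho = 100.0, 0.04, -0.5            # price, variance, correlation
r, kappa, theta, xi = 0.02, 2.0, 0.04, 0.3
a, b, c = 1.0, -0.5, 0.2                 # d rho = a*(b - rho) dt + c dW3
dt, n_steps = 1.0 / 252, 252
sqdt = math.sqrt(dt)

for _ in range(n_steps):
    z1, z2, z3 = [random.gauss(0.0, 1.0) for _ in range(3)]
    # Correlate the price and variance drivers with the current rho.
    w_s = z1
    w_v = rho * z1 + math.sqrt(max(1.0 - rho * rho, 0.0)) * z2
    S += r * S * dt + math.sqrt(max(v, 0.0)) * S * w_s * sqdt
    v += kappa * (theta - v) * dt + xi * math.sqrt(max(v, 0.0)) * w_v * sqdt
    v = max(v, 0.0)                      # full truncation for the CIR step
    rho += a * (b - rho) * dt + c * z3 * sqdt
    rho = max(-1.0, min(1.0, rho))       # keep the correlation in [-1, 1]
```

Averaging discounted payoffs over many such paths gives Monte Carlo option prices under the extended model.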
Multiscale Hy3S: Hybrid stochastic simulation for supercomputers
Directory of Open Access Journals (Sweden)
Kaznessis Yiannis N
2006-02-01
Full Text Available Abstract Background Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset with a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Results Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translational elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of the NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users
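The exact algorithm that hybrid methods accelerate can be stated in a few lines. A minimal sketch of the Gillespie direct method for a birth-death process (constitutive production and first-order degradation of a single species; the rate constants are illustrative):

```python
import random

random.seed(1)

k_prod, k_deg = 10.0, 1.0     # production (molecules/time), degradation (1/time)
x, t, t_end = 0, 0.0, 50.0    # copy number, time, stop time
trajectory = []

while t < t_end:
    a1 = k_prod               # propensity of production
    a2 = k_deg * x            # propensity of degradation
    a0 = a1 + a2
    # Waiting time to the next reaction is exponentially distributed.
    t += random.expovariate(a0)
    # Pick the reaction that fires, in proportion to its propensity.
    if random.random() * a0 < a1:
        x += 1
    else:
        x -= 1
    trajectory.append((t, x))

# At stationarity the copy number is Poisson with mean k_prod / k_deg = 10.
```

Because every reaction event is simulated individually, the cost grows with the total propensity, which is what motivates the hybrid partitioning described above.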
Comparative study of different stochastic weather generators for long-term climate data simulation
Climate is one of the single most important factors affecting watershed ecosystems and water resources. The effect of climate variability and change has been studied extensively in some places; in many places, however, assessments are hampered by limited availability of long term continuous climate ...
Stochastic efficiency: five case studies
International Nuclear Information System (INIS)
Proesmans, Karel; Broeck, Christian Van den
2015-01-01
Stochastic efficiency is evaluated in five case studies: driven Brownian motion, effusion with a thermo-chemical and thermo-velocity gradient, a quantum dot and a model for information to work conversion. The salient features of stochastic efficiency, including the maximum of the large deviation function at the reversible efficiency, are reproduced. The approach to and extrapolation into the asymptotic time regime are documented. (paper)
Stochastic optimization-based study of dimerization kinetics
Indian Academy of Sciences (India)
To this end, we study the dimerization kinetics of a protein as a model system. We follow the dimerization kinetics using a stochastic simulation algorithm ... Keywords: optimization; dimerization kinetics; sensitivity analysis; stochastic simulation ...
Directory of Open Access Journals (Sweden)
Elston Timothy C
2004-03-01
Full Text Available Abstract Background Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
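For species treated as continuous, a chemical Langevin equation gives each reaction channel a drift term plus an independent noise term scaled by the square root of its propensity. A dependency-free sketch of this construction for a birth-death motif (illustrative rates; this is the general technique, not BioNetS code):

```python
import math
import random

random.seed(2)

# Chemical Langevin sketch of a birth-death system: each channel
# contributes drift a_j*dt and noise sqrt(a_j)*dW_j. Rates are
# illustrative placeholders.
k_prod, k_deg = 100.0, 1.0
x, dt, n_steps = 100.0, 0.001, 50_000

for _ in range(n_steps):
    a1, a2 = k_prod, k_deg * x         # channel propensities
    drift = (a1 - a2) * dt
    noise = (math.sqrt(a1) * random.gauss(0.0, 1.0)
             - math.sqrt(max(a2, 0.0)) * random.gauss(0.0, 1.0)) * math.sqrt(dt)
    x = max(x + drift + noise, 0.0)    # reflect at zero copy number

# The trajectory fluctuates around the deterministic fixed point
# k_prod / k_deg = 100, with standard deviation of order sqrt(100).
```

The Langevin description is only valid when copy numbers are large enough that a continuous approximation makes sense, which is why hybrid tools let the user choose the representation per species.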
Stochastic sensitivity analysis and Langevin simulation for neural network learning
International Nuclear Information System (INIS)
Koda, Masato
1997-01-01
A comprehensive theoretical framework is proposed for the learning of a class of gradient-type neural networks with an additive Gaussian white noise process. The study is based on stochastic sensitivity analysis techniques, and formal expressions are obtained for stochastic learning laws in terms of functional derivative sensitivity coefficients. The present method, based on Langevin simulation techniques, uses only the internal states of the network and ubiquitous noise to compute the learning information inherent in the stochastic correlation between noise signals and the performance functional. In particular, the method does not require the solution of adjoint equations of the back-propagation type. Thus, the present algorithm has the potential for efficiently learning network weights with significantly fewer computations. Application to an unfolded multi-layered network is described, and the results are compared with those obtained by using a back-propagation method.
Bieda, Bogusław
2014-05-15
The purpose of the paper is to present the results of application of a stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software CrystalBall® (CB), which is associated with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. In order to describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in the steel production management. Copyright © 2013 Elsevier B.V. All rights reserved.
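The kind of Monte Carlo uncertainty propagation described here (normal input distributions, 10,000 trials, frequency statistics on a derived quantity) is easy to reproduce without Crystal Ball. In the sketch below the means, spreads, and the derived coke-per-steel intensity are invented placeholders for illustration, not MSP data:

```python
import random
import statistics

random.seed(3)

def trial():
    # Each inventory input is drawn from a normal distribution, as in
    # the study; the means and spreads here are hypothetical.
    steel = random.gauss(5.0e6, 2.5e5)   # t/year of steel (hypothetical)
    coke = random.gauss(1.2e6, 6.0e4)    # t/year of coke (hypothetical)
    return coke / steel                  # derived coke intensity (t/t)

samples = [trial() for _ in range(10_000)]
mean = statistics.mean(samples)
std = statistics.stdev(samples)
# A histogram of `samples` plays the role of the Crystal Ball
# frequency chart for this derived quantity.
```

Propagating the inputs jointly like this, rather than quoting one standard deviation per input, captures how uncertainties combine in any derived LCI indicator.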
Stochastic simulation of enzyme-catalyzed reactions with disparate timescales.
Barik, Debashis; Paul, Mark R; Baumann, William T; Cao, Yang; Tyson, John J
2008-10-01
Many physiological characteristics of living cells are regulated by protein interaction networks. Because the total numbers of these protein species can be small, molecular noise can have significant effects on the dynamical properties of a regulatory network. Computing these stochastic effects is made difficult by the large timescale separations typical of protein interactions (e.g., complex formation may occur in fractions of a second, whereas catalytic conversions may take minutes). Exact stochastic simulation may be very inefficient under these circumstances, and methods for speeding up the simulation without sacrificing accuracy have been widely studied. We show that the "total quasi-steady-state approximation" for enzyme-catalyzed reactions provides a useful framework for efficient and accurate stochastic simulations. The method is applied to three examples: a simple enzyme-catalyzed reaction where enzyme and substrate have comparable abundances, a Goldbeter-Koshland switch, where a kinase and phosphatase regulate the phosphorylation state of a common substrate, and coupled Goldbeter-Koshland switches that exhibit bistability. Simulations based on the total quasi-steady-state approximation accurately capture the steady-state probability distributions of all components of these reaction networks. In many respects, the approximation also faithfully reproduces time-dependent aspects of the fluctuations. The method is accurate even under conditions of poor timescale separation.
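The total quasi-steady-state approximation replaces the fast binding steps of E + S &lt;-&gt; C -&gt; E + P with a single slow reaction whose rate depends on the total substrate T = S + C: the quasi-steady-state complex level is the smaller root of a quadratic. A sketch of that rate computation with illustrative rate constants (not those of the paper's examples):

```python
import math

def tqssa_rate(T, E_total, k1, k_minus1, k2):
    # Total QSSA for E + S <-> C -> E + P with total substrate
    # T = S + C: the quasi-steady-state complex level C* is the
    # smaller (physical) root of
    #   C^2 - (E_total + T + Km) * C + E_total * T = 0,
    # and the slow product-forming step fires at rate k2 * C*.
    Km = (k_minus1 + k2) / k1
    b = E_total + T + Km
    C = (b - math.sqrt(b * b - 4.0 * E_total * T)) / 2.0
    return k2 * C

# Illustrative rate constants; substrate well in excess of enzyme.
rate = tqssa_rate(T=100.0, E_total=10.0, k1=1.0, k_minus1=1.0, k2=0.1)
# With substrate in excess the rate approaches saturation, k2 * E_total.
```

In a stochastic simulation this expression would serve as the propensity of the lumped slow reaction, removing the fast binding/unbinding events from the event loop.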
Stochastic Simulation of Cardiac Ventricular Myocyte Calcium Dynamics and Waves
Tuan, Hoang-Trong Minh; Williams, George S. B.; Chikando, Aristide C.; Sobie, Eric A.; Lederer, W. Jonathan; Jafri, M. Saleet
2011-01-01
A three-dimensional model of calcium dynamics in the rat ventricular myocyte was developed to study the mechanism of calcium homeostasis and pathological calcium dynamics during calcium overload. The model contains 20,000 calcium release units (CRUs) each containing 49 ryanodine receptors. The model simulates calcium sparks with a realistic spontaneous calcium spark rate. It suggests that in addition to the calcium spark-based leak, there is an invisible calcium leak caused by the stochastic ...
Stochastic simulations of calcium contents in sugarcane area
Directory of Open Access Journals (Sweden)
Gener T. Pereira
2015-08-01
Full Text Available ABSTRACT The aim of this study was to quantify and to map the spatial distribution and uncertainty of soil calcium (Ca) content in a sugarcane area by sequential Gaussian and simulated-annealing simulation methods. The study was conducted in the municipality of Guariba, northeast of São Paulo state. A sampling grid with 206 points separated by a distance of 50 m was established, totaling approximately 42 ha. The calcium contents were evaluated in the 0-0.20 m layer. Techniques of geostatistical estimation (ordinary kriging) and stochastic simulation were used. The technique of ordinary kriging does not satisfactorily reproduce the global statistics of the Ca contents. The use of simulation techniques allows the spatial variability pattern of the Ca contents to be reproduced. The techniques of sequential Gaussian simulation and simulated annealing showed significant variations in the Ca contents at the small scale.
Stochastic simulations of the tetracycline operon
Directory of Open Access Journals (Sweden)
Kaznessis Yiannis N
2011-01-01
Full Text Available Abstract Background The tetracycline operon is a self-regulated system. It is found naturally in bacteria where it confers resistance to the antibiotic tetracycline. Because of the performance of the molecular elements of the tetracycline operon, these elements are widely used as parts of synthetic gene networks where the protein production can be efficiently turned on and off in response to the presence or the absence of tetracycline. In this paper, we investigate the dynamics of the tetracycline operon. To this end, we develop a mathematical model guided by experimental findings. Our model consists of biochemical reactions that capture the biomolecular interactions of this intriguing system. Having in mind that small biological systems are subject to stochasticity, we use a stochastic algorithm to simulate the tetracycline operon behavior. A sensitivity analysis of two critical parameters embodied in this system is also performed, providing a useful understanding of the function of this system. Results Simulations generate a timeline of biomolecular events that confer resistance to bacteria against tetracycline. We monitor the amounts of intracellular TetR2 and TetA proteins, the two important regulatory and resistance molecules, as a function of intracellular tetracycline. We find that lack of one of the promoters of the tetracycline operon has no influence on the total behavior of this system, suggesting that this promoter is not essential for Escherichia coli. Sensitivity analysis with respect to the binding strength of tetracycline to repressor and of repressor to operators suggests that these two parameters play a predominant role in the behavior of the system. The results of the simulations agree well with experimental observations such as tight repression, fast gene expression, induction with tetracycline, and small intracellular TetR2 amounts. Conclusions Computer simulations of the tetracycline operon afford augmented insight into the
Stochastic simulations of the tetracycline operon
2011-01-01
Background The tetracycline operon is a self-regulated system. It is found naturally in bacteria where it confers resistance to the antibiotic tetracycline. Because of the performance of the molecular elements of the tetracycline operon, these elements are widely used as parts of synthetic gene networks where the protein production can be efficiently turned on and off in response to the presence or the absence of tetracycline. In this paper, we investigate the dynamics of the tetracycline operon. To this end, we develop a mathematical model guided by experimental findings. Our model consists of biochemical reactions that capture the biomolecular interactions of this intriguing system. Having in mind that small biological systems are subject to stochasticity, we use a stochastic algorithm to simulate the tetracycline operon behavior. A sensitivity analysis of two critical parameters embodied in this system is also performed, providing a useful understanding of the function of this system. Results Simulations generate a timeline of biomolecular events that confer resistance to bacteria against tetracycline. We monitor the amounts of intracellular TetR2 and TetA proteins, the two important regulatory and resistance molecules, as a function of intracellular tetracycline. We find that lack of one of the promoters of the tetracycline operon has no influence on the total behavior of this system, suggesting that this promoter is not essential for Escherichia coli. Sensitivity analysis with respect to the binding strength of tetracycline to repressor and of repressor to operators suggests that these two parameters play a predominant role in the behavior of the system. The results of the simulations agree well with experimental observations such as tight repression, fast gene expression, induction with tetracycline, and small intracellular TetR2 amounts. Conclusions Computer simulations of the tetracycline operon afford augmented insight into the interplay between its molecular
Stochastic simulation of karst conduit networks
Pardo-Igúzquiza, Eulogio; Dowd, Peter A.; Xu, Chaoshui; Durán-Valsero, Juan José
2012-01-01
Karst aquifers have very high spatial heterogeneity. Essentially, they comprise a system of pipes (i.e., the network of conduits) superimposed on rock porosity and on a network of stratigraphic surfaces and fractures. This heterogeneity strongly influences the hydraulic behavior of the karst and it must be reproduced in any realistic numerical model of the karst system that is used as input to flow and transport modeling. However, the directly observed karst conduits are only a small part of the complete karst conduit system, and knowledge of the complete conduit geometry and topology remains spatially limited and uncertain. Thus, there is a special interest in the stochastic simulation of networks of conduits that can be combined with fracture and rock porosity models to provide a realistic numerical model of the karst system. Furthermore, the simulated model may be of interest per se and other uses could be envisaged. The purpose of this paper is to present an efficient method for conditional and non-conditional stochastic simulation of karst conduit networks. The method comprises two stages: generation of conduit geometry and generation of topology. The approach adopted is a combination of a resampling method for generating conduit geometries from templates and a modified diffusion-limited aggregation method for generating the network topology. The authors show that the 3D karst conduit networks generated by the proposed method are statistically similar to observed karst conduit networks or to a hypothesized network model. The statistical similarity is in the sense of reproducing the tortuosity index of conduits, the fractal dimension of the network, the rose diagram of conduit directions, the Z-histogram and Ripley's K-function of the bifurcation points (which differs from a random allocation of those bifurcation points). The proposed method (1) is very flexible, (2) incorporates any experimental data (conditioning information) and (3) can easily be modified when
Stochastic simulation of regional groundwater flow in Beishan area
International Nuclear Information System (INIS)
Dong Yanhui; Li Guomin
2010-01-01
Because of the hydrogeological complexity, traditional assumptions about aquifer characteristics are not appropriate for the groundwater system in the Beishan area. Uncertainty analysis of groundwater models is needed to examine the hydrologic effects of spatial heterogeneity. In this study, the fast Fourier transform spectral method (FFTS) was used to generate random horizontal permeability parameters. Depth decay and vertical anisotropy of hydraulic conductivity were included to build random permeability models. Based on high-performance computers, hundreds of groundwater flow models were simulated. Through the stochastic simulations, the effect of heterogeneity on the groundwater flow pattern was analyzed. (authors)
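Spectral (FFT-based) generation of a correlated random field works by shaping white noise in Fourier space with the square root of a target power spectrum and transforming back. A 1D dependency-free sketch of the idea (a plain O(N²) DFT stands in for the FFT to avoid external libraries; the spectrum shape, grid, and correlation length are illustrative assumptions):

```python
import cmath
import math
import random

random.seed(5)

N, L, corr_len = 64, 64.0, 8.0   # grid points, domain length, correlation length

def idft(spec):
    # Naive inverse DFT; a real FFT would be used in practice.
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

spec = [0j] * N
for k in range(1, N // 2):
    kk = 2.0 * math.pi * k / L
    # Gaussian-shaped power spectrum -> smoothly correlated field.
    power = math.exp(-((kk * corr_len) ** 2) / 2.0)
    coef = math.sqrt(power) * random.gauss(0.0, 1.0) * \
        cmath.exp(2j * math.pi * random.random())
    spec[k] = coef
    spec[N - k] = coef.conjugate()   # Hermitian symmetry -> real field

log_k = idft(spec)                   # zero-mean log-conductivity field
K = [10.0 ** val for val in log_k]   # lognormal hydraulic conductivity
```

Each independent draw of the noise gives one equally probable permeability realization, which is what allows hundreds of flow models to be run for uncertainty analysis.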
Energy Technology Data Exchange (ETDEWEB)
Bieda, Bogusław
2014-05-01
The purpose of the paper is to present the results of application of a stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software CrystalBall® (CB), which is associated with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. In order to describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in the steel production management. - Highlights: • The benefits of Monte Carlo simulation are examined. • The normal probability distribution is studied. • LCI data on the Mittal Steel Poland (MSP) complex in Kraków, Poland date back to 2005. • This is the first assessment of the LCI uncertainties in the Polish steel industry.
MCdevelop - a universal framework for Stochastic Simulations
Slawinska, M.; Jadach, S.
2011-03-01
We present MCdevelop, a universal computer framework for developing and exploiting the wide class of Stochastic Simulations (SS) software. This powerful universal SS software development tool has been derived from a series of scientific projects for precision calculations in high energy physics (HEP), which feature a wide range of functionality in the SS software needed for advanced precision Quantum Field Theory calculations for the past LEP experiments and for the ongoing LHC experiments at CERN, Geneva. MCdevelop is a "spin-off" product of HEP to be exploited in other areas, while it will still serve to develop new SS software for HEP experiments. Typically SS involve independent generation of large sets of random "events", often requiring considerable CPU power. Since SS jobs usually do not share memory, they are easy to parallelize. The efficient development, testing and parallel running of SS software requires a convenient framework to develop software source code, deploy and monitor batch jobs, and merge and analyse results from multiple parallel jobs, even before the production runs are terminated. Throughout the years of development of stochastic simulations for HEP, a sophisticated framework featuring all the above mentioned functionality has been implemented. MCdevelop represents its latest version, written mostly in C++ (GNU compiler gcc). It uses Autotools to build binaries (optionally managed within the KDevelop 3.5.3 Integrated Development Environment (IDE)). It uses the open-source ROOT package for histogramming, graphics and the mechanism of persistency for the C++ objects. MCdevelop helps to run multiple parallel jobs on any computer cluster with an NQS-type batch system. Program summary: Program title: MCdevelop; Catalogue identifier: AEHW_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHW_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
Energy Technology Data Exchange (ETDEWEB)
Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)
2017-03-03
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.
Simulation of nuclear plant operation into a stochastic energy production model
International Nuclear Information System (INIS)
Pacheco, R.L.
1983-04-01
A simulation model of nuclear plant operation is developed to fit into a stochastic energy production model. In order to improve the stochastic model used, and to reduce the computational time added by incorporating the nuclear plant operation model, a study of tail truncation of the unsupplied-demand distribution function has been performed. (E.G.) [pt
Exact and Approximate Stochastic Simulation of Intracellular Calcium Dynamics
Directory of Open Access Journals (Sweden)
Nicolas Wieder
2011-01-01
pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships in the spectrum ranging from stochastic to deterministic algorithms.
Provably unbounded memory advantage in stochastic simulation using quantum mechanics
Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile
2017-10-01
Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.
Provably unbounded memory advantage in stochastic simulation using quantum mechanics
International Nuclear Information System (INIS)
Garner, Andrew J P; Thompson, Jayne; Vedral, Vlatko; Gu, Mile; Liu, Qing
2017-01-01
Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart. (paper)
Vorobiev, O.; Ezzedine, S. M.; Antoun, T.; Glenn, L.
2014-12-01
This work describes a methodology used for large-scale modeling of wave propagation from underground explosions conducted at the Nevada Test Site (NTS) in two different geological settings: fractured granitic rock mass and alluvium deposits. We show that the discrete nature of rock masses as well as the spatial variability of the fabric of alluvium is very important to understand ground motions induced by underground explosions. In order to build a credible conceptual model of the subsurface we integrated the geological, geomechanical and geophysical characterizations conducted during recent tests at the NTS as well as historical data from the characterization during the underground nuclear tests conducted at the NTS. Because detailed site characterization is limited, expensive and, in some instances, impossible, we have numerically investigated the effects of the characterization gaps on the overall response of the system. We performed several computational studies to identify the key important geologic features specific to fractured media, mainly the joints, and those specific to alluvium porous media, mainly the spatial variability of geological alluvium facies characterized by their variances and their integral scales. We have also explored key features common to both geological environments, such as saturation and topography, and assessed which characteristics most affect the ground motion in the near-field and in the far-field. Stochastic representations of these features based on the field characterizations have been implemented in the Geodyn and GeodynL hydrocodes. Both codes were used to guide site characterization efforts in order to provide the essential data to the modeling community. We validate our computational results by comparing the measured and computed ground motion at various ranges. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
International Nuclear Information System (INIS)
Yu, L.; Li, Y.P.; Huang, G.H.
2016-01-01
In this study, an FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS (electric power systems) considering peak demand under uncertainty. FSSOM integrates techniques of SVR (support vector regression), Monte Carlo simulation, and FICMP (fractile interval chance-constrained mixed-integer programming). In FSSOM, uncertainties expressed as fuzzy boundary intervals and random variables can be effectively tackled. In addition, an SVR-coupled Monte Carlo technique is used for predicting the peak electricity demand. The FSSOM is applied to planning EPS for the City of Qingdao, China. Solutions of the electricity generation pattern to satisfy the city's peak demand under different probability levels and p-necessity levels have been generated. Results reveal that the city's electricity supply from renewable energies would be low (only occupying 8.3% of the total electricity generation). Compared with the energy model without considering peak demand, the FSSOM can better guarantee the city's power supply and thus reduce the system failure risk. The findings can help decision makers not only adjust the existing electricity generation/supply pattern but also coordinate the conflicting interactions among system cost, energy supply security, pollutant mitigation, and constraint-violation risk. - Highlights: • FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS. • It can address uncertainties as fuzzy-boundary intervals and random variables. • FSSOM can satisfy peak-electricity demand and optimize power allocation. • Solutions under different probability levels and p-necessity levels are analyzed. • Results create a tradeoff between system cost and peak-electricity demand violation risk.
Ahmet, Kara
2015-01-01
This paper presents a simple model of the provision of higher educational services that considers and exemplifies nonlinear, stochastic, and potentially chaotic processes. I use the methods of system dynamics to simulate these processes in the context of a particular sociologically interesting case, namely that of the Turkish higher education…
Stochastic simulation of nucleation in binary alloys
L’vov, P. E.; Svetukhin, V. V.
2018-06-01
In this study, we simulate nucleation in binary alloys with respect to thermal fluctuations of the alloy composition. The simulation is based on the Cahn–Hilliard–Cook equation. We have considered the influence of some fluctuation parameters (wave vector cutoff and noise amplitude) on the kinetics of nucleation and growth of minority phase precipitates. The obtained results are validated by the example of iron–chromium alloys.
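The Cahn-Hilliard-Cook equation is the Cahn-Hilliard equation plus a conservative thermal-noise term. A 1D explicit finite-difference sketch (grid, mobility, gradient coefficient, and noise amplitude are illustrative assumptions, not the paper's parameters; the noise enters as the divergence of a random flux, so total composition is conserved):

```python
import random

random.seed(4)

# 1D Cahn-Hilliard-Cook sketch: dc/dt = M * lap(mu) + div(noise),
# mu = c^3 - c - kappa * lap(c), periodic boundaries. All numerical
# parameters below are illustrative.
N, dx, dt = 64, 1.0, 0.01
M, kappa, amp = 1.0, 1.0, 0.02

c = [0.3 + 0.01 * random.uniform(-1.0, 1.0) for _ in range(N)]
m0 = sum(c) / N                          # initial mean composition

def lap(u):
    # Periodic second difference.
    return [(u[(i - 1) % N] - 2.0 * u[i] + u[(i + 1) % N]) / dx ** 2
            for i in range(N)]

for _ in range(2000):
    mu = [ci ** 3 - ci - kappa * li for ci, li in zip(c, lap(c))]
    flux = lap(mu)
    # Conservative noise: divergence of a random flux. Both sums
    # telescope to zero on the periodic grid, so no mass is created.
    eta = [amp * random.gauss(0.0, 1.0) for _ in range(N)]
    div_eta = [(eta[(i + 1) % N] - eta[(i - 1) % N]) / (2.0 * dx)
               for i in range(N)]
    c = [ci + dt * M * fi + dt ** 0.5 * di
         for ci, fi, di in zip(c, flux, div_eta)]

# Mean composition is conserved; the fluctuations are what seed
# nucleation of minority-phase precipitates inside the spinodal.
```

Varying the noise amplitude (and, on finer grids, the wave-vector cutoff set by the mesh) changes the nucleation kinetics, which is the effect the study investigates.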
National Research Council Canada - National Science Library
Frazier, John; Chusak, Yaroslav; Foy, Brent
2008-01-01
.... The software uses either exact or approximate stochastic simulation algorithms for generating Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks...
Fast stochastic algorithm for simulating evolutionary population dynamics
Tsimring, Lev; Hasty, Jeff; Mather, William
2012-02-01
Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
Stochastic models to simulate paratuberculosis in dairy herds
DEFF Research Database (Denmark)
Nielsen, Søren Saxmose; Weber, M.F.; Kudahl, Anne Margrethe Braad
2011-01-01
Stochastic simulation models are widely accepted as a means of assessing the impact of changes in daily management and the control of different diseases, such as paratuberculosis, in dairy herds. This paper summarises and discusses the assumptions of four stochastic simulation models and their use...... the models are somewhat different in their underlying principles and do put slightly different values on the different strategies, their overall findings are similar. Therefore, simulation models may be useful in planning paratuberculosis strategies in dairy herds, although as with all models caution...
HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks
Directory of Open Access Journals (Sweden)
Luca Marchetti
2017-01-01
HSimulator is a multithreaded simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementations of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies, including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA). HRSSA, the fastest hybrid algorithm to date, allows for efficient simulation of the models while ensuring exact simulation of the subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other simulators considered. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA).
SELANSI: a toolbox for simulation of stochastic gene regulatory networks.
Pájaro, Manuel; Otero-Muras, Irene; Vázquez, Carlos; Alonso, Antonio A
2018-03-01
Gene regulation is inherently stochastic. In many applications in Systems and Synthetic Biology, such as reverse engineering and the de novo design of genetic circuits, stochastic effects (though potentially crucial) are often neglected due to the high computational cost of stochastic simulations. With advances in these fields there is an increasing need for tools that provide accurate approximations of the stochastic dynamics of gene regulatory networks (GRNs) with reduced computational effort. This work presents SELANSI (SEmi-LAgrangian SImulation of GRNs), a software toolbox for the simulation of stochastic multidimensional gene regulatory networks. SELANSI exploits intrinsic structural properties of gene regulatory networks to accurately approximate the corresponding Chemical Master Equation with a partial integro-differential equation that is solved by a semi-Lagrangian method with high efficiency. Networks under consideration might involve multiple genes with self- and cross-regulation, in which genes can be regulated by different transcription factors. Moreover, the validity of the method is not restricted to a particular type of kinetics. The tool offers total flexibility regarding network topology, kinetics and parameterization, as well as simulation options. SELANSI runs under the MATLAB environment, and is available under the GPLv3 license at https://sites.google.com/view/selansi. antonio@iim.csic.es. © The Author(s) 2017. Published by Oxford University Press.
MONTE CARLO SIMULATION OF MULTIFOCAL STOCHASTIC SCANNING SYSTEM
Directory of Open Access Journals (Sweden)
LIXIN LIU
2014-01-01
Multifocal multiphoton microscopy (MMM) has greatly improved the utilization of excitation light and imaging speed due to parallel multiphoton excitation of the samples and simultaneous detection of the signals, which allows it to perform fast three-dimensional fluorescence imaging. Stochastic scanning can provide continuous, uniform and high-speed excitation of the sample, which makes it a suitable scanning scheme for MMM. In this paper, the graphical programming language LabVIEW is used to achieve stochastic scanning of the two-dimensional galvo scanners by using white noise signals to control the x and y mirrors independently. Moreover, the stochastic scanning process is simulated using the Monte Carlo method. Our results show that MMM can avoid oversampling or subsampling in the scanning area and meet the requirements of uniform sampling by stochastically scanning the individual units of the N × N foci array. Therefore, continuous and uniform scanning over the whole field of view is implemented.
Hoang, Tuan L.; Nazarov, Roman; Kang, Changwoo; Fan, Jiangyuan
2018-07-01
Under the multi-ion irradiation conditions present in accelerated material-testing facilities or fission/fusion nuclear reactors, the combined effects of atomic displacements and radiation products may induce complex synergies in structural materials. However, limited access to multi-ion irradiation facilities and the lack of computational models capable of simulating the evolution of complex defects and their synergies make it difficult to understand the actual physical processes taking place in materials under these extreme conditions. In this paper, we propose the application of pulsed single/dual-beam irradiation as a replacement for expensive steady triple-beam irradiation to study radiation damage in materials under multi-ion irradiation.
Stochastic simulation of off-shore oil terminal systems
International Nuclear Information System (INIS)
Frankel, E.G.; Oberle, J.
1991-01-01
To cope with the problem of uncertainty and conditionality in the planning, design, and operation of offshore oil transshipment terminal systems, a conditional stochastic simulation approach is presented. Examples are shown using SLAM II, a computer simulation language based on GERT, a conditional stochastic network analysis methodology in which the use of resources such as time and money is expressed by the moment generating function of the statistics of the resource requirements. Similarly, each activity has an associated conditional probability of being performed and/or of requiring some of the resources. The terminal system is realistically represented by modelling the statistics of arrivals, loading and unloading times, uncertainties in costs and availabilities, etc.
Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations
Christensen, H. M.; Dawson, A.; Palmer, T.
2017-12-01
Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
Some simulation aspects, from molecular systems to stochastic geometries of pebble bed reactors
International Nuclear Information System (INIS)
Mazzolo, A.
2009-06-01
After a brief presentation of his teaching and supervising activities, the author gives an overview of his research activities: investigation of atoms under high intensity magnetic field (investigation of the electronic structure under these fields), studies of theoretical and numerical electrochemistry (simulation coupling molecular dynamics and quantum calculations, comprehensive simulations of molecular dynamics), and studies relating stochastic geometry and neutron science
An Exploration Algorithm for Stochastic Simulators Driven by Energy Gradients
Directory of Open Access Journals (Sweden)
Anastasia S. Georgiou
2017-06-01
In recent work, we have illustrated the construction of an exploration geometry on free energy surfaces: the adaptive, computer-assisted discovery of an approximate low-dimensional manifold on which the effective dynamics of the system evolves. Constructing such an exploration geometry involves geometry-biased sampling (through both appropriately initialized unbiased molecular dynamics and restraining potentials) and machine learning techniques to organize the intrinsic geometry of the data resulting from the sampling (in particular, diffusion maps, possibly enhanced through an appropriate Mahalanobis-type metric). In this contribution, we detail a method for exploring the conformational space of a stochastic gradient system whose effective free energy surface depends on a smaller number of degrees of freedom than the dimension of the phase space. Our approach comprises two steps. First, we study the local geometry of the free energy landscape using diffusion maps on samples computed through stochastic dynamics. This allows us to automatically identify the relevant coarse variables. Next, we use the information garnered in the previous step to construct a new set of initial conditions for subsequent trajectories. These initial conditions are computed so as to explore the accessible conformational space more efficiently than by continuing the previous, unbiased simulations. We showcase this method on a representative test system.
Simulating biological processes: stochastic physics from whole cells to colonies
Earnest, Tyler M.; Cole, John A.; Luthey-Schulten, Zaida
2018-05-01
The last few decades have revealed the living cell to be a crowded spatially heterogeneous space teeming with biomolecules whose concentrations and activities are governed by intrinsically random forces. It is from this randomness, however, that a vast array of precisely timed and intricately coordinated biological functions emerge that give rise to the complex forms and behaviors we see in the biosphere around us. This seemingly paradoxical nature of life has drawn the interest of an increasing number of physicists, and recent years have seen stochastic modeling grow into a major subdiscipline within biological physics. Here we review some of the major advances that have shaped our understanding of stochasticity in biology. We begin with some historical context, outlining a string of important experimental results that motivated the development of stochastic modeling. We then embark upon a fairly rigorous treatment of the simulation methods that are currently available for the treatment of stochastic biological models, with an eye toward comparing and contrasting their realms of applicability, and the care that must be taken when parameterizing them. Following that, we describe how stochasticity impacts several key biological functions, including transcription, translation, ribosome biogenesis, chromosome replication, and metabolism, before considering how the functions may be coupled into a comprehensive model of a ‘minimal cell’. Finally, we close with our expectation for the future of the field, focusing on how mesoscopic stochastic methods may be augmented with atomic-scale molecular modeling approaches in order to understand life across a range of length and time scales.
Stochastic Simulation Using @ Risk for Dairy Business Investment Decisions
A dynamic, stochastic, mechanistic simulation model of a dairy business was developed to evaluate the cost and benefit streams coinciding with technology investments. The model was constructed to embody the biological and economical complexities of a dairy farm system within a partial budgeting fram...
Analysing initial attack on wildland fires using stochastic simulation.
Jeremy S. Fried; J. Keith Gilless; James. Spero
2006-01-01
Stochastic simulation models of initial attack on wildland fire can be designed to reflect the complexity of the environmental, administrative, and institutional context in which wildland fire protection agencies operate, but such complexity may come at the cost of a considerable investment in data acquisition and management. This cost may be well justified when it...
Powering stochastic reliability models by discrete event simulation
DEFF Research Database (Denmark)
Kozine, Igor; Wang, Xiaoyun
2012-01-01
it difficult to find a solution to the problem. The power of modern computers and recent developments in discrete-event simulation (DES) software enable to diminish some of the drawbacks of stochastic models. In this paper we describe the insights we have gained based on using both Markov and DES models...
Stochastic simulation using @Risk for dairy business investment decisions
Bewley, J.D.; Boehlje, M.D.; Gray, A.W.; Hogeveen, H.; Kenyon, S.J.; Eicher, S.D.; Schutz, M.M.
2010-01-01
Purpose – The purpose of this paper is to develop a dynamic, stochastic, mechanistic simulation model of a dairy business to evaluate the cost and benefit streams coinciding with technology investments. The model was constructed to embody the biological and economical complexities of a dairy farm
Spatially explicit and stochastic simulation of forest landscape fire disturbance and succession
Hong S. He; David J. Mladenoff
1999-01-01
Understanding disturbance and recovery of forest landscapes is a challenge because of complex interactions over a range of temporal and spatial scales. Landscape simulation models offer an approach to studying such systems at broad scales. Fire can be simulated spatially using mechanistic or stochastic approaches. We describe the fire module in a spatially explicit,...
Stochastic search in structural optimization - Genetic algorithms and simulated annealing
Hajela, Prabhat
1993-01-01
An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic-search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
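The simulated annealing method summarized above can be illustrated with a minimal sketch. This is not the authors' implementation; the objective function, step size, and geometric cooling schedule are all illustrative choices, written in Python rather than a structural-optimization code:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.999,
                        n_iter=5000, rng=None):
    """Minimal simulated-annealing sketch for 1-D continuous minimization:
    propose a uniform random perturbation, always accept improvements, and
    accept uphill moves with probability exp(-delta/T) under a geometric
    cooling schedule, tracking the best point seen."""
    rng = rng or random.Random()
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    temp = t0
    for _ in range(n_iter):
        y = x + rng.uniform(-step, step)
        fy = f(y)
        # Metropolis acceptance: downhill always, uphill with prob exp(-delta/T).
        if fy < fx or rng.random() < math.exp(-(fy - fx) / temp):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
        temp *= cooling
    return best_x, best_f

# Multimodal test objective: quadratic envelope with sinusoidal bumps;
# the global minimum is f(0) = 0, with local minima in between.
def objective(x):
    return x * x + 2.0 * math.sin(5.0 * x) ** 2

best_x, best_f = simulated_annealing(objective, x0=2.0, rng=random.Random(5))
print(best_x, best_f)
```

The early high-temperature phase lets the search hop over the sinusoidal barriers that would trap a pure descent method, which is the advantage over gradient-based programming strategies that the abstract emphasizes.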
1987-09-01
inverse transform method to obtain unit-mean exponential random variables, where V_j is the jth random number in a stream of uniform random numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. The same method is used to obtain the conditions for an interim event to occur and to induce the corresponding change of state.
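The inverse transform device mentioned in this excerpt is a standard textbook technique: if U is uniform on (0,1), then X = -ln(U) is exponential with mean one. A minimal Python sketch (not the thesis code; names are illustrative):

```python
import math
import random

def exponential_inverse_transform(u):
    """Map a uniform(0,1) random number u to a unit-mean exponential variate.

    Inverse transform: the exponential CDF is F(x) = 1 - exp(-x), so
    X = F^{-1}(U) = -ln(1 - U); since 1 - U is also uniform(0,1),
    X = -ln(U) has the same distribution.
    """
    return -math.log(u)

# Transform a stream of uniform random numbers into exponential variates.
rng = random.Random(0)
samples = [exponential_inverse_transform(rng.random()) for _ in range(100000)]
print(sum(samples) / len(samples))  # close to 1.0 for a unit-mean exponential
```

Scaling by 1/rate turns these into the inter-event waiting times used throughout discrete-event and stochastic simulation.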
Parallel Stochastic discrete event simulation of calcium dynamics in neuron.
Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W
2017-09-26
The intra-cellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g. spines) are so small, and calcium concentrations so low, that one extra molecule diffusing in by chance can make a nontrivial percentage difference in concentration. These rare events can affect dynamics discretely in such a way that they cannot be evaluated by a deterministic simulation. Stochastic models of such a system provide a more detailed understanding than existing deterministic models because they capture behavior at the molecular level. Our research focuses on the development of a high-performance parallel discrete event simulation environment, Neuron Time Warp (NTW), which is intended for use in the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator which is widely used within the neuroscience community. We simulate two models, a calcium buffer and a calcium wave model. The calcium buffer model is employed in order to verify the correctness and performance of NTW by comparing it to a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.
Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization
Directory of Open Access Journals (Sweden)
Xuefeng Yan
2013-01-01
The simulation and optimization of an actual physical system are usually constructed based on stochastic models, which inherently have both qualitative and quantitative characteristics. Most modeling specifications and frameworks find it difficult to describe the qualitative model directly. In order to deal with expert knowledge, uncertain reasoning, and other qualitative information, a combined qualitative and quantitative modeling specification was proposed based on a hierarchical model structure framework. The new modeling approach is based on a hierarchical model structure which includes the meta-meta model, the meta-model and the high-level model. A description logic system is defined for formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The result shows that the proposed method can describe the complex system more comprehensively, and that the survival probability of the target is higher when qualitative models are introduced into quantitative simulation.
Stochastic Rotation Dynamics simulations of wetting multi-phase flows
Hiller, Thomas; Sanchez de La Lama, Marta; Brinkmann, Martin
2016-06-01
Multi-color Stochastic Rotation Dynamics (SRDmc) has been introduced by Inoue et al. [1,2] as a particle based simulation method to study the flow of emulsion droplets in non-wetting microchannels. In this work, we extend the multi-color method to also account for different wetting conditions. This is achieved by assigning the color information not only to fluid particles but also to virtual wall particles that are required to enforce proper no-slip boundary conditions. To extend the scope of the original SRDmc algorithm to e.g. immiscible two-phase flow with viscosity contrast we implement an angular momentum conserving scheme (SRD+mc). We perform extensive benchmark simulations to show that a mono-phase SRDmc fluid exhibits bulk properties identical to a standard SRD fluid and that SRDmc fluids are applicable to a wide range of immiscible two-phase flows. To quantify the adhesion of a SRD+mc fluid in contact to the walls we measure the apparent contact angle from sessile droplets in mechanical equilibrium. For a further verification of our wettability implementation we compare the dewetting of a liquid film from a wetting stripe to experimental and numerical studies of interfacial morphologies on chemically structured surfaces.
Directory of Open Access Journals (Sweden)
Ron Karl Hoeke
2015-09-01
Wind-wave contributions to tropical cyclone (TC)-induced extreme sea levels are known to be significant in areas with narrow littoral zones, particularly at oceanic islands. Despite this, little information exists in many of these locations to assess the likelihood of inundation, the relative contribution of wind and wave setup to this inundation, and how it may change with sea level rise (SLR), particularly at scales relevant to coastal infrastructure. In this study, we explore TC-induced extreme sea levels at spatial scales on the order of tens of meters at Apia, the capital of Samoa, a nation in the tropical South Pacific with typical high-island fringing reef morphology. Ensembles of stochastically generated TCs (based on historical information) are combined with numerical simulations of wind waves, storm surge, and wave setup to develop high-resolution statistical information on extreme sea levels and local contributions of wind setup and wave setup. The results indicate that storm track and local morphological details lead to local differences in extreme sea levels on the order of 1 m at spatial scales of less than 1 km. Wave setup is the overall largest contributor at most locations; however, wind setup may exceed wave setup in some sheltered bays. When an arbitrary SLR scenario (+1 m) is introduced, overall extreme sea levels are found to modestly decrease relative to SLR, but wave energy near the shoreline greatly increases, consistent with a number of other recent studies. These differences have implications for coastal adaptation strategies.
New "Tau-Leap" Strategy for Accelerated Stochastic Simulation.
Ramkrishna, Doraiswami; Shu, Che-Chi; Tran, Vu
2014-12-10
The "Tau-Leap" strategy for stochastic simulations of chemical reaction systems due to Gillespie and co-workers has had considerable impact on various applications. This strategy is reexamined with Chebyshev's inequality for random variables, as it provides a rigorous probabilistic basis for a measured τ-leap, thus adding significantly to simulation efficiency. It is also shown that existing strategies for simulation times have no probabilistic assurance that they satisfy the τ-leap criterion, while the use of Chebyshev's inequality leads to a specified degree of certainty with which the τ-leap criterion is satisfied. This reduces the loss of sample paths which do not comply with the τ-leap criterion. The performance of the present algorithm is assessed with respect to one discussed by Cao et al. (J. Chem. Phys. 2006, 124, 044109), a second pertaining to the binomial leap (Tian and Burrage, J. Chem. Phys. 2004, 121, 10356; Chatterjee et al., J. Chem. Phys. 2005, 122, 024112; Peng et al., J. Chem. Phys. 2007, 126, 224109), and a third regarding the midpoint Poisson leap (Peng et al., 2007; Gillespie, J. Chem. Phys. 2001, 115, 1716). The performance assessment is made by estimating the error in the histogram measured against that obtained with the so-called stochastic simulation algorithm. It is shown that the current algorithm displays notably less histogram error than its predecessor for a fixed computation time and, conversely, less computation time for a fixed accuracy. This computational advantage is an asset in the repetitive calculations essential for modeling stochastic systems. The importance of stochastic simulations derives from diverse areas of application in the physical and biological sciences, process systems, and economics. Computational improvements such as those reported herein are therefore of considerable significance.
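The basic fixed-step tau-leap that this line of work refines can be sketched in a few lines of Python. The sketch below uses a toy birth-death system with illustrative rate constants, and deliberately omits the adaptive τ-selection and Chebyshev-based safeguards that are the paper's actual subject:

```python
import math
import random

def poisson(lam, rng):
    """Sample a Poisson variate by Knuth's method (adequate for small means)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap(x0, k_prod, k_deg, tau, t_end, rng):
    """Fixed-step tau-leaping for the birth-death system
    0 -> X (rate k_prod), X -> 0 (rate k_deg * x): over each leap of
    length tau, every channel fires a Poisson number of times with mean
    propensity * tau, with propensities frozen at the start of the leap."""
    x, t = x0, 0.0
    while t < t_end:
        x += poisson(k_prod * tau, rng) - poisson(k_deg * x * tau, rng)
        x = max(x, 0)  # clip the (rare) negative-population excursions
        t += tau
    return x

# Steady-state mean is k_prod / k_deg = 100; average many trajectories.
rng = random.Random(1)
finals = [tau_leap(100, 10.0, 0.1, 0.1, 50.0, rng) for _ in range(200)]
print(sum(finals) / len(finals))  # fluctuates around 100
```

A production-grade leap method would also shrink τ whenever a propensity could change appreciably within one step, which is exactly the τ-leap criterion the abstract discusses.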
Stochastic simulation of destruction processes in self-irradiated materials
Directory of Open Access Journals (Sweden)
T. Patsahan
2017-09-01
Self-irradiation damage resulting from fission processes is a common phenomenon observed in nuclear fuel containing (NFC) materials. Numerous α-decays lead to local structure transformations in NFC materials. The damage appearing due to the impacts of heavy nuclear recoils in the subsurface layer can cause detachment of material particles. Such behaviour is similar to the sputtering processes observed during bombardment of a material surface by a flux of energetic particles. However, in NFC materials the impacts are initiated from the bulk. In this work we propose a two-dimensional mesoscopic model to perform stochastic simulation of the destruction processes occurring in the subsurface region of NFC material. We describe the erosion of the material surface and the evolution of its roughness, and predict the detachment of material particles. Size distributions of the emitted particles are obtained in this study. The simulation results of the model are in qualitative agreement with the size histogram of particles produced from the material containing lava-like fuel formed during the Chernobyl nuclear power plant disaster.
HYDRASTAR - a code for stochastic simulation of groundwater flow
International Nuclear Information System (INIS)
Norman, S.
1992-05-01
The computer code HYDRASTAR was developed as a tool for groundwater flow and transport simulations in the SKB 91 safety analysis project. Its conceptual ideas can be traced back to a report by Shlomo Neuman in 1988, see the reference section. The main idea of the code is the treatment of the rock as a stochastic continuum which separates it from the deterministic methods previously employed by SKB and also from the discrete fracture models. The current report is a comprehensive description of HYDRASTAR including such topics as regularization or upscaling of a hydraulic conductivity field, unconditional and conditional simulation of stochastic processes, numerical solvers for the hydrology and streamline equations and finally some proposals for future developments
Stochastic simulations of normal aging and Werner's syndrome.
Qi, Qi
2014-04-26
Human cells typically contain 23 pairs of chromosomes. Telomeres are repetitive sequences of DNA located at the ends of chromosomes. During cell replication, a number of basepairs are lost from the end of each chromosome, and this shortening restricts the number of divisions that a cell can complete before it becomes senescent, or non-replicative. In this paper, we use Monte Carlo simulations to form a stochastic model of telomere shortening and to investigate how this shortening affects normal aging. Using this model, we study various hypotheses for the way in which shortening occurs by comparing their impact on aging at the chromosome and cell levels. We consider different types of length-dependent loss and replication probabilities to describe these processes. After analyzing a simple model for a population of independent chromosomes, we simulate a population of cells in which each cell has 46 chromosomes and the shortest telomere governs the replicative potential of the cell. We generalize these simulations to Werner's syndrome, a condition in which large sections of DNA are removed during cell division and which, amongst other conditions, results in rapid aging. Since the mechanisms governing the loss of additional basepairs are not known, we use our model to simulate a variety of possible forms for the rate at which additional telomeres are lost per replication and several expressions for how the probability of cell division depends on telomere length. As well as the evolution of the mean telomere length, we consider the standard deviation and the shape of the distribution. We compare our results with a variety of data from the literature, covering both experimental data and previous models. We find good agreement for the evolution of telomere length when plotted against population doubling.
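The core mechanism described above, 46 telomeres per cell with the shortest one governing replicative potential, lends itself to a compact Monte Carlo sketch. This is not the paper's model; the basepair counts, per-division loss range, and senescence threshold below are purely illustrative:

```python
import random

def divisions_until_senescence(n_telomeres=46, initial_bp=10000,
                               loss_low=50, loss_high=100,
                               senescence_bp=2000, rng=None):
    """Minimal sketch of telomere-driven replicative aging: each division
    shortens every telomere by a random number of basepairs, and the cell
    becomes senescent once its *shortest* telomere drops below a threshold.
    All parameter values here are illustrative, not fitted to data."""
    rng = rng or random.Random()
    telomeres = [initial_bp] * n_telomeres
    divisions = 0
    while min(telomeres) > senescence_bp:
        telomeres = [t - rng.randint(loss_low, loss_high) for t in telomeres]
        divisions += 1
    return divisions

# Replicative lifespans across a population of independent cells.
rng = random.Random(2)
counts = [divisions_until_senescence(rng=rng) for _ in range(100)]
print(min(counts), max(counts))
```

Because the shortest of 46 telomeres sets the limit, the cell senesces slightly earlier than a single average telomere would predict, which is the kind of chromosome-versus-cell-level effect the paper's hypotheses compare.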
Stochastic and simulation models of maritime intercept operations capabilities
Sato, Hiroyuki
2005-01-01
The research formulates and exercises stochastic and simulation models to assess the Maritime Intercept Operations (MIO) capabilities. The models focus on the surveillance operations of the Maritime Patrol Aircraft (MPA). The analysis using the models estimates the probability with which a terrorist vessel (Red) is detected, correctly classified, and escorted for intensive investigation and neutralization before it leaves an area of interest (AOI). The difficulty of obtaining adequate int...
STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB
Klingbeil, G.; Erban, R.; Giles, M.; Maini, P. K.
2011-01-01
Motivation: The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new
Mélykúti, Bence; Burrage, Kevin; Zygalakis, Konstantinos C.
2010-01-01
The Chemical Langevin Equation (CLE), which is a stochastic differential equation driven by a multidimensional Wiener process, acts as a bridge between the discrete stochastic simulation algorithm and the deterministic reaction rate equation when
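The CLE described above is a stochastic differential equation and so can be integrated with the standard Euler-Maruyama scheme. A minimal sketch for a single birth-death system, with one independent Wiener increment per reaction channel; the rate constants and step size are illustrative, not taken from the paper:

```python
import math
import random

def cle_euler_maruyama(x0, k_prod, k_deg, dt, t_end, rng):
    """Euler-Maruyama integration of the Chemical Langevin Equation for
    the birth-death system 0 -> X (rate k_prod), X -> 0 (rate k_deg * x):
        dX = (k_prod - k_deg*X) dt + sqrt(k_prod) dW1 - sqrt(k_deg*X) dW2,
    where dW1, dW2 are independent Wiener increments, one per channel."""
    x, t = x0, 0.0
    while t < t_end:
        drift = (k_prod - k_deg * x) * dt
        noise = (math.sqrt(k_prod * dt) * rng.gauss(0.0, 1.0)
                 - math.sqrt(max(k_deg * x, 0.0) * dt) * rng.gauss(0.0, 1.0))
        x = max(x + drift + noise, 0.0)  # keep the continuous state nonnegative
        t += dt
    return x

# Steady-state mean is k_prod / k_deg = 100; average many trajectories.
rng = random.Random(3)
finals = [cle_euler_maruyama(100.0, 10.0, 0.1, 0.01, 50.0, rng) for _ in range(200)]
print(sum(finals) / len(finals))  # fluctuates around 100
```

The state here is continuous, which is precisely the sense in which the CLE sits between the discrete stochastic simulation algorithm and the deterministic reaction rate equation.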
Simulation of anaerobic digestion processes using stochastic algorithm.
Palanichamy, Jegathambal; Palani, Sundarambal
2014-01-01
The Anaerobic Digestion (AD) processes involve numerous complex biological and chemical reactions occurring simultaneously. Appropriate and efficient models need to be developed for the simulation of anaerobic digestion systems. Although several models have been developed, they mostly suffer from a lack of knowledge of constants, from complexity, and from weak generalization. The basis of the deterministic approach for modelling the physicochemical and biochemical reactions occurring in the AD system is the law of mass action, which gives a simple relationship between reaction rates and species concentrations. The assumptions made in deterministic models do not hold true for reactions involving chemical species at low concentration. The stochastic behaviour of the physicochemical processes can be modeled at the mesoscopic level by application of stochastic algorithms. In this paper a stochastic algorithm (the Gillespie tau-leap method) implemented in MATLAB was applied to predict the concentrations of glucose, acids and methane at different time intervals, so that the performance of the digester system can be controlled. The processes given by ADM1 (Anaerobic Digestion Model No. 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for the conversion of glucose into methane through degraders. At higher values of τ (the time step), the computational time required to reach the steady state is greater, since the number of chosen reactions is smaller. When the simulation time step is reduced, the results are similar to those of an ODE solver. It was concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes. The accuracy of the results depends on the optimal selection of the τ value.
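For contrast with the tau-leap method used above, Gillespie's direct (exact) stochastic simulation algorithm fires one reaction at a time: draw an exponential waiting time from the total propensity, then pick which reaction fired. The sketch below uses a two-reaction birth-death toy with illustrative rate constants, not the ADM1 reaction network:

```python
import math
import random

def gillespie_direct(x0, k_prod, k_deg, t_end, rng):
    """Gillespie's direct (exact) SSA for 0 -> X (rate k_prod),
    X -> 0 (rate k_deg * x). Each step samples an exponential waiting
    time from the total propensity a0, then selects one reaction with
    probability proportional to its propensity."""
    x, t = x0, 0.0
    while True:
        a1 = k_prod          # propensity of production
        a2 = k_deg * x       # propensity of degradation
        a0 = a1 + a2
        t += -math.log(rng.random()) / a0  # exponential inter-event time
        if t > t_end:
            return x
        if rng.random() * a0 < a1:
            x += 1
        else:
            x -= 1

# Steady-state mean is k_prod / k_deg = 100; average many trajectories.
rng = random.Random(4)
finals = [gillespie_direct(100, 10.0, 0.1, 20.0, rng) for _ in range(150)]
print(sum(finals) / len(finals))  # fluctuates around 100
```

Because every individual event is simulated, the direct method is exact but slow for fast reactions, which is the cost that tau-leaping trades against accuracy via the choice of τ.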
Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks
Moraes, Alvaro
2015-01-01
even more, we want to achieve this objective with near optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA)[3] and the tau-leap method [2]. Then, we introduce a Multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(T OL−2), this is the same computational complexity as in an exact method but with a smaller constant. We provide numerical examples to show our results.
Stochastic series expansion simulation of the t -V model
Wang, Lei; Liu, Ye-Hua; Troyer, Matthias
2016-04-01
We present an algorithm for the efficient simulation of the half-filled spinless t -V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from the time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t -V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.
Simulation of Stochastic Processes by Coupled ODE-PDE
Zak, Michail
2008-01-01
A document discusses the emergence of randomness in solutions of coupled, fully deterministic ODE-PDE (ordinary differential equation-partial differential equation) systems due to failure of the Lipschitz condition, as a new phenomenon. It is possible to exploit the special properties of ordinary differential equations (represented by an arbitrarily chosen dynamical system) coupled with the corresponding Liouville equations (used to describe the evolution of initial uncertainties in terms of a joint probability distribution) in order to simulate stochastic processes with prescribed probability distributions. The important advantage of the proposed approach is that the simulation does not require a random-number generator.
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourages experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
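The two frameworks contrasted in this abstract can be sketched side by side (here in Python rather than MATLAB). The single decay reaction A → B and its rate constant are illustrative, not taken from the paper's pathway models:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n0, t_end = 0.1, 50, 40.0   # illustrative first-order decay A -> B

# Deterministic reaction-rate equation: dA/dt = -k A, so A(t) = n0 * exp(-k t).
t_grid = np.linspace(0.0, t_end, 200)
a_det = n0 * np.exp(-k * t_grid)

def gillespie_decay(n, t_end):
    """One exact SSA realisation of first-order decay (a CME sample path)."""
    t, times, counts = 0.0, [0.0], [n]
    while n > 0:
        a = k * n                      # total propensity
        t += rng.exponential(1.0 / a)  # waiting time to the next event
        if t > t_end:
            break
        n -= 1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

times, counts = gillespie_decay(n0, t_end)
# With only 50 molecules, a single realisation fluctuates visibly around the
# deterministic curve; averaging many realisations recovers it.
print(counts[-1], a_det[-1])
```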
Hybrid framework for the simulation of stochastic chemical kinetics
International Nuclear Information System (INIS)
Duncan, Andrew; Erban, Radek; Zygalakis, Konstantinos
2016-01-01
Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA) [25]. While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the “fast” reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For low reactant reactions the underlying behaviour is purely discrete, while purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretisations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.
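For reference, the Gillespie direct method that the hybrid scheme builds on can be sketched in a few lines. The birth-death network and its rates below are illustrative, not the paper's jump-diffusion model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal direct-method SSA for a birth-death process:
# 0 -> X at rate b, X -> 0 at rate d*X; stationary mean is b/d.
b, d = 10.0, 0.1               # illustrative rate constants (mean 100)

def ssa(x, t_end):
    t = 0.0
    while True:
        a = np.array([b, d * x])        # channel propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)  # time to the next reaction
        if t > t_end:
            return x
        # choose which channel fires, with probability proportional to propensity
        if rng.random() * a0 < a[0]:
            x += 1
        else:
            x -= 1

samples = [ssa(0, 100.0) for _ in range(200)]
print(np.mean(samples))  # close to the stationary mean b/d = 100
```

Every reaction event is simulated individually, which is exactly why the cost grows with the event frequency, motivating the hybrid approximations described above.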
Natural tracer test simulation by stochastic particle tracking method
International Nuclear Information System (INIS)
Ackerer, P.; Mose, R.; Semra, K.
1990-01-01
Stochastic particle tracking methods are well adapted to 3D transport simulations where the discretization requirements of other methods usually cannot be satisfied. They do, however, need a very accurate approximation of the velocity field. The described code is based on the mixed hybrid finite element method (MHFEM) to calculate the piezometric and velocity fields. The random-walk method is used to simulate mass transport. The main advantages of the MHFEM over FD or FE are the simultaneous calculation of pressure and velocity, which are both treated as unknowns; the possibility of interpolating velocities everywhere; and the continuity of the normal component of the velocity vector from one element to another. For these reasons, the MHFEM is well adapted to particle tracking methods. After a general description of the numerical methods, the model is used to simulate the observations made during the Twin Lake Tracer Test in 1983. A good match is found between observed and simulated heads and concentrations. (Author) (12 refs., 4 figs.)
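A minimal random-walk transport step of the kind described, here reduced to 1D with a uniform velocity field standing in for the MHFEM-computed one; all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random-walk particle tracking for 1D advection-dispersion.
v, D = 1.0, 0.05        # velocity [m/day] and dispersion coefficient [m^2/day]
dt, n_steps = 0.1, 200  # time step and number of steps (t_end = 20 days)
n_particles = 5000

x = np.zeros(n_particles)   # all particles released at x = 0
for _ in range(n_steps):
    # advective drift plus Gaussian dispersive step: dx = v dt + sqrt(2 D dt) Z
    x += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_particles)

# The particle cloud approximates the analytical Gaussian plume:
# mean v*t = 20 m, variance 2*D*t = 2 m^2.
print(x.mean(), x.var())
```

In the paper's setting the drift term would be interpolated from the MHFEM velocity field at each particle position rather than held constant.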
Coarse-graining stochastic biochemical networks: adiabaticity and fast simulations
Energy Technology Data Exchange (ETDEWEB)
Nemenman, Ilya [Los Alamos National Laboratory; Sinitsyn, Nikolai [Los Alamos National Laboratory; Hengartner, Nick [Los Alamos National Laboratory
2008-01-01
We propose a universal approach for analysis and fast simulations of stiff stochastic biochemical kinetics networks, which rests on elimination of fast chemical species without a loss of information about mesoscopic, non-Poissonian fluctuations of the slow ones. Our approach, which is similar to the Born-Oppenheimer approximation in quantum mechanics, follows from the stochastic path integral representation of the cumulant generating function of reaction events. In applications with a small number of chemical reactions, it produces analytical expressions for cumulants of chemical fluxes between the slow variables. This allows for a low-dimensional, interpretable representation and can be used for coarse-grained numerical simulation schemes with a small computational complexity and yet high accuracy. As an example, we derive the coarse-grained description for a chain of biochemical reactions, and show that the coarse-grained and the microscopic simulations are in agreement, but the coarse-grained simulations are three orders of magnitude faster.
Iacus, Stefano M
2018-01-01
The YUIMA package is the first comprehensive R framework based on S4 classes and methods which allows for the simulation of stochastic differential equations driven by Wiener processes, Lévy processes or fractional Brownian motion, as well as CARMA processes. The package performs various central statistical analyses such as quasi maximum likelihood estimation, adaptive Bayes estimation, structural change point analysis, hypotheses testing, asynchronous covariance estimation, lead-lag estimation, LASSO model selection, and so on. YUIMA also supports stochastic numerical analysis by fast computation of the expected value of functionals of stochastic processes through automatic asymptotic expansion by means of the Malliavin calculus. All models can be multidimensional, multiparametric or nonparametric. The book explains briefly the underlying theory for simulation and inference of several classes of stochastic processes and then presents both simulation experiments and applications to real data. Although these ...
Stochastic modeling and simulation of reaction-diffusion system with Hill function dynamics.
Chen, Minghan; Li, Fei; Wang, Shuo; Cao, Young
2017-03-14
Stochastic simulation of reaction-diffusion systems presents great challenges for spatiotemporal biological modeling and simulation. One widely used framework for stochastic simulation of reaction-diffusion systems is the reaction-diffusion master equation (RDME). Previous studies have discovered that for the RDME, as the discretization size approaches zero, the reaction time for bimolecular reactions in high-dimensional domains tends to infinity. In this paper, we demonstrate that in a 1D domain, highly nonlinear reaction dynamics given by a Hill function may also change dramatically when the discretization size falls below a critical value. Moreover, we discuss methods to avoid this problem: smoothing over space, fixed-length smoothing over space, and a hybrid method. Our analysis reveals that the switch-like Hill dynamics reduces to a linear function of discretization size when the discretization size is small enough. The three proposed methods can correctly (to a certain precision) simulate Hill function dynamics in the microscopic RDME system.
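A compartment-based (RDME-style) diffusion step can be sketched as follows. The domain, voxel count and diffusion coefficient are illustrative, and the simple clamp is a crude guard, not one of the paper's proposed methods:

```python
import numpy as np

rng = np.random.default_rng(4)

# RDME-style diffusion sketch: a 1D domain of length L split into K voxels of
# size h; each molecule jumps to a neighbouring voxel with propensity
# d = D / h^2 per direction. Parameters are illustrative only.
L, K, D = 1.0, 20, 0.01
h = L / K
d = D / h**2                 # per-molecule jump rate to each neighbour

n = np.zeros(K, dtype=int)
n[K // 2] = 1000             # all molecules start in the centre voxel

tau, t_end = 0.001, 0.5
for _ in range(int(round(t_end / tau))):
    right = rng.poisson(d * n * tau)   # jumps to the right neighbour
    left = rng.poisson(d * n * tau)    # jumps to the left neighbour
    right[-1] = 0                      # reflecting boundaries: no jump out
    left[0] = 0
    # crude guard against drawing more jumpers than molecules present
    over = np.maximum(right + left - n, 0)
    right = right - np.minimum(over, right)
    left = np.minimum(left, n - right)
    n = n - right - left
    n[1:] += right[:-1]
    n[:-1] += left[1:]

print(n)  # the point source has spread into a bell-shaped profile
```

Shrinking h increases the jump rate d = D/h², which is where the discretization-size sensitivity discussed in the abstract enters.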
Stochastic simulation of ecohydrological interactions between vegetation and groundwater
Dwelle, M. C.; Ivanov, V. Y.; Sargsyan, K.
2017-12-01
The complex interactions between groundwater and vegetation in the Amazon rainforest may yield vital ecophysiological interactions in specific landscape niches such as buffering plant water stress during dry season or suppression of water uptake due to anoxic conditions. Representation of such processes is greatly impacted by both external and internal sources of uncertainty: inaccurate data and subjective choice of model representation. The models that can simulate these processes are complex and computationally expensive, and therefore make it difficult to address uncertainty using traditional methods. We use the ecohydrologic model tRIBS+VEGGIE and a novel uncertainty quantification framework applied to the ZF2 watershed near Manaus, Brazil. We showcase the capability of this framework for stochastic simulation of vegetation-hydrology dynamics. This framework is useful for simulation with internal and external stochasticity, but this work will focus on internal variability of groundwater depth distribution and model parameterizations. We demonstrate the capability of this framework to make inferences on uncertain states of groundwater depth from limited in situ data, and how the realizations of these inferences affect the ecohydrological interactions between groundwater dynamics and vegetation function. We place an emphasis on the probabilistic representation of quantities of interest and how this impacts the understanding and interpretation of the dynamics at the groundwater-vegetation interface.
Stochastic simulation of grain growth during continuous casting
Energy Technology Data Exchange (ETDEWEB)
Ramirez, A. [Department of Aerounatical Engineering, S.E.P.I., E.S.I.M.E., IPN, Instituto Politecnico Nacional (Unidad Profesional Ticoman), Av. Ticoman 600, Col. Ticoman, C.P.07340 (Mexico)]. E-mail: adalop123@mailbanamex.com; Carrillo, F. [Department of Processing Materials, CICATA-IPN Unidad Altamira Tamps (Mexico); Gonzalez, J.L. [Department of Metallurgy and Materials Engineering, E.S.I.Q.I.E.-IPN (Mexico); Lopez, S. [Department of Molecular Engineering of I.M.P., AP 14-805 (Mexico)
2006-04-15
The evolution of microstructure is a very important topic in material science engineering because the solidification conditions of steel billets during continuous casting process affect directly the properties of the final products. In this paper a mathematical model is described in order to simulate the dendritic growth using data of real casting operations; here a combination of deterministic and stochastic methods was used as a function of the solidification time of every node in order to create a reconstruction about the morphology of cast structures.
Stabilizing simulations of complex stochastic representations for quantum dynamical systems
Energy Technology Data Exchange (ETDEWEB)
Perret, C; Petersen, W P, E-mail: wpp@math.ethz.ch [Seminar for Applied Mathematics, ETH, Zurich (Switzerland)
2011-03-04
Path integral representations of quantum dynamics can often be formulated as stochastic differential equations (SDEs). In a series of papers, Corney and Drummond (2004 Phys. Rev. Lett. 93 260401), Deuar and Drummond (2001 Comput. Phys. Commun. 142 442-5), Drummond and Gardiner (1980 J. Phys. A: Math. Gen. 13 2353-68), Gardiner and Zoller (2004 Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics (Springer Series in Synergetics) 3rd edn (Berlin: Springer)) and Gilchrist et al (1997 Phys. Rev. A 55 3014-32) and their collaborators have derived SDEs from coherent states representations for density matrices. Computationally, these SDEs are attractive because they seem simple to simulate. They can be quite unstable, however. In this paper, we consider some of the instabilities and propose a few remedies. Particularly, because the variances of the simulated paths typically grow exponentially, the processes become de-localized in relatively short times. Hence, the issues of boundary conditions and stable integration methods become important. We use the Bose-Einstein Hamiltonian as an example. Our results reveal that it is possible to significantly extend integration times and show the periodic structure of certain functionals.
Studies in the Control of Stochastic Systems
2017-10-31
The control of continuous-time stochastic systems with noise that is Brownian motion or fractional Brownian motion, the control of discrete-time ... in both continuous and discrete time. All of the above types of problems have been studied with the support of this grant. The achievement of these ... scientists and engineers. 2. Math Awareness Months (MAM) (every April for the past twenty-three years). Agenda: workshops each year for fifth ...
A Simulation-Based Dynamic Stochastic Route Choice Model for Evacuation
Directory of Open Access Journals (Sweden)
Xing Zhao
2012-01-01
This paper establishes a dynamic stochastic route choice model for evacuation, to simulate the propagation of traffic flow and estimate stochastic route choice in evacuation situations. The model contains a lane-group-based cell transmission model (CTM), which sets different outflow capacities for links with different turning movements in an evacuation situation; an actual-impedance model, which obtains the impedance of each route in time units at each time interval; and a stochastic route choice model based on probit stochastic user equilibrium. In this model, vehicles loading at each origin at each time interval are assumed to choose an evacuation route given a fixed road network, signal design, and OD demand. As a case study, the proposed model is validated on the network near the Nanjing Olympic Center after the opening ceremony of the 10th National Games of the People's Republic of China. The traffic volumes and clearance times at five exit points of the evacuation zone are calculated by the model and compared with survey data. The results show that the model can appropriately simulate the dynamic route choice and evolution of traffic flow on the network in an evacuation situation.
An efficient parallel stochastic simulation method for analysis of nonviral gene delivery systems
Kuwahara, Hiroyuki
2011-01-01
Gene therapy has a great potential to become an effective treatment for a wide variety of diseases. One of the main challenges in making gene therapy practical in clinical settings is the development of efficient and safe mechanisms to deliver foreign DNA molecules into the nucleus of target cells. Several computational and experimental studies have shown that the design process of synthetic gene transfer vectors can be greatly enhanced by computational modeling and simulation. This paper proposes a novel, effective parallelization of the stochastic simulation algorithm (SSA) for pharmacokinetic models that characterize the rate-limiting, multi-step processes of intracellular gene delivery. While efficient parallelizations of the SSA are still an open problem in a general setting, the proposed parallel simulation method is able to substantially accelerate the next-reaction selection scheme and the reaction update scheme in the SSA by exploiting and decomposing the structures of stochastic gene delivery models. This makes computationally intensive analyses such as parameter optimization and gene-dosage control for specific cell types, gene vectors, and transgene expression stability substantially more practical than they would otherwise be with the standard SSA. Here, we translated the nonviral gene delivery model based on mass-action kinetics by Varga et al. [Molecular Therapy, 4(5), 2001] into a more realistic model that captures intracellular fluctuations based on stochastic chemical kinetics, and as a case study we applied our parallel simulation to this stochastic model. Our results show that our simulation method is able to increase the efficiency of statistical analysis by at least 50% in various settings. © 2011 ACM.
Weiss, Charles J.
2017-01-01
An introduction to digital stochastic simulations for modeling a variety of physical and chemical processes is presented. Despite the importance of stochastic simulations in chemistry, the prevalence of turn-key software solutions can impose a layer of abstraction between the user and the underlying approach obscuring the methodology being…
Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process
Turner, Douglas C.; Ladde, Gangaram S.
2018-03-01
Analytical solutions, discretization schemes and simulation results are presented for the time-delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended under stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by the stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models exhibit the change in behavior of the very recently developed stochastic model of Hazra et al.
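The kind of parametric Gaussian white-noise perturbation described can be illustrated with an Euler-Maruyama sketch for a linear test SDE; this toy model and its coefficients stand in for, and are much simpler than, the delay dynamo model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Euler-Maruyama for a linear SDE with parametric (multiplicative) white noise:
# dX = -a X dt + sigma X dW. Coefficients are illustrative only.
a, sigma, x0 = 1.0, 0.3, 1.0
dt, n_steps, n_paths = 0.001, 2000, 2000

x = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = np.sqrt(dt) * rng.standard_normal(n_paths)  # Brownian increments
    x = x + (-a * x) * dt + sigma * x * dw

# For this geometric-Brownian-type dynamics, E[X(t)] = x0 * exp(-a t).
print(x.mean(), x0 * np.exp(-a * n_steps * dt))
```

A delay term would enter the drift as a function of the state at time t minus the delay, evaluated from the stored path history.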
Experiences using DAKOTA stochastic expansion methods in computational simulations.
Energy Technology Data Exchange (ETDEWEB)
Templeton, Jeremy Alan; Ruthruff, Joseph R.
2012-01-01
Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are increasingly being adopted by modeling and simulation teams to facilitate these analyses. This report presents results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels that may be needed to achieve convergence.
Stochastic simulation and robust design optimization of integrated photonic filters
Directory of Open Access Journals (Sweden)
Weng Tsui-Wei
2016-07-01
Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%–35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.
Numerical studies of the stochastic Korteweg-de Vries equation
International Nuclear Information System (INIS)
Lin Guang; Grinberg, Leopold; Karniadakis, George Em
2006-01-01
We present numerical solutions of the stochastic Korteweg-de Vries equation for three cases corresponding to additive time-dependent noise, multiplicative space-dependent noise and a combination of the two. We employ polynomial chaos for discretization in random space, and discontinuous Galerkin and finite difference for discretization in physical space. The accuracy of the stochastic solutions is investigated by comparing the first two moments against analytical and Monte Carlo simulation results. Of particular interest is the interplay of spatial discretization error with the stochastic approximation error, which is examined for different orders of spatial and stochastic approximation.
Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks
Moraes, Alvaro
2015-01-07
Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from the kinetic point of view, the time evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given an SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, that with high probability produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, find an estimator, M, based on approximate sampled paths of X, such that P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η; moreover, we want to achieve this objective with near-optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a Multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL^{-2}); this is the same computational complexity as in an exact method, but with a smaller constant. We provide numerical examples to show our results.
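The multilevel idea can be illustrated with a two-level estimator for a pure-death network, coupling coarse and fine tau-leap paths through shared Poisson counts (an Anderson-Higham-style coupling). The model and parameters are illustrative, not the talk's full hybrid scheme:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two-level Monte Carlo for the death process X -> X-1 at rate c*X;
# the exact mean is E[X(T)] = x0 * exp(-c*T). Toy parameters only.
c, x0, T = 1.0, 100, 1.0

def tau_leap(x, h):
    """Plain tau-leap path of the death process up to time T."""
    for _ in range(int(round(T / h))):
        x = max(x - min(rng.poisson(c * x * h), x), 0)
    return x

def coupled_pair(h_c):
    """One coupled (fine, coarse) path pair; fine step is h_c / 2."""
    h_f = h_c / 2
    xf = xc = x0
    for _ in range(int(round(T / h_c))):
        ac = c * xc                  # coarse rate frozen over the coarse step
        dc = 0
        for _ in range(2):           # two fine substeps per coarse step
            af = c * xf
            m = min(af, ac)          # shared part of the two propensities
            p_common = rng.poisson(m * h_f)
            p_f = p_common + rng.poisson((af - m) * h_f)
            p_c = p_common + rng.poisson((ac - m) * h_f)
            xf = max(xf - p_f, 0)
            dc += p_c
        xc = max(xc - dc, 0)
    return xf, xc

h0 = 0.1
level0 = np.mean([tau_leap(x0, h0) for _ in range(4000)])
diffs = [f - g for f, g in (coupled_pair(h0) for _ in range(1000))]
estimate = level0 + np.mean(diffs)   # telescoping two-level estimator
print(estimate, x0 * np.exp(-c * T))
```

Because the coupled paths share randomness, the level-difference samples have small variance and need far fewer realisations than the coarse level, which is the source of the MLMC cost saving.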
Mavelli, Fabio; Ruiz-Mirazo, Kepa
2010-09-01
'ENVIRONMENT' is a computational platform that has been developed over the last few years with the aim of stochastically simulating the dynamics and stability of chemically reacting protocellular systems. Here we present and describe some of its main features, showing how the stochastic kinetics approach can be applied to study the time evolution of reaction networks in heterogeneous conditions, particularly when supramolecular lipid structures (micelles, vesicles, etc.) coexist with aqueous domains. These conditions are of special relevance for understanding the origins of cellular, self-reproducing compartments in the context of prebiotic chemistry and evolution. We contrast our simulation results with real lab experiments, with the aim of bringing together theoretical and experimental research on protocell and minimal artificial cell systems.
Green function simulation of Hamiltonian lattice models with stochastic reconfiguration
International Nuclear Information System (INIS)
Beccaria, M.
2000-01-01
We apply a recently proposed Green function Monte Carlo procedure to the study of Hamiltonian lattice gauge theories. This class of algorithms computes quantum vacuum expectation values by averaging over a set of suitably weighted random walkers. By means of a procedure called stochastic reconfiguration, the long-standing problem of keeping the walker population fixed without a priori knowledge of the ground state is completely solved. In the U(1)_2 model, which we choose as our theoretical laboratory, we evaluate the mean plaquette and the vacuum energy per plaquette. We find good agreement with previous works using model-dependent guiding functions for the random walkers. (orig.)
Bieda, Bogusław; Grzesik, Katarzyna
2017-11-01
The study proposes a stochastic approach based on Monte Carlo (MC) simulation for the life cycle assessment (LCA) method, limited to a life cycle inventory (LCI) study of rare earth element (REE) recovery from secondary materials, applied to the New Krankberg Mine in Sweden. The MC method is recognized as an important tool in science and can be considered the most effective quantification approach for uncertainties. The stochastic approach characterizes uncertainties better than deterministic methods, since the uncertainty of data can be expressed through a probability distribution (e.g. through standard deviation or variance). The data used in this study are obtained from: (i) site-specific measured or calculated data; (ii) values based on literature; (iii) the ecoinvent process "rare earth concentrate, 70% REO, from bastnäsite, at beneficiation". Environmental emissions (e.g. particulates, uranium-238, thorium-232), energy and REEs (La, Ce, Nd, Pr, Sm, Dy, Eu, Tb, Y, Sc, Yb, Lu, Tm, Gd) have been inventoried. The study is based on a reference case for the year 2016. The combination of MC analysis with sensitivity analysis is the best way to quantify uncertainty in LCI/LCA. The reliability of LCA results may be uncertain to a certain degree, but this uncertainty can be quantified with the help of the MC method.
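The Monte Carlo treatment of inventory uncertainty can be sketched as follows. The flows, medians and geometric standard deviations below are invented placeholders, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo propagation of inventory uncertainty: sample each uncertain
# flow from a lognormal distribution and look at the output distribution.
n = 100_000

def lognormal(median, gsd, size):
    """Lognormal samples parameterized by median and geometric sd."""
    return rng.lognormal(np.log(median), np.log(gsd), size)

# Hypothetical per-unit flows (values are illustrative only).
particulates = lognormal(2.0, 1.5, n)   # kg
energy = lognormal(150.0, 1.2, n)       # MJ
th232 = lognormal(0.01, 2.0, n)         # kBq

# Example aggregate indicator: a weighted sum of the sampled flows
# (weights are hypothetical characterization factors).
indicator = 1.0 * particulates + 0.01 * energy + 5.0 * th232

print(np.mean(indicator), np.percentile(indicator, [2.5, 97.5]))
```

Sensitivity analysis then follows by, e.g., correlating each sampled input with the indicator to rank the contributions to output variance.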
STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB
Klingbeil, G.
2011-02-25
Motivation: The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new software tool STOCHSIMGPU that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/chemical reaction systems and show that significant gains in efficiency can be made. It is integrated into MATLAB and works with the Systems Biology Toolbox 2 (SBTOOLBOX2) for MATLAB. Results: The GPU-based parallel implementation of the Gillespie stochastic simulation algorithm (SSA), the logarithmic direct method (LDM) and the next reaction method (NRM) is approximately 85 times faster than the sequential implementation of the NRM on a central processing unit (CPU). Using our software does not require any changes to the user's models, since it acts as a direct replacement of the stochastic simulation software of the SBTOOLBOX2. © The Author 2011. Published by Oxford University Press. All rights reserved.
Quasi-continuous stochastic simulation framework for flood modelling
Moustakis, Yiannis; Kossieris, Panagiotis; Tsoukalas, Ioannis; Efstratiadis, Andreas
2017-04-01
Typically, flood modelling in the context of everyday engineering practices is addressed through event-based deterministic tools, e.g., the well-known SCS-CN method. A major shortcoming of such approaches is the ignorance of uncertainty, which is associated with the variability of soil moisture conditions and the variability of rainfall during the storm event. In event-based modeling, the sole expression of uncertainty is the return period of the design storm, which is assumed to represent the acceptable risk of all output quantities (flood volume, peak discharge, etc.). On the other hand, the varying antecedent soil moisture conditions across the basin are represented by means of scenarios (e.g., the three AMC types by SCS), while the temporal distribution of rainfall is represented through standard deterministic patterns (e.g., the alternative blocks method). In order to address these major inconsistencies, simultaneously preserving the simplicity and parsimony of the SCS-CN method, we have developed a quasi-continuous stochastic simulation approach, comprising the following steps: (1) generation of synthetic daily rainfall time series; (2) update of potential maximum soil moisture retention, on the basis of accumulated five-day rainfall; (3) estimation of daily runoff through the SCS-CN formula, using as inputs the daily rainfall and the updated value of soil moisture retention; (4) selection of extreme events and application of the standard SCS-CN procedure for each specific event, on the basis of synthetic rainfall. This scheme requires the use of two stochastic modelling components, namely the CastaliaR model, for the generation of synthetic daily data, and the HyetosMinute model, for the disaggregation of daily rainfall to finer temporal scales. Outcomes of this approach are a large number of synthetic flood events, allowing for expressing the design variables in statistical terms and thus properly evaluating the flood risk.
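Steps (1)-(3) of the scheme can be sketched directly from the standard SCS-CN relations (S = 25400/CN − 254 in mm, Ia = 0.2S, Q = (P − Ia)²/(P − Ia + S)). The curve numbers and the 5-day rainfall threshold below are illustrative placeholders, not the paper's calibrated values.

```python
def potential_retention(cn):
    """Potential maximum retention S (mm) from a curve number."""
    return 25400.0 / cn - 254.0

def scs_cn_runoff(p_mm, s_mm):
    """Daily runoff Q (mm) from the SCS-CN formula with Ia = 0.2*S."""
    ia = 0.2 * s_mm
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s_mm)

def quasi_continuous_runoff(daily_rain, cn_dry=60.0, cn_wet=85.0, p5_wet=40.0):
    """Per day, update S from the accumulated 5-day antecedent rainfall
    (wetter soil -> higher CN, lower S), then apply the SCS-CN formula.
    CN values and the p5_wet threshold are illustrative placeholders."""
    runoff = []
    for i, p in enumerate(daily_rain):
        p5 = sum(daily_rain[max(0, i - 5):i])   # antecedent 5-day rainfall
        cn = cn_wet if p5 > p5_wet else cn_dry
        runoff.append(scs_cn_runoff(p, potential_retention(cn)))
    return runoff
```

Feeding a long synthetic daily rainfall series through `quasi_continuous_runoff` and extracting the extreme events corresponds to step (4) of the scheme.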
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk considerations presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy through static optimization. Using this technique, constraints can likewise be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
Real option valuation of power transmission investments by stochastic simulation
International Nuclear Information System (INIS)
Pringles, Rolando; Olsina, Fernando; Garcés, Francisco
2015-01-01
Network expansions in power markets usually lead to investment decisions subject to substantial irreversibility and uncertainty. Hence, investors need to value the flexibility to change decisions as uncertainty unfolds progressively. Real option analysis is an advanced valuation technique that enables planners to take advantage of market opportunities while preventing or mitigating losses if future conditions evolve unfavorably. In the past, many approaches for valuing real options have been developed. However, applying these methods to value transmission projects is often inappropriate, as revenue cash flows are path-dependent and affected by a myriad of uncertain variables. In this work, a valuation technique based on stochastic simulation and recursive dynamic programming, called Least-Squares Monte Carlo, is applied to properly value the deferral option in a transmission investment. The effects of the option's maturity, the initial outlay and the capital cost upon the value of the postponement option are investigated. Finally, sensitivity analysis determines optimal decision regions in which to execute, postpone or reject the investment projects. - Highlights: • A modern investment appraisal method is applied to value power transmission projects. • The value of the option to postpone the decision to invest in transmission projects is assessed. • Simulation methods are best suited for valuing real options in transmission investments
FERN - a Java framework for stochastic simulation and evaluation of reaction networks.
Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf
2008-08-29
Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either a) do not provide the most efficient simulation algorithms and are difficult to extend, b) cannot be easily integrated into other applications or c) do not allow the user to monitor and intervene in the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation, and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and to observe the simulation progress in a reaction network in real-time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava. Second, it can be used in a straightforward way both as a stand-alone program and within new
A low-bias simulation scheme for the SABR stochastic volatility model
B. Chen (Bin); C.W. Oosterlee (Cornelis); J.A.M. van der Weide
2012-01-01
The Stochastic Alpha Beta Rho Stochastic Volatility (SABR-SV) model is widely used in the financial industry for the pricing of fixed income instruments. In this paper we develop a low-bias simulation scheme for the SABR-SV model, which deals efficiently with (undesired)
Directory of Open Access Journals (Sweden)
Giorgos Minas
2017-07-01
Full Text Available In order to analyse large complex stochastic dynamical models such as those studied in systems biology, there is currently a great need both for analytical tools and for algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called the phase-corrected LNA (pcLNA), overcomes the main limitations of the standard Linear Noise Approximation (LNA), remaining uniformly accurate for long times while maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate as, but much faster than, leaping algorithms and algorithms for integration of diffusion equations. Stochastic versions of published models of the circadian clock and NF-κB system are used to illustrate our results.
Energy Technology Data Exchange (ETDEWEB)
Sun, Kaiyu; Yan, Da; Hong, Tianzhen; Guo, Siyue
2014-02-28
Overtime is a common phenomenon around the world. Overtime drives both internal heat gains from occupants, lighting and plug-loads, and HVAC operation during overtime periods. Overtime leads to longer occupancy hours and extended operation of building services systems beyond normal working hours, thus overtime impacts total building energy use. Current literature lacks methods to model overtime occupancy because overtime is stochastic in nature and varies by individual occupants and by time. To address this gap in the literature, this study aims to develop a new stochastic model based on the statistical analysis of measured overtime occupancy data from an office building. A binomial distribution is used to represent the total number of occupants working overtime, while an exponential distribution is used to represent the duration of overtime periods. The overtime model is used to generate overtime occupancy schedules as an input to the energy model of a second office building. The measured and simulated cooling energy use during the overtime period is compared in order to validate the overtime model. A hybrid approach to energy model calibration is proposed and tested, which combines ASHRAE Guideline 14 for the calibration of the energy model during normal working hours, and a proposed KS test for the calibration of the energy model during overtime. The developed stochastic overtime model and the hybrid calibration approach can be used in building energy simulations to improve the accuracy of results, and better understand the characteristics of overtime in office buildings.
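The two-part model described above is easy to sketch: a binomial draw for how many occupants stay late, then an exponential draw for each stayer's overtime duration. The probability and mean duration used here are placeholders, not the paper's fitted values.

```python
import random

def sample_overtime_day(n_occupants, p_overtime=0.15, mean_hours=1.5, seed=None):
    """One simulated day of overtime occupancy.

    n_staying is a binomial draw (each occupant stays with probability
    p_overtime); each stayer's duration is exponential with the given
    mean. Both parameter values are illustrative placeholders."""
    rng = random.Random(seed)
    n_staying = sum(rng.random() < p_overtime for _ in range(n_occupants))
    durations = [rng.expovariate(1.0 / mean_hours) for _ in range(n_staying)]
    return n_staying, durations
```

Repeating the draw for each working day yields the stochastic overtime occupancy schedules that the abstract feeds into the building energy model.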
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing parallel CUDA-based implementation for parameter synthesis in this model.
Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation
Energy Technology Data Exchange (ETDEWEB)
Talukdar, Saifullah
2002-07-01
This dissertation deals with (1) experimental measurements on petrophysical, reservoir engineering and morphological properties of Ekofisk chalk, (2) numerical simulation of core flood experiments to analyze and improve relative permeability data, (3) stochastic reconstruction of chalk samples from limited morphological information, (4) extraction of pore space parameters from the reconstructed samples, development of a network model using pore space information, and computation of petrophysical and reservoir engineering properties from the network model, and (5) development of 2D and 3D idealized fractured reservoir models and verification of the applicability of several widely used conventional upscaling techniques in fractured reservoir simulation. Experiments have been conducted on eight Ekofisk chalk samples, and porosity, absolute permeability, formation factor, and oil-water relative permeability, capillary pressure and resistivity index are measured at laboratory conditions. Mercury porosimetry data and backscatter scanning electron microscope images have also been acquired for the samples. A numerical simulation technique involving history matching of the production profiles is employed to improve the relative permeability curves and to analyze hysteresis of the Ekofisk chalk samples. The technique was found to be a powerful tool to compensate for the uncertainties in experimental measurements. Porosity and correlation statistics obtained from backscatter scanning electron microscope images are used to reconstruct microstructures of chalk and particulate media. The reconstruction technique involves a simulated annealing algorithm, which can be constrained by an arbitrary number of morphological parameters. This flexibility of the algorithm is exploited to successfully reconstruct particulate media and chalk samples using more than one correlation function. A technique based on conditional simulated annealing has been introduced for exact reproduction of vuggy
The two-regime method for optimizing stochastic reaction-diffusion simulations
Flegg, M. B.; Chapman, S. J.; Erban, R.
2011-01-01
Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches
Database of Nucleon-Nucleon Scattering Cross Sections by Stochastic Simulation, Phase I
National Aeronautics and Space Administration — A database of nucleon-nucleon elastic differential and total cross sections will be generated by stochastic simulation of the quantum Liouville equation in the...
A constrained approach to multiscale stochastic simulation of chemically reacting systems
Cotter, Simon L.; Zygalakis, Konstantinos C.; Kevrekidis, Ioannis G.; Erban, Radek
2011-01-01
Stochastic simulation of coupled chemical reactions is often computationally intensive, especially if a chemical system contains reactions occurring on different time scales. In this paper, we introduce a multiscale methodology suitable to address
Drawert, Brian; Engblom, Stefan; Hellander, Andreas
2012-06-22
Experiments in silico using stochastic reaction-diffusion models have emerged as an important tool in molecular systems biology. Designing computational software for such applications poses several challenges. Firstly, realistic lattice-based modeling for biological applications requires a consistent way of handling complex geometries, including curved inner and outer boundaries. Secondly, spatiotemporal stochastic simulations are computationally expensive due to the fast time scales of individual reaction and diffusion events when compared to the biological phenomena of actual interest. We therefore argue that simulation software needs to be computationally efficient, employing sophisticated algorithms, yet at the same time flexible in order to meet present and future needs of increasingly complex biological modeling. We have developed URDME, a flexible software framework for general stochastic reaction-transport modeling and simulation. URDME uses unstructured triangular and tetrahedral meshes to resolve general geometries, and relies on the Reaction-Diffusion Master Equation formalism to model the processes under study. An interface to mature external geometry and mesh handling software (Comsol Multiphysics) provides a stable and interactive environment for model construction. The core simulation routines are logically separated from the model building interface and written in a low-level language for computational efficiency. The connection to the geometry handling software is realized via a Matlab interface which facilitates script computing, data management, and post-processing. For practitioners, the software therefore behaves much like an interactive Matlab toolbox. At the same time, it is possible to modify and extend URDME with newly developed simulation routines. Since the overall design effectively hides the complexity of managing the geometry and meshes, this means that newly developed methods may be tested in a realistic setting already at
Parallel discrete-event simulation of FCFS stochastic queueing networks
Nicol, David M.
1988-01-01
Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments), which has proven itself to be effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction on a processor's future behavior, based on an analysis of the processor's simulation state. It is shown how lookahead can be computed for FCFS queueing network simulations, give performance data that demonstrates the method's effectiveness under moderate to heavy loads, and discuss performance tradeoffs between the quality of lookahead, and the cost of computing lookahead.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks
Navarro, María; Le Maître, Olivier; Knio, Omar
2016-01-01
sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity
Project Evaluation and Cash Flow Forecasting by Stochastic Simulation
Directory of Open Access Journals (Sweden)
Odd A. Asbjørnsen
1983-10-01
Full Text Available The net present value of a discounted cash flow is used to evaluate projects. It is shown that the LaPlace transform of the cash flow time function is particularly useful when the cash flow profiles may be approximately described by ordinary linear differential equations in time. However, real cash flows are stochastic variables due to the stochastic nature of the disturbances during production.
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
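The antithetic-variates technique the authors use is independent of the diabetes model and can be shown on a toy integrand: each uniform draw u is paired with its mirror 1 − u, and the pair average has lower variance for monotone functions. This is an illustrative sketch, not the UKPDS 68 model.

```python
import random
import statistics

def mc_estimate(f, n, antithetic=False, seed=0):
    """Estimate E[f(U)] for U ~ Uniform(0,1).

    With antithetic=True, each draw u is paired with 1-u and the pair
    is averaged; the negative correlation between f(u) and f(1-u)
    reduces the sample variance for monotone f."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        u = rng.random()
        samples.append(0.5 * (f(u) + f(1.0 - u)) if antithetic else f(u))
    return statistics.mean(samples), statistics.stdev(samples)
```

The lower per-sample standard deviation is what lets the antithetic run reach a target precision with fewer replications, which is the roughly 53% saving reported above.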
International Nuclear Information System (INIS)
Fu, Jin; Wu, Sheng; Li, Hong; Petzold, Linda R.
2014-01-01
The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy
International Nuclear Information System (INIS)
Marchetti, Luca; Priami, Corrado; Thanh, Vo Hong
2016-01-01
This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating performance and accuracy of HRSSA against other state of the art algorithms.
Wang, Ting; Plecháč, Petr
2017-12-01
Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
Testing the new stochastic neutronic code ANET in simulating safety important parameters
International Nuclear Information System (INIS)
Xenofontos, T.; Delipei, G.-K.; Savva, P.; Varvayanni, M.; Maillard, J.; Silva, J.; Catsaros, N.
2017-01-01
Highlights: • ANET is a new stochastic neutronics code. • Criticality calculations in both subcritical and critical nuclear systems of conventional design were conducted. • Simulations of thermal, lower epithermal and fast neutron fluence rates were performed. • Axial fission rate distributions in standard and MOX fuel pins were computed. - Abstract: ANET (Advanced Neutronics with Evolution and Thermal hydraulic feedback) is a Monte Carlo code under development for simulating both GEN II/III reactors and innovative nuclear reactor designs, based on the high energy physics code GEANT3.21 of CERN. ANET is built through successive extensions of GEANT3.21's applicability, comprising the simulation of particle transport and interaction at low energies, along with support for user-provided libraries and tracking algorithms for energies below 20 MeV, as well as the simulation of elastic and inelastic collision, capture and fission. Testing applications performed throughout the development of ANET have been used to verify the new code's capabilities. In this context, the reliability of ANET in simulating certain reactor parameters important to safety is examined here. More specifically, the reactor criticality as well as the neutron fluence and fission rates are benchmarked and validated. The Portuguese Research Reactor (RPI) after its conversion to low enrichment in U-235 and the OECD/NEA VENUS-2 MOX international benchmark were considered appropriate for the present study, the former providing criticality and neutron flux data and the latter reaction rates. Concerning criticality benchmarking, the subcritical Training Nuclear Reactor of the Aristotle University of Thessaloniki (TNR-AUTh) was also analyzed. The obtained results are compared with experimental data from the critical infrastructures and with computations performed by two different, well-established stochastic neutronics codes, i.e. TRIPOLI-4.8 and MCNP5. Satisfactory agreement
A primer on stochastic epidemic models: Formulation, numerical simulation, and analysis
Directory of Open Access Journals (Sweden)
Linda J.S. Allen
2017-05-01
Full Text Available Some mathematical methods for the formulation and numerical simulation of stochastic epidemic models are presented. Specifically, models are formulated as continuous-time Markov chains and as stochastic differential equations. Some well-known examples, such as an SIR epidemic model and a host-vector malaria model, are used for illustration. Analytical methods for approximating the probability of a disease outbreak are also discussed. Keywords: Branching process, Continuous-time Markov chain, Minor outbreak, Stochastic differential equation, 2000 MSC: 60H10, 60J28, 92D30
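A continuous-time Markov chain SIR model of the kind the primer formulates can be simulated with Gillespie's direct method in a few lines. The event rates (infection at β·S·I/N, recovery at γ·I) follow the standard formulation; the parameter values used in the test are arbitrary.

```python
import random

def sir_ctmc(s0, i0, beta, gamma, n_total, t_end, seed=0):
    """Continuous-time Markov chain SIR via Gillespie's direct method.

    Two event types: infection (S,I) -> (S-1,I+1) at rate beta*S*I/N,
    and recovery I -> I-1 at rate gamma*I. Runs until the epidemic
    dies out (I == 0) or time t_end is reached."""
    rng = random.Random(seed)
    s, i, t = s0, i0, 0.0
    while i > 0 and t < t_end:
        rate_inf = beta * s * i / n_total
        rate_rec = gamma * i
        total = rate_inf + rate_rec
        t += rng.expovariate(total)          # exponential waiting time
        if rng.random() * total < rate_inf:
            s, i = s - 1, i + 1              # infection event
        else:
            i -= 1                           # recovery event
    return s, i
```

Running many such realizations and counting those in which I hits zero quickly gives a simulation-based estimate of the minor-outbreak probability that the analytical branching-process methods approximate.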
Simulation of the stochastic wave loads using a physical modeling approach
DEFF Research Database (Denmark)
Liu, W.F.; Sichani, Mahdi Teimouri; Nielsen, Søren R.K.
2013-01-01
In analyzing stochastic dynamic systems, analysis of the system uncertainty due to randomness in the loads plays a crucial role. Typically, time series of the stochastic loads are simulated using the traditional random-phase method. This approach combined with the fast Fourier transform algorithm makes...... reliability or its uncertainty. Moreover, the applicability of the probability density evolution method to engineering problems faces critical difficulties when the system embeds too many random variables. Hence it is useful to devise a method which can make realizations of the stochastic load processes with low
Directory of Open Access Journals (Sweden)
GERMÁN LOBOS
2015-12-01
Full Text Available ABSTRACT The traditional net present value (NPV) method for analyzing the economic profitability of an investment (based on a deterministic approach) does not adequately represent the implicit risk associated with different but correlated input variables. Using a stochastic simulation approach to evaluate the profitability of blueberry (Vaccinium corymbosum L.) production in Chile, the objective of this study is to illustrate the complexity of including risk in economic feasibility analysis when the project is subject to several correlated risks. The results of the simulation analysis suggest that not including the intratemporal correlation between input variables underestimates the risk associated with investment decisions. The methodological contribution of this study is to illustrate the complexity of the interrelationships between uncertain variables and their impact on the attractiveness of carrying out this type of business in Chile. The steps of the economic viability analysis were as follows. First, adjusted probability distributions for the stochastic input variables (SIV) were simulated and validated. Second, the random values of the SIV were used to calculate random values of variables such as production, revenues, costs, depreciation, taxes and net cash flows. Third, the complete stochastic model was simulated with 10,000 iterations using random values for the SIV. This yielded estimates of the probability distributions of the stochastic output variables (SOV), such as the net present value, internal rate of return, value at risk, average cost of production, contribution margin and return on capital. Fourth, the results of the complete stochastic model simulation were used to analyze alternative scenarios and to provide decision makers with results in the form of probabilities, probability distributions, and probabilistic forecasts for the SOV. The main conclusion is that this project is a profitable alternative investment in fruit trees in
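The first three steps above amount to drawing correlated inputs and pushing them through a cash-flow model. The sketch below induces the correlation with a shared normal factor; every figure (price, yield, costs, discount rate, horizon) is invented for illustration, not taken from the study.

```python
import random
import statistics

def stochastic_npv(n_iter=5000, rate=0.08, corr=0.6, seed=7):
    """Monte Carlo NPV with two correlated stochastic inputs.

    Price and crop yield share a common normal factor z0, so their
    correlation is `corr`. The annual cash flow is drawn once per
    iteration and held constant over a 10-year horizon for simplicity."""
    rng = random.Random(seed)
    npvs = []
    for _ in range(n_iter):
        z0 = rng.gauss(0, 1)                 # shared factor
        z1 = rng.gauss(0, 1)                 # yield-specific factor
        price = 2.0 + 0.3 * z0               # $/kg (illustrative)
        yld = 8000 + 1000 * (corr * z0 + (1 - corr**2) ** 0.5 * z1)  # kg/ha
        cash = price * yld - 9000            # annual net cash flow, $/ha
        npv = -30000 + sum(cash / (1 + rate) ** t for t in range(1, 11))
        npvs.append(npv)
    return statistics.mean(npvs), statistics.stdev(npvs)
```

Reporting the mean together with the spread (or the full histogram) of `npvs` is exactly the probabilistic-forecast output the abstract describes; dropping the shared factor shows how ignoring the correlation shrinks the apparent risk.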
GillesPy: A Python Package for Stochastic Model Building and Simulation.
Abel, John H; Drawert, Brian; Hellander, Andreas; Petzold, Linda R
2016-09-01
GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation engines based on the Gillespie stochastic simulation algorithm (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy-to-understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community.
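The Gillespie direct method that underlies GillesPy's simulation engines fits in a few lines of plain Python. The sketch below simulates a single conversion reaction A → B and is purely illustrative; it does not use GillesPy's actual interface.

```python
import random

def ssa_direct(x0, k, t_end, rng):
    """Gillespie direct method for the single reaction A -> B with propensity k*A."""
    t, a = 0.0, x0
    times, counts = [0.0], [a]
    while a > 0:
        propensity = k * a
        t += rng.expovariate(propensity)   # exponential waiting time to next event
        if t > t_end:
            break
        a -= 1                             # fire A -> B
        times.append(t)
        counts.append(a)
    return times, counts

rng = random.Random(0)
times, counts = ssa_direct(x0=100, k=1.0, t_end=5.0, rng=rng)
```

With more than one reaction, the direct method additionally draws a second uniform variate to pick which reaction fires, weighted by the propensities.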
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic models are regularly applied in genetic regulatory network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches, including stochastic master equations and probabilistic Boolean networks, have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that the stochastic master equation is a fundamental model that can describe the system being investigated in fine detail, but applying this model is computationally very expensive. On the other hand, a probabilistic Boolean network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system, including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on the Zassenhaus formula to represent the exponential of a sum of matrices as a product of matrix exponentials. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results from applying the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to the commonly used stochastic simulation algorithm for equivalent accuracy.
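The key algebraic step, truncating the Zassenhaus product e^{A+B} = e^A e^B e^{-[A,B]/2} ⋯, can be checked numerically on small matrices. The sketch below uses an invented 2×2 example, not the authors' tensor formulation, and shows that keeping the commutator factor reduces the splitting error relative to the plain Lie-Trotter product:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(A, c):
    return [[c * a for a in row] for row in A]

def expm(A, terms=30):
    """Matrix exponential by truncated Taylor series (fine for small matrices)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = mat_scale(mat_mul(term, A), 1.0 / k)
        result = mat_add(result, term)
    return result

def frobenius_diff(A, B):
    return math.sqrt(sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb)))

t = 0.1
A = mat_scale([[0.0, 1.0], [0.0, 0.0]], t)
B = mat_scale([[0.0, 0.0], [1.0, 0.0]], t)

exact = expm(mat_add(A, B))
trotter = mat_mul(expm(A), expm(B))                             # e^A e^B
comm = mat_add(mat_mul(A, B), mat_scale(mat_mul(B, A), -1.0))   # [A, B]
zassenhaus = mat_mul(trotter, expm(mat_scale(comm, -0.5)))      # e^A e^B e^{-[A,B]/2}

err_trotter = frobenius_diff(exact, trotter)
err_zassenhaus = frobenius_diff(exact, zassenhaus)
```

The Lie-Trotter error is O(t²) while the Zassenhaus-corrected product is accurate to O(t³), which the two error values reflect.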
A comparative study of two stochastic mode reduction methods
Energy Technology Data Exchange (ETDEWEB)
Stinis, Panagiotis
2005-09-01
We present a comparative study of two methods for the reduction of the dimensionality of a system of ordinary differential equations that exhibits time-scale separation. Both methods lead to a reduced system of stochastic differential equations. The novel feature of these methods is that they allow the use, in the reduced system, of higher order terms in the resolved variables. The first method, proposed by Majda, Timofeyev and Vanden-Eijnden, is based on an asymptotic strategy developed by Kurtz. The second method is a short-memory approximation of the Mori-Zwanzig projection formalism of irreversible statistical mechanics, as proposed by Chorin, Hald and Kupferman. We present conditions under which the reduced models arising from the two methods should have similar predictive ability. We apply the two methods to test cases that satisfy these conditions. The form of the reduced models and the numerical simulations show that the two methods have similar predictive ability, as expected.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
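The variance decomposition rests on standard Sobol estimators. The sketch below applies a pick-freeze estimator to an invented deterministic toy function with a known first-order index; it illustrates the estimator only, not the paper's treatment of stochastic reaction channels:

```python
import random
import statistics

def f(x1, x2):
    """Toy model: for f = a*x1 + b*x2 the first-order indices are a^2/(a^2+b^2), b^2/(a^2+b^2)."""
    a, b = 2.0, 1.0
    return a * x1 + b * x2

def sobol_first_order(n=100_000, seed=0):
    """Pick-freeze estimate of S1: freeze x1, resample x2, take Cov(y, y') / Var(y)."""
    rng = random.Random(seed)
    x1 = [rng.gauss(0, 1) for _ in range(n)]
    x2 = [rng.gauss(0, 1) for _ in range(n)]
    x2b = [rng.gauss(0, 1) for _ in range(n)]   # fresh x2 sample, x1 kept ("frozen")
    y = [f(a, b) for a, b in zip(x1, x2)]
    y_fix1 = [f(a, b) for a, b in zip(x1, x2b)]
    mu = statistics.mean(y)
    var = statistics.pvariance(y)
    cov = sum((u - mu) * (v - mu) for u, v in zip(y, y_fix1)) / n
    return cov / var

s1 = sobol_first_order()   # theoretical value: 4/5 = 0.8
```

For a stochastic simulator, the same construction applies once the inherent noise is represented by explicit random objects (here, the Poisson processes of the random-time-change representation) that can be frozen or resampled like any other input.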
Stochastic Simulation of Soot Formation Evolution in Counterflow Diffusion Flames
Directory of Open Access Journals (Sweden)
Xiao Jiang
2018-01-01
Full Text Available Soot generally refers to carbonaceous particles formed during incomplete combustion of hydrocarbon fuels. A typical simulation of soot formation and evolution contains two parts: gas chemical kinetics, which models the chemical reactions from hydrocarbon fuels to soot precursors, that is, polycyclic aromatic hydrocarbons or PAHs, and soot dynamics, which models the soot formation from PAHs and evolution due to gas-soot and soot-soot interactions. In this study, two detailed gas kinetic mechanisms (ABF and KM2 have been compared during the simulation (using the solver Chemkin II of ethylene combustion in counterflow diffusion flames. Subsequently, the operator-splitting Monte Carlo method is used to simulate the soot dynamics. The simulated data from both mechanisms, for gas and for soot particles, are compared with experimental data available in the literature. It is found that both mechanisms predict similar profiles for the gas temperature and velocity, agreeing well with measurements. However, the KM2 mechanism provides much closer predictions of the measured soot gas precursors. Furthermore, KM2 also shows much better predictions for soot number density and volume fraction than ABF. The effect of nozzle exit velocity on soot dynamics has also been investigated. Higher nozzle exit velocity renders shorter residence time for soot particles, which reduces the soot number density and volume fraction accordingly.
STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB.
Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K
2011-04-15
The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new software tool STOCHSIMGPU that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/chemical reaction systems and show that significant gains in efficiency can be made. It is integrated into MATLAB and works with the Systems Biology Toolbox 2 (SBTOOLBOX2) for MATLAB. The GPU-based parallel implementation of the Gillespie stochastic simulation algorithm (SSA), the logarithmic direct method (LDM) and the next reaction method (NRM) is approximately 85 times faster than the sequential implementation of the NRM on a central processing unit (CPU). Using our software does not require any changes to the user's models, since it acts as a direct replacement of the stochastic simulation software of the SBTOOLBOX2. The software is open source under the GPL v3 and available at http://www.maths.ox.ac.uk/cmb/STOCHSIMGPU. The web site also contains supplementary information. klingbeil@maths.ox.ac.uk Supplementary data are available at Bioinformatics online.
A higher-order numerical framework for stochastic simulation of chemical reaction systems.
Székely, Tamás
2012-07-15
BACKGROUND: In this paper, we present a framework for improving the accuracy of fixed-step methods for Monte Carlo simulation of discrete stochastic chemical kinetics. Stochasticity is ubiquitous in many areas of cell biology, for example in gene regulation, biochemical cascades and cell-cell interaction. However most discrete stochastic simulation techniques are slow. We apply Richardson extrapolation to the moments of three fixed-step methods, the Euler, midpoint and θ-trapezoidal τ-leap methods, to demonstrate the power of stochastic extrapolation. The extrapolation framework can increase the order of convergence of any fixed-step discrete stochastic solver and is very easy to implement; the only condition for its use is knowledge of the appropriate terms of the global error expansion of the solver in terms of its stepsize. In practical terms, a higher-order method with a larger stepsize can achieve the same level of accuracy as a lower-order method with a smaller one, potentially reducing the computational time of the system. RESULTS: By obtaining a global error expansion for a general weak first-order method, we prove that extrapolation can increase the weak order of convergence for the moments of the Euler and the midpoint τ-leap methods, from one to two. This is supported by numerical simulations of several chemical systems of biological importance using the Euler, midpoint and θ-trapezoidal τ-leap methods. In almost all cases, extrapolation results in an improvement of accuracy. As in the case of ordinary and stochastic differential equations, extrapolation can be repeated to obtain even higher-order approximations. CONCLUSIONS: Extrapolation is a general framework for increasing the order of accuracy of any fixed-step stochastic solver. This enables the simulation of complicated systems in less time, allowing for more realistic biochemical problems to be solved.
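The extrapolation idea can be demonstrated on the Euler τ-leap method for a single decay reaction, where the exact mean is known. The example below is an invented minimal case, not the paper's full framework: it combines a coarse and a fine stepsize to cancel the leading bias in the mean, raising the weak order from one to two.

```python
import math
import random

def poisson(lam, rng):
    """Poisson variate by Knuth's method; adequate for the modest rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def euler_tau_leap_mean(x0, c, t_end, tau, n_paths, rng):
    """Ensemble mean of X(t_end) for the decay X -> 0 (propensity c*X), Euler tau-leap."""
    total = 0
    steps = round(t_end / tau)
    for _ in range(n_paths):
        x = x0
        for _ in range(steps):
            # Poisson number of firings over [t, t+tau) at the frozen propensity c*x
            k = poisson(c * x * tau, rng)
            x = max(x - k, 0)
        total += x
    return total / n_paths

rng = random.Random(7)
x0, c, t_end = 100, 1.0, 1.0
coarse = euler_tau_leap_mean(x0, c, t_end, tau=0.25, n_paths=10_000, rng=rng)
fine = euler_tau_leap_mean(x0, c, t_end, tau=0.125, n_paths=10_000, rng=rng)
extrapolated = 2 * fine - coarse    # Richardson extrapolation of the mean
exact = x0 * math.exp(-c * t_end)   # exact mean, about 36.79
```

The coarse estimate is biased toward (1 − cτ)^{t/τ}·x0 ≈ 31.6, the fine one toward ≈ 34.4, while the extrapolated combination lands within Monte Carlo noise of the exact value.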
Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K.
2012-01-01
We explore two different threading approaches on a graphics processing unit (GPU) exploiting two different characteristics of the current GPU architecture. The fat thread approach tries to minimize data access time by relying on shared memory and registers potentially sacrificing parallelism. The thin thread approach maximizes parallelism and tries to hide access latencies. We apply these two approaches to the parallel stochastic simulation of chemical reaction systems using the stochastic simulation algorithm (SSA) by Gillespie [14]. In these cases, the proposed thin thread approach shows comparable performance while eliminating the limitation of the reaction system's size. © 2006 IEEE.
Analytical vs. Simulation Solution Techniques for Pulse Problems in Non-linear Stochastic Dynamics
DEFF Research Database (Denmark)
Iwankiewicz, R.; Nielsen, Søren R. K.
Advantages and disadvantages of available analytical and simulation techniques for pulse problems in non-linear stochastic dynamics are discussed. First, random pulse problems, both those which do and do not lead to Markov theory, are presented. Next, the analytical and analytically-numerical tec...
Stochastic simulation of PWR vessel integrity for pressurized thermal shock conditions
International Nuclear Information System (INIS)
Jackson, P.S.; Moelling, D.S.
1984-01-01
A stochastic simulation methodology is presented for performing probabilistic analyses of Pressurized Water Reactor vessel integrity. Application of the methodology to vessel-specific integrity analyses is described in the context of Pressurized Thermal Shock (PTS) conditions. A Bayesian method is described for developing vessel-specific models of the density of undetected volumetric flaws from ultrasonic in-service inspection results. Uncertainty limits on the probabilistic results due to sampling errors are determined from the results of the stochastic simulation. An example is provided to illustrate the methodology.
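One common conjugate form of such a Bayesian flaw-density update (a plausible reading, not necessarily the paper's exact model) places a Gamma prior on the volumetric flaw rate and models the inspection count as Poisson, thinned by the probability of detection. All numbers below are hypothetical.

```python
def gamma_poisson_update(alpha, beta, flaws_found, volume, pod):
    """Posterior Gamma(alpha', beta') for flaw density (flaws per unit volume).

    Prior: density ~ Gamma(alpha, beta). Inspecting `volume` with probability
    of detection `pod` finds `flaws_found` flaws, modeled as
    Poisson(density * volume * pod). Conjugacy gives the closed-form update.
    """
    return alpha + flaws_found, beta + volume * pod

alpha, beta = 2.0, 1.0   # prior mean 2 flaws per unit volume (hypothetical)
a_post, b_post = gamma_poisson_update(alpha, beta, flaws_found=3, volume=5.0, pod=0.8)
posterior_mean = a_post / b_post   # (2 + 3) / (1 + 4) = 1.0
```

Samples from the posterior Gamma can then feed directly into the stochastic vessel-integrity simulation as the flaw-density input.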
Simulation of conditional diffusions via forward-reverse stochastic representations
Bayer, Christian
2015-01-01
We derive stochastic representations for the finite dimensional distributions of a multidimensional diffusion on a fixed time interval, conditioned on the terminal state. The conditioning can be with respect to a fixed measurement point or more generally with respect to some subset. The representations rely on a reverse process connected with the given (forward) diffusion as introduced by Milstein, Schoenmakers and Spokoiny in the context of density estimation. The corresponding Monte Carlo estimators have essentially root-N accuracy, and hence they do not suffer from the curse of dimensionality. We also present an application in statistics, in the context of the EM algorithm.
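For plain Brownian motion, conditioning on the terminal state has a closed-form solution, the Brownian bridge. The sketch below shows this much simpler special case, not the forward-reverse representation of the paper:

```python
import random

def brownian_bridge(x0, xT, T, n_steps, rng):
    """Sample a Brownian path on [0, T] conditioned to hit xT at time T.

    Sequential construction: given X(t) = x, the next point X(t + dt) is
    Gaussian with mean interpolating toward the pinned endpoint and variance
    shrinking to zero as t approaches T.
    """
    dt = T / n_steps
    path = [x0]
    x, t = x0, 0.0
    for _ in range(n_steps):
        remaining = T - t
        mean = x + (xT - x) * dt / remaining
        var = dt * (remaining - dt) / remaining
        x = rng.gauss(mean, var ** 0.5)
        t += dt
        path.append(x)
    path[-1] = xT   # last step has zero variance; pin the endpoint exactly
    return path

rng = random.Random(3)
path = brownian_bridge(x0=0.0, xT=2.0, T=1.0, n_steps=100, rng=rng)
```

For general nonlinear diffusions no such closed form exists, which is what motivates the reverse-process representations of the abstract.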
Neural network stochastic simulation applied for quantifying uncertainties
Directory of Open Access Journals (Sweden)
N Foudil-Bey
2016-09-01
Full Text Available Geostatistical simulation methods are generally used to generate several realizations of physical properties in the subsurface; these methods are based on variogram analysis and are limited to measuring correlation between variables at two locations only. In this paper, we propose a simulation of properties based on a neural network trained, in a supervised fashion, on the existing drilling data set. The major advantage is that this method does not require a preliminary geostatistical study and takes several points into account. As a result, geological information and diverse geophysical data can easily be combined. To do this, we used a feed-forward neural network with a multi-layer perceptron architecture, and the back-propagation algorithm with the conjugate gradient technique to minimize the error of the network output. The learning process creates links between the different variables; this relationship can be used for interpolation of the properties on the one hand, or to generate several possible distributions of the physical properties on the other, by changing each time the random values of the input neurons, which are then kept constant throughout the learning period. This method was tested on real data to simulate multiple realizations of the density and the magnetic susceptibility in three dimensions at the Val-d'Or mining camp, Québec (Canada).
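The weight-update machinery can be shown in miniature with a single-layer perceptron on a linearly separable toy target. This is a drastic simplification of the multi-layer, conjugate-gradient network described in the abstract:

```python
def train_perceptron(samples, lr=0.1, max_epochs=100):
    """Perceptron learning rule on (inputs, label) pairs with labels in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            if err:
                # Move the decision boundary toward the misclassified point
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
                errors += 1
        if errors == 0:   # converged: every sample classified correctly
            break
    return w, b

# Logical AND: linearly separable, so the rule is guaranteed to converge.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Multi-layer networks replace the hard threshold with differentiable activations so that errors can be back-propagated through hidden layers, as in the study.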
Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao
2016-03-01
Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. A Markov chain model with a transition probability matrix was adopted to reconstruct hydrofacies structures and derive spatial deposit information. Geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastically simulated hydrofacies model reflects the sedimentary features, with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling.
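Drawing facies sequences from a transition probability matrix is the core of the Markov chain step. The sketch below simulates a one-dimensional facies column from an invented three-facies matrix; the matrix values are illustrative, not those of the Chaobai study:

```python
import random

FACIES = ["gravel", "sand", "clay"]
# Hypothetical vertical transition probability matrix (each row sums to 1).
P = [[0.6, 0.3, 0.1],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]

def simulate_column(n_cells, start, rng):
    """Simulate a 1-D vertical hydrofacies column from the Markov chain."""
    state = start
    column = [state]
    for _ in range(n_cells - 1):
        u, cum = rng.random(), 0.0
        for j, p in enumerate(P[state]):   # inverse-CDF draw of the next facies
            cum += p
            if u < cum:
                state = j
                break
        column.append(state)
    return column

rng = random.Random(11)
column = simulate_column(50_000, start=0, rng=rng)
freq = [column.count(s) / len(column) for s in range(3)]
```

Over a long column the empirical facies proportions approach the chain's stationary distribution (here 2/7, 3/7, 2/7), which is how observed facies proportions constrain the fitted matrix in practice.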
Liang, Faming
2014-04-03
Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford to use this much CPU time. This article proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, for example, a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein-folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors. Supplementary materials for this article are available online.
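The square-root cooling schedule is easy to exercise on a toy objective. The sketch below is plain simulated annealing with T_k = t0/√k on an invented one-dimensional function, not the authors' combined stochastic-approximation algorithm:

```python
import math
import random

def anneal(f, x0, n_iters, rng, step=0.5, t0=5.0):
    """Simulated annealing with the square-root cooling schedule T_k = t0 / sqrt(k)."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(1, n_iters + 1):
        temp = t0 / math.sqrt(k)
        y = x + rng.gauss(0, step)          # random-walk proposal
        fy = f(y)
        # Metropolis acceptance at the current temperature
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / temp):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

rng = random.Random(2)
best_x, best_f = anneal(lambda x: x * x, x0=5.0, n_iters=5000, rng=rng)
```

The point of the paper is that, when the acceptance step is embedded in a stochastic-approximation framework, this much faster cooling can still be shown to reach the global optima as the temperature tends to zero.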
A constrained approach to multiscale stochastic simulation of chemically reacting systems
Cotter, Simon L.
2011-01-01
Stochastic simulation of coupled chemical reactions is often computationally intensive, especially if a chemical system contains reactions occurring on different time scales. In this paper, we introduce a multiscale methodology suitable to address this problem, assuming that the evolution of the slow species in the system is well approximated by a Langevin process. It is based on the conditional stochastic simulation algorithm (CSSA) which samples from the conditional distribution of the suitably defined fast variables, given values for the slow variables. In the constrained multiscale algorithm (CMA) a single realization of the CSSA is then used for each value of the slow variable to approximate the effective drift and diffusion terms, in a similar manner to the constrained mean-force computations in other applications such as molecular dynamics. We then show how using the ensuing Fokker-Planck equation approximation, we can in turn approximate average switching times in stochastic chemical systems. © 2011 American Institute of Physics.
Kolars, Kelsey A.; Vecchia, Aldo V.; Ryberg, Karen R.
2016-02-24
The Souris River Basin is a 61,000-square-kilometer basin in the Provinces of Saskatchewan and Manitoba and the State of North Dakota. In May and June of 2011, record-setting rains fell in the headwater areas of the basin. Emergency spillways of major reservoirs were discharging at full or nearly full capacity, and extensive flooding occurred in numerous downstream communities. To determine the probability of future extreme floods and droughts, the U.S. Geological Survey, in cooperation with the North Dakota State Water Commission, developed a stochastic model for simulating Souris River Basin precipitation, evapotranspiration, and natural (unregulated) streamflow. Simulations from the model can be used in future studies to simulate regulated streamflow, design levees and other structures, and complete economic cost/benefit analyses. Long-term climatic variability was analyzed using tree-ring chronologies to hindcast precipitation to the early 1700s and compare recent wet and dry conditions to earlier extreme conditions. The extended precipitation record was consistent with findings from the Devils Lake and Red River of the North Basins (southeast of the Souris River Basin), supporting the idea that regional climatic patterns for many centuries have consisted of alternating wet and dry climate states. A stochastic climate simulation model for precipitation, temperature, and potential evapotranspiration for the Souris River Basin was developed using recorded meteorological data and extended precipitation records provided through tree-ring analysis. A significant climate transition was seen around 1970, with 1912–69 representing a dry climate state and 1970–2011 representing a wet climate state. Although there were some distinct subpatterns within the basin, the predominant differences between the two states were higher spring through early fall precipitation and higher spring potential evapotranspiration for the wet state compared to the dry state. A water
Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist.
Directory of Open Access Journals (Sweden)
Brian Drawert
2016-12-01
Full Text Available We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.
Multivariate stochastic simulation with subjective multivariate normal distributions
P. J. Ince; J. Buongiorno
1991-01-01
In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
Modeling Group Perceptions Using Stochastic Simulation: Scaling Issues in the Multiplicative AHP
DEFF Research Database (Denmark)
Barfod, Michael Bruhn; van den Honert, Robin; Salling, Kim Bang
2016-01-01
This paper proposes a new decision support approach for applying stochastic simulation to the multiplicative analytic hierarchy process (AHP) in order to deal with issues concerning the scale parameter. The paper suggests a new approach that captures the influence from the scale parameter by maki...
DEFF Research Database (Denmark)
Debrabant, Kristian; Samaey, Giovanni; Zieliński, Przemysław
2017-01-01
We present and analyse a micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations with separation between the (fast) time-scale of individual trajectories and the (slow) time-scale of the macroscopic function of interest. The algorithm combines short...
Bewley, J.M.; Boehlje, M.D.; Gray, A.W.; Hogeveen, H.; Kenyon, S.J.; Eicher, S.D.; Schutz, M.M.
2010-01-01
Purpose – The purpose of this paper is to develop a dynamic, stochastic, mechanistic simulation model of a dairy business to evaluate the cost and benefit streams coinciding with technology investments. The model was constructed to embody the biological and economical complexities of a dairy farm
Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion
Li, Z.; Ghaith, M.
2017-12-01
Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Sequentially, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
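The Hermite-polynomial machinery behind such an expansion can be sketched for a scalar function of one standard normal input. The example below estimates the coefficients by Monte Carlo rather than probabilistic collocation, and uses f(x) = eˣ, whose coefficients are known exactly; both are simplifications relative to the study.

```python
import math
import random

def hermite_e(k, x):
    """Probabilists' Hermite polynomial He_k(x) via the three-term recurrence."""
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0
    return h1

def pce_coefficients(f, order, n_samples, rng):
    """Coefficients c_k of f(X) = sum_k c_k He_k(X), X ~ N(0,1), by Monte Carlo.

    Orthogonality E[He_j He_k] = k! * delta_jk gives c_k = E[f(X) He_k(X)] / k!.
    """
    xs = [rng.gauss(0, 1) for _ in range(n_samples)]
    coeffs = []
    for k in range(order + 1):
        m = sum(f(x) * hermite_e(k, x) for x in xs) / n_samples
        coeffs.append(m / math.factorial(k))
    return coeffs

rng = random.Random(5)
coeffs = pce_coefficients(math.exp, order=3, n_samples=200_000, rng=rng)
# Exact coefficients for f = exp: c_k = e^{1/2} / k!
```

Once the coefficients are in hand, the mean is c₀ and the variance is Σ_{k≥1} c_k² k!, which is what makes the expansion such a cheap proxy for uncertainty propagation.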
Explicit calibration and simulation of stochastic fields by low-order ARMA processes
DEFF Research Database (Denmark)
Krenk, Steen
2011-01-01
A simple framework for autoregressive simulation of stochastic fields is presented. The autoregressive format leads to a simple exponential correlation structure in the time-dimension. In the case of scalar processes a more detailed correlation structure can be obtained by adding memory...... to the process via an extension to autoregressive moving average (ARMA) processes. The ARMA format incorporates a more detailed correlation structure by including previous values of the simulated process. Alternatively, a more detailed correlation structure can be obtained by including additional 'state......-space' variables in the simulation. For a scalar process this would imply an increase of the dimension of the process to be simulated. In the case of a stochastic field the correlation in the time-dimension is represented, although indirectly, in the simultaneous spatial correlation. The model with the shortest...
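The AR(1) core of such a simulation, with its exponential correlation structure, fits in a few lines; the parameter values below are illustrative:

```python
import random

def simulate_ar1(phi, sigma, n, rng):
    """AR(1): x_t = phi * x_{t-1} + e_t, giving autocorrelation rho(k) = phi^k."""
    x = 0.0
    out = []
    for _ in range(n):
        x = phi * x + rng.gauss(0, sigma)
        out.append(x)
    return out

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation."""
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in xs)
    return num / den

rng = random.Random(9)
series = simulate_ar1(phi=0.8, sigma=1.0, n=100_000, rng=rng)
r1 = lag1_autocorr(series)   # close to phi = 0.8
```

Extending the update to include previous noise terms (the moving-average part) or extra state-space variables gives the richer correlation structures discussed in the abstract.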
Simulation of quantum dynamics based on the quantum stochastic differential equation.
Li, Ming
2013-01-01
The quantum stochastic differential equation derived from the Lindblad-form quantum master equation is investigated. The general formulation in terms of environment operators representing the quantum state diffusion is given. A numerical simulation algorithm for the stochastic process of direct photodetection of a driven two-level system is proposed for predicting the dynamical behavior. The effectiveness and superiority of the algorithm are verified by analyzing its accuracy and computational cost in comparison with the classical Runge-Kutta algorithm.
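The workhorse integrator for stochastic differential equations of this kind is the Euler-Maruyama scheme. The sketch below applies it to a classical Ornstein-Uhlenbeck process with a known ensemble mean, not to the photodetection model of the abstract:

```python
import math
import random

def euler_maruyama_ou(x0, theta, sigma, t_end, dt, rng):
    """Euler-Maruyama for the Ornstein-Uhlenbeck SDE dX = -theta*X dt + sigma dW."""
    x = x0
    steps = round(t_end / dt)
    for _ in range(steps):
        # Drift step plus a Gaussian increment scaled by sqrt(dt)
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
    return x

rng = random.Random(4)
n_paths = 5_000
finals = [euler_maruyama_ou(1.0, theta=1.0, sigma=0.5, t_end=1.0, dt=0.01, rng=rng)
          for _ in range(n_paths)]
mean_final = sum(finals) / n_paths   # exact ensemble mean is e^{-1}, about 0.368
```

Quantum trajectory methods integrate an SDE of the same general shape for the stochastic wave function, with the photodetection record entering through jump or diffusion terms.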
Stochastic-Strength-Based Damage Simulation of Ceramic Matrix Composite Laminates
Nemeth, Noel N.; Mital, Subodh K.; Murthy, Pappu L. N.; Bednarcyk, Brett A.; Pineda, Evan J.; Bhatt, Ramakrishna T.; Arnold, Steven M.
2016-01-01
The Finite Element Analysis-Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program was used to characterize and predict the progressive damage response of silicon-carbide-fiber-reinforced reaction-bonded silicon nitride matrix (SiC/RBSN) composite laminate tensile specimens. Studied were unidirectional laminates [0]₈, [10]₈, [45]₈, and [90]₈; cross-ply laminates [0₂/90₂]s; angle-ply laminates [+45₂/-45₂]s; double-edge-notched [0]₈ laminates; and central-hole laminates. Results correlated well with the experimental data. This work was performed as a validation and benchmarking exercise of the FEAMAC/CARES program. FEAMAC/CARES simulates stochastic-based, discrete-event progressive damage of ceramic matrix composite and polymer matrix composite material structures. It couples three software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC, and Abaqus is used to model the overall composite structure. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events that incrementally progress until ultimate structural failure.
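The stochastic-strength principle, random element strengths producing discrete damage events, can be illustrated with a classic equal-load-sharing fiber bundle with Weibull strengths. This toy model is far simpler than FEAMAC/CARES but uses the same random-strength idea; all parameters are invented.

```python
import random

def bundle_strength(n_fibers, scale, shape, rng):
    """Peak load of an equal-load-sharing bundle of brittle fibers.

    Each fiber gets a Weibull-distributed strength. With strengths sorted
    ascending, when the i-th weakest fiber breaks the surviving (n - i)
    fibers each carry that stress, so the bundle strength is the maximum
    load sustained over the failure sequence.
    """
    strengths = sorted(rng.weibullvariate(scale, shape) for _ in range(n_fibers))
    return max(s * (n_fibers - i) for i, s in enumerate(strengths))

rng = random.Random(6)
load = bundle_strength(n_fibers=1000, scale=1.0, shape=5.0, rng=rng)
per_fiber = load / 1000   # bundle strength per fiber
```

Repeating the trial yields a distribution of bundle strengths, the toy analogue of running many FEAMAC/CARES trials and collecting the scatter in ultimate failure load.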
STOCHASTIC SIMULATION FOR BUFFELGRASS (Cenchrus ciliaris L.) PASTURES IN MARIN, N. L., MEXICO
José Romualdo Martínez-López; Erasmo Gutiérrez-Ornelas; Miguel Angel Barrera-Silva; Rafael Retes-López
2014-01-01
A stochastic simulation model was constructed to determine the response of net primary production of buffelgrass (Cenchrus ciliaris L.) and its dry matter intake by cattle in Marín, NL, México. Buffelgrass is very important for the extensive livestock industry in arid and semiarid areas of northeastern Mexico. The objective of this experiment was to evaluate the behavior of the model by comparing its results with those reported in the literature. The model simulates the monthly production of...
Energy Technology Data Exchange (ETDEWEB)
Araujo, Leonardo Rodrigues de [Instituto Federal do Espirito Santo, Vitoria, ES (Brazil)], E-mail: leoaraujo@ifes.edu.br; Donatelli, Joao Luiz Marcon [Universidade Federal do Espirito Santo (UFES), Vitoria, ES (Brazil)], E-mail: joaoluiz@npd.ufes.br; Silva, Edmar Alino da Cruz [Instituto Tecnologico de Aeronautica (ITA/CTA), Sao Jose dos Campos, SP (Brazil); Azevedo, Joao Luiz F. [Instituto de Aeronautica e Espaco (CTA/IAE/ALA), Sao Jose dos Campos, SP (Brazil)
2010-07-01
Thermal systems are essential in facilities such as thermoelectric plants, cogeneration plants, refrigeration systems and air conditioning, among others, in which much of the energy consumed by humanity is processed. In a world with finite natural sources of fuels and growing energy demand, issues related to thermal system design, such as cost estimation, design complexity, environmental protection and optimization, are becoming increasingly important. Hence the need to understand the mechanisms that degrade energy, to improve the use of energy sources, to reduce environmental impacts, and to reduce project, operation and maintenance costs. In recent years, a consistent development of procedures and techniques for the computational design of thermal systems has occurred. In this context, the fundamental objective of this study is a comparative analysis of the performance of structural and parametric optimization of a cogeneration system using two stochastic methods: a genetic algorithm and simulated annealing. This research work uses a superstructure, modelled in a process simulator, IPSEpro of SimTech, in which the appropriate design options for the case studied are included. Accordingly, the optimal configuration of the cogeneration system is determined as a consequence of the optimization process, restricted to the configuration options included in the superstructure. The optimization routines are written in MS Excel Visual Basic, so that they couple seamlessly to the process simulator. At the end of the optimization process, the optimal system configuration, given the characteristics of each specific problem, should be defined. (author)
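Simulated annealing, one of the two stochastic optimizers named in the abstract above, is a standard technique. As a hedged illustration (unrelated to the paper's IPSEpro/Visual Basic implementation), a minimal annealing loop on a toy one-dimensional cost function might look like this; all parameter values here are illustrative defaults, not the authors':

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=42):
    """Minimize a 1-D cost function with a basic geometric cooling schedule."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)        # random neighbor
        fc = cost(cand)
        # always accept improvements; accept worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                                # geometric cooling
    return best_x, best_f

# toy cost with a single minimum at x = 3
xmin, fmin = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=-10.0)
```

As the temperature decays, the acceptance rule degenerates to greedy hill-climbing, which is why tracking the best-so-far point separately from the current point is good practice.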
GillespieSSA: Implementing the Gillespie Stochastic Simulation Algorithm in R
Directory of Open Access Journals (Sweden)
Mario Pineda-Krch
2008-02-01
The deterministic dynamics of populations in continuous time are traditionally described using coupled, first-order ordinary differential equations. While this approach is accurate for large systems, it is often inadequate for small systems where key species may be present in small numbers or where key reactions occur at a low rate. The Gillespie stochastic simulation algorithm (SSA) is a procedure for generating time-evolution trajectories of finite populations in continuous time and has become the standard algorithm for these types of stochastic models. This article presents a simple-to-use and flexible framework for implementing the SSA using the high-level statistical computing language R and the package GillespieSSA. Using three ecological models as examples (logistic growth, the Rosenzweig-MacArthur predator-prey model, and the Kermack-McKendrick SIRS metapopulation model), this paper shows how a deterministic model can be formulated as a finite-population stochastic model within the framework of SSA theory and how it can be implemented in R. Simulations of the stochastic models are performed using four different SSA Monte Carlo methods: one exact method (Gillespie's direct method) and three approximate methods (the explicit, binomial, and optimized tau-leap methods). Comparison of simulation results confirms that while the time-evolution trajectories obtained from the different SSA methods are indistinguishable, the approximate methods are up to four orders of magnitude faster than the exact method.
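The direct method mentioned above takes only a few lines to sketch. The following Python sketch (the GillespieSSA package itself is in R) implements Gillespie's direct method for a finite-population logistic birth-death model; the rate expressions and parameter values are illustrative choices, not taken from the package:

```python
import random

def gillespie_logistic(b=2.0, d=1.0, K=100, n0=10, t_end=20.0, seed=1):
    """Gillespie direct method for a stochastic logistic birth-death process:
    birth propensity b*n, density-dependent death propensity d*n + (b - d)*n**2/K."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    times, pops = [t], [n]
    while t < t_end and n > 0:
        birth = b * n
        death = d * n + (b - d) * n * n / K
        total = birth + death
        t += rng.expovariate(total)                      # waiting time ~ Exp(total rate)
        n += 1 if rng.random() < birth / total else -1   # choose which reaction fires
        times.append(t)
        pops.append(n)
    return times, pops

times, pops = gillespie_logistic()
```

The two draws per event (an exponential waiting time and a categorical reaction choice) are exactly the structure of the direct method; trajectories fluctuate around the carrying capacity K instead of converging to it as the ODE would.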
Stochastic stresses in granular matter simulated by dripping identical ellipses into plane silo
DEFF Research Database (Denmark)
Berntsen, Kasper Nikolaj; Ditlevsen, Ove Dalager
2000-01-01
A two-dimensional silo pressure model-problem is investigated by molecular dynamics simulations. A plane silo container is filled by a granular matter consisting of congruent elliptic particles dropped one by one into the silo. A suitable energy-absorbing contact force mechanism is activated during… the granular matter in the silo are compared to the solution of a stochastic equilibrium differential equation. In this equation the stochasticity source is a homogeneous white noise gamma-distributed side pressure factor field along the walls. This is a generalization of the deterministic side pressure factor… proposed by Janssen in 1895. The stochastic Janssen factor model is shown to be fairly consistent with the observations, from which the mean and the intensity of the white noise are estimated by the method of maximum likelihood using the properties of the gamma distribution. Two wall friction coefficients…
Adaptive Finite Element Method Assisted by Stochastic Simulation of Chemical Systems
Cotter, Simon L.; Vejchodský, Tomáš; Erban, Radek
2013-01-01
Stochastic models of chemical systems are often analyzed by solving the corresponding Fokker-Planck equation, which is a drift-diffusion partial differential equation for the probability distribution function. Efficient numerical solution of the Fokker-Planck equation requires adaptive mesh refinements. In this paper, we present a mesh refinement approach which makes use of a stochastic simulation of the underlying chemical system. By observing the stochastic trajectory for a relatively short amount of time, the areas of the state space with nonnegligible probability density are identified. By refining the finite element mesh in these areas, and coarsening elsewhere, a suitable mesh is constructed and used for the computation of the stationary probability density. Numerical examples demonstrate that the presented method is competitive with existing a posteriori methods. © 2013 Society for Industrial and Applied Mathematics.
Stochastic Processes and Queueing Theory used in Cloud Computer Performance Simulations
Directory of Open Access Journals (Sweden)
Florin-Catalin ENACHE
2015-10-01
The cloud business has grown exponentially over the last 5 years. Capacity managers need a practical way to simulate the random demands a cloud infrastructure could face, even though there are not many mathematical tools for simulating such demands. This paper presents an introduction to the most important stochastic processes and queueing theory concepts used for modeling computer performance. Moreover, it shows the cases where such concepts are and are not applicable, using clear programming examples of how to simulate a queue, and how to use and validate a simulation when there are no mathematical concepts to back it up.
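A concrete instance of the kind of queue simulation the abstract describes is an event-driven M/M/1 queue, for which a closed-form result exists to validate against. This sketch is illustrative (the paper's own examples, language, and parameters are not specified here); the queue mean sojourn time W = 1/(mu - lam) provides the consistency check:

```python
import random

def simulate_mm1(lam=0.8, mu=1.0, n_jobs=200_000, seed=7):
    """Event-driven FIFO M/M/1 queue; returns the mean time in system (sojourn)."""
    rng = random.Random(seed)
    t_arrival = 0.0
    server_free_at = 0.0
    total_sojourn = 0.0
    for _ in range(n_jobs):
        t_arrival += rng.expovariate(lam)        # Poisson arrival process
        start = max(t_arrival, server_free_at)   # wait if the server is busy
        server_free_at = start + rng.expovariate(mu)   # exponential service time
        total_sojourn += server_free_at - t_arrival
    return total_sojourn / n_jobs

# analytical M/M/1 mean sojourn time: W = 1/(mu - lam) = 5.0 for these rates
w = simulate_mm1()
```

Comparing the simulated mean against the analytical W is exactly the kind of validation step the abstract recommends when mathematical backing is available.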
Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah
2018-04-01
This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation, using the Dykstra-Parsons coefficient (V_DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). Many studies have reported that the upstream mobility weighting schemes commonly used in conventional numerical reservoir simulators do not accurately capture immiscible displacement shocks and discontinuities in stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and the monotone upstream-centered scheme for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order scheme results match well with the Buckley-Leverett (BL) analytical solution, without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative numerical study of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. The impact of autocorrelation lengths on immiscible fluid flow transport was also analyzed and quantified. A finite number of lines used in the TBM resulted in visual…
Energy Technology Data Exchange (ETDEWEB)
Dunn, Aaron [Sandia National Laboratories, Albuquerque, 87185 NM (United States); George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, 30332 GA (United States); Muntifering, Brittany [Sandia National Laboratories, Albuquerque, 87185 NM (United States); Northwestern University, Chicago, 60208 IL (United States); Dingreville, Rémi; Hattar, Khalid [Sandia National Laboratories, Albuquerque, 87185 NM (United States); Capolungo, Laurent, E-mail: laurent@lanl.gov [George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, 30332 GA (United States); Material Science and Technology Division, MST-8, Los Alamos National Laboratory, Los Alamos, 87545 NM (United States)
2016-11-15
Charged particle irradiation is a frequently used experimental tool to study damage accumulation in metals expected during neutron irradiation. Understanding the correspondence between displacement rate and temperature during such studies is one of several factors that must be taken into account in order to design experiments that produce equivalent damage accumulation to neutron damage conditions. In this study, spatially resolved stochastic cluster dynamics (SRSCD) is used to simulate damage evolution in α-Fe and find displacement rate/temperature pairs under ‘target’ and ‘proxy’ conditions for which the local distribution of vacancies and vacancy clusters is the same as a function of displacement damage. The SRSCD methodology is chosen for this study due to its computational efficiency and ability to simulate damage accumulation in spatially inhomogeneous materials such as thin films. Results are presented for Frenkel pair irradiation and displacement cascade damage in thin films and bulk α-Fe. Holding all other material and irradiation conditions constant, temperature adjustments are shown to successfully make up for changes in displacement rate such that defect concentrations and cluster sizes remain relatively constant. The methodology presented in this study allows for a first-order prediction of the temperature at which ion irradiation experiments (‘proxy’ conditions) should take place in order to approximate neutron irradiation (‘target’ conditions).
Stochastic self-propagating star formation in three-dimensional disk galaxy simulations
International Nuclear Information System (INIS)
Statler, T.; Comins, N.; Smith, B.F.
1983-01-01
Stochastic self-propagating star formation (SSPSF) is a process of forming new stars through the compression of the interstellar medium by supernova shock waves. Coupling this activity with galactic differential rotation produces spiral structure in two-dimensional disk galaxy simulations. In this paper the first results of a three-dimensional SSPSF simulation of disk galaxies are reported. Our model generates less impressive spirals than do the two-dimensional simulations. Although some spirals do appear in equilibrium, more frequently we observe spirals as non-equilibrium states of the models: as the spiral arms evolve, they widen until the spiral structure is no longer discernible. The two free parameters that we vary in this study are the probability of star formation due to a recent, nearby explosion, and the relaxation time for the interstellar medium to return to a condition of maximum star formation after it has been cleared out by an explosion and subsequent star formation. We find that equilibrium spiral structure is formed over a much smaller range of these parameters in our three-dimensional SSPSF models than in similar two-dimensional models. We discuss possible reasons for these results as well as improvements on the model which are being explored.
A framework for stochastic simulation of distribution practices for hotel reservations
Energy Technology Data Exchange (ETDEWEB)
Halkos, George E.; Tsilika, Kyriaki D. [Laboratory of Operations Research, Department of Economics, University of Thessaly, Korai 43, 38 333, Volos (Greece)
2015-03-10
The focus of this study is primarily on the Greek hotel industry. The objective is to design and develop a framework for stochastic simulation of reservation requests, reservation arrivals, cancellations and hotel occupancy with a planning horizon of a tourist season. In Greek hospitality industry there have been two competing policies for reservation planning process up to 2003: reservations coming directly from customers and a reservations management relying on tour operator(s). Recently the Internet along with other emerging technologies has offered the potential to disrupt enduring distribution arrangements. The focus of the study is on the choice of distribution intermediaries. We present an empirical model for the hotel reservation planning process that makes use of a symbolic simulation, Monte Carlo method, as, requests for reservations, cancellations, and arrival rates are all sources of uncertainty. We consider as a case study the problem of determining the optimal booking strategy for a medium size hotel in Skiathos Island, Greece. Probability distributions and parameters estimation result from the historical data available and by following suggestions made in the relevant literature. The results of this study may assist hotel managers define distribution strategies for hotel rooms and evaluate the performance of the reservations management system.
International Nuclear Information System (INIS)
Babykina, Génia; Brînzei, Nicolae; Aubry, Jean-François; Deleuze, Gilles
2016-01-01
The paper proposes a modeling framework to support Monte Carlo simulations of the behavior of a complex industrial system. The aim is to analyze the system dependability in the presence of random events, described by any type of probability distributions. Continuous dynamic evolutions of physical parameters are taken into account by a system of differential equations. Dynamic reliability is chosen as theoretical framework. Based on finite state automata theory, the formal model is built by parallel composition of elementary sub-models using a bottom-up approach. Considerations of a stochastic nature lead to a model called the Stochastic Hybrid Automaton. The Scilab/Scicos open source environment is used for implementation. The case study is carried out on an example of a steam generator of a nuclear power plant. The behavior of the system is studied by exploring its trajectories. Possible system trajectories are analyzed both empirically, using the results of Monte Carlo simulations, and analytically, using the formal system model. The obtained results are shown to be relevant. The Stochastic Hybrid Automaton appears to be a suitable tool to address the dynamic reliability problem and to model real systems of high complexity; the bottom-up design provides precision and coherency of the system model. - Highlights: • A part of a nuclear power plant is modeled in the context of dynamic reliability. • Stochastic Hybrid Automaton is used as an input model for Monte Carlo simulations. • The model is formally built using a bottom-up approach. • The behavior of the system is analyzed empirically and analytically. • A formally built SHA is shown to be a suitable tool to approach dynamic reliability.
An adaptive algorithm for simulation of stochastic reaction-diffusion processes
International Nuclear Information System (INIS)
Ferm, Lars; Hellander, Andreas; Loetstedt, Per
2010-01-01
We propose an adaptive hybrid method suitable for stochastic simulation of diffusion dominated reaction-diffusion processes. For such systems, simulation of the diffusion requires the predominant part of the computing time. In order to reduce the computational work, the diffusion in parts of the domain is treated macroscopically, in other parts with the tau-leap method and in the remaining parts with Gillespie's stochastic simulation algorithm (SSA) as implemented in the next subvolume method (NSM). The chemical reactions are handled by SSA everywhere in the computational domain. A trajectory of the process is advanced in time by an operator splitting technique and the timesteps are chosen adaptively. The spatial adaptation is based on estimates of the errors in the tau-leap method and the macroscopic diffusion. The accuracy and efficiency of the method are demonstrated in examples from molecular biology where the domain is discretized by unstructured meshes.
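The tau-leap method used in parts of the domain above replaces one-event-at-a-time SSA simulation with Poisson-distributed batches of reaction firings per fixed time step. A minimal, hedged sketch for a single decay reaction follows (illustrative parameters, not the paper's reaction-diffusion setup):

```python
import math
import random

def tau_leap(x0=1000, k=0.1, tau=0.1, t_end=10.0, seed=3):
    """Tau-leaping for the decay reaction X -> 0 with propensity k*X:
    each leap of length tau fires the reaction Poisson(k*X*tau) times."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end - 1e-12 and x > 0:
        mean = k * x * tau
        # Poisson sample via Knuth's multiplicative method (fine for modest means)
        limit = math.exp(-mean)
        n, p = 0, 1.0
        while p > limit:
            p *= rng.random()
            n += 1
        n -= 1
        x = max(x - n, 0)          # clamp so the count never goes negative
        t += tau
    return x

x_final = tau_leap()   # deterministic limit would be 1000 * exp(-1), about 368
```

The tau-leap trades exactness for speed: one Poisson draw replaces roughly k*X*tau individual SSA events per step, which is why hybrid schemes reserve the exact SSA for regions where counts are small.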
D-leaping: Accelerating stochastic simulation algorithms for reactions with delays
International Nuclear Information System (INIS)
Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros
2009-01-01
We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.
MOSES: A Matlab-based open-source stochastic epidemic simulator.
Varol, Huseyin Atakan
2016-08-01
This paper presents an open-source stochastic epidemic simulator. The simulator, based on a discrete-time Markov chain, is implemented in Matlab. It is capable of simulating the SEQIJR (susceptible, exposed, quarantined, infected, isolated and recovered) model, which can be reduced to simpler models by setting some of the parameters (transition probabilities) to zero. Similarly, it can be extended to more complicated models by editing the source code. It is designed to be used for testing different control algorithms to contain epidemics. The simulator is also designed to be compatible with a network-based epidemic simulator and can be used in the network-based scheme for the simulation of a node. Simulations show its capability to reproduce different epidemic model behaviors successfully in a computationally efficient manner.
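A discrete-time Markov chain epidemic simulator of the kind described can be reduced, in the spirit of the abstract's parameter-zeroing, to a chain-binomial SIR sketch. The SEQIJR compartments and the MATLAB implementation are the paper's; the simplified model and all parameter values below are illustrative assumptions:

```python
import math
import random

def dtmc_sir(n=1000, i0=10, beta=0.3, gamma=0.1, steps=300, seed=11):
    """Chain-binomial discrete-time stochastic SIR: per step, each susceptible
    becomes infected with probability 1 - exp(-beta*I/N), and each infected
    recovers with probability gamma."""
    rng = random.Random(seed)
    s, i, r = n - i0, i0, 0
    history = [(s, i, r)]
    for _ in range(steps):
        p_inf = 1.0 - math.exp(-beta * i / n)
        new_inf = sum(rng.random() < p_inf for _ in range(s))   # Binomial(s, p_inf)
        new_rec = sum(rng.random() < gamma for _ in range(i))   # Binomial(i, gamma)
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append((s, i, r))
    return history

history = dtmc_sir()
```

Because every transition is a binomial draw, the population is conserved exactly at each step, a useful invariant when testing control algorithms against the simulator.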
Stochastic annealing simulations of defect interactions among subcascades
Energy Technology Data Exchange (ETDEWEB)
Heinisch, H.L. [Pacific Northwest National Lab., Richland, WA (United States); Singh, B.N.
1997-04-01
The effects of the subcascade structure of high energy cascades on the temperature dependencies of annihilation, clustering and free defect production are investigated. The subcascade structure is simulated by closely spaced groups of lower energy MD cascades. The simulation results illustrate the strong influence of the defect configuration existing in the primary damage state on subsequent intracascade evolution. Other significant factors affecting the evolution of the defect distribution are the large differences in mobility and stability of vacancy and interstitial defects and the rapid one-dimensional diffusion of small, glissile interstitial loops produced directly in cascades. Annealing simulations are also performed on high-energy, subcascade-producing cascades generated with the binary collision approximation and calibrated to MD results.
A fire management simulation model using stochastic arrival times
Eric L. Smith
1987-01-01
Fire management simulation models are used to predict the impact of changes in the fire management program on fire outcomes. As with all models, the goal is to abstract reality without seriously distorting relationships between variables of interest. One important variable of fire organization performance is the length of time it takes to get suppression units to the...
Directory of Open Access Journals (Sweden)
Mark D McDonnell
2013-05-01
The release of neurotransmitter vesicles after arrival of a pre-synaptic action potential at cortical synapses is known to be a stochastic process, as is the availability of vesicles for release. These processes are also known to depend on the recent history of action-potential arrivals, and this can be described in terms of time-varying probabilities of vesicle release. Mathematical models of such synaptic dynamics are frequently based only on the mean number of vesicles released by each pre-synaptic action potential, since if it is assumed there are sufficiently many vesicle sites, then variance is small. However, it has been shown recently that variance across sites can be significant for neuron and network dynamics, and this suggests the potential importance of studying short-term plasticity using simulations that do generate trial-to-trial variability. Therefore, in this paper we study several well-known conceptual models for stochastic availability and release. We state explicitly the random variables that these models describe and propose efficient algorithms for accurately implementing stochastic simulations of these random variables in software or hardware. Our results are complemented by mathematical analysis and statement of pseudo-code algorithms.
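One standard conceptual model of the type the paper discusses — independent release sites with binomial release and exponential recovery — can be sketched as follows. This is a generic depression model, not the paper's specific algorithms, and all parameter values are illustrative:

```python
import math
import random

def simulate_release(spike_times, n_sites=10, p_release=0.5, tau_recovery=0.2, seed=5):
    """Stochastic vesicle release with depression: each site holds at most one
    vesicle, releases it with probability p_release on a spike, and refills
    between spikes with probability 1 - exp(-dt/tau_recovery)."""
    rng = random.Random(seed)
    occupied = [True] * n_sites
    last_t = spike_times[0]
    released = []
    for t in spike_times:
        p_refill = 1.0 - math.exp(-(t - last_t) / tau_recovery)
        occupied = [o or rng.random() < p_refill for o in occupied]   # recovery
        count = 0
        for idx, o in enumerate(occupied):
            if o and rng.random() < p_release:                        # release
                occupied[idx] = False
                count += 1
        released.append(count)
        last_t = t
    return released

counts = simulate_release([0.05 * k for k in range(40)])   # 20 ms regular spike train
```

Simulating per-site Bernoulli draws rather than the mean release count is precisely what produces the trial-to-trial variability the paper argues matters for network dynamics.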
Katsoulakis, Markos A.; Vlachos, Dionisios G.
2003-11-01
We derive a hierarchy of successively coarse-grained stochastic processes and associated coarse-grained Monte Carlo (CGMC) algorithms directly from the microscopic processes as approximations in larger length scales for the case of diffusion of interacting particles on a lattice. This hierarchy of models spans length scales between microscopic and mesoscopic, satisfies a detailed balance, and gives self-consistent fluctuation mechanisms whose noise is asymptotically identical to the microscopic MC. Rigorous, detailed asymptotics justify and clarify these connections. Gradient continuous time microscopic MC and CGMC simulations are compared under far from equilibrium conditions to illustrate the validity of our theory and delineate the errors obtained by rigorous asymptotics. Information theory estimates are employed for the first time to provide rigorous error estimates between the solutions of microscopic MC and CGMC, describing the loss of information during the coarse-graining process. Simulations under periodic boundary conditions are used to verify the information theory error estimates. It is shown that coarse-graining in space leads also to coarse-graining in time by a factor of q², where q is the level of coarse-graining, and overcomes in part the hydrodynamic slowdown. Operation counting and CGMC simulations demonstrate significant CPU savings in continuous time MC simulations that vary from q³ for short potentials to q⁴ for long potentials. Finally, connections of the new coarse-grained stochastic processes to stochastic mesoscopic and Cahn-Hilliard-Cook models are made.
Dimension reduction of Karhunen-Loeve expansion for simulation of stochastic processes
Liu, Zhangjun; Liu, Zixin; Peng, Yongbo
2017-11-01
Conventional Karhunen-Loeve expansions for the simulation of stochastic processes often face the challenge of dealing with hundreds of random variables. To break through this barrier, a random-function-embedded Karhunen-Loeve expansion method is proposed in this paper. The updated scheme has a similar form to the conventional Karhunen-Loeve expansion, both involving a summation over a series of deterministic orthonormal basis functions weighted by uncorrelated random variables. The difference lies in the dimension reduction of the Karhunen-Loeve expansion, achieved by introducing random functions as a conditional constraint upon the uncorrelated random variables. The random function is expressed as an orthogonal function of a single elementary random variable, in polynomial form (non-Gaussian variables) or trigonometric form (non-Gaussian and Gaussian variables). For illustrative purposes, the simulation of seismic ground motion is carried out using the updated scheme. Numerical investigations reveal that the Karhunen-Loeve expansion with random functions achieves desirable simulation results with a moderate sample number, except when the Hermite or Laguerre polynomials are used. The method has sound applicability and efficiency in the simulation of stochastic processes. Besides, the updated scheme has the benefit of integrating with the probability density evolution method, making it ready for the stochastic analysis of nonlinear structures.
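The conventional expansion that the paper improves upon can be sketched by discretizing the covariance kernel on a grid and truncating its eigendecomposition. The following Python/NumPy sketch is illustrative of that baseline only, not of the authors' random-function scheme; the exponential kernel and grid are assumptions for the example:

```python
import numpy as np

def kl_simulate(cov_fn, t_grid, n_terms, n_samples, seed=0):
    """Simulate zero-mean Gaussian sample paths from a truncated Karhunen-Loeve
    expansion X(t) = sum_k sqrt(lambda_k) * xi_k * phi_k(t), using the
    eigendecomposition of the discretized covariance kernel."""
    rng = np.random.default_rng(seed)
    T1, T2 = np.meshgrid(t_grid, t_grid, indexing="ij")
    C = cov_fn(T1, T2)                               # discretized covariance matrix
    lam, phi = np.linalg.eigh(C)                     # eigenvalues in ascending order
    lam = np.clip(lam[::-1][:n_terms], 0.0, None)    # keep the n_terms largest
    phi = phi[:, ::-1][:, :n_terms]
    xi = rng.standard_normal((n_samples, n_terms))   # uncorrelated N(0, 1) variables
    return xi @ (np.sqrt(lam) * phi).T               # (n_samples, len(t_grid))

# exponential covariance exp(-|t - s|) on [0, 1]
t = np.linspace(0.0, 1.0, 101)
samples = kl_simulate(lambda a, b: np.exp(-np.abs(a - b)), t, n_terms=20, n_samples=2000)
```

Even this truncated baseline carries n_terms independent random variables per sample path; the paper's contribution is to collapse those onto functions of a single elementary random variable.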
arXiv Stochastic locality and master-field simulations of very large lattices
Lüscher, Martin
2018-01-01
In lattice QCD and other field theories with a mass gap, the field variables in distant regions of a physically large lattice are only weakly correlated. Accurate stochastic estimates of the expectation values of local observables may therefore be obtained from a single representative field. Such master-field simulations potentially allow very large lattices to be simulated, but require various conceptual and technical issues to be addressed. In this talk, an introduction to the subject is provided and some encouraging results of master-field simulations of the SU(3) gauge theory are reported.
FluTE, a publicly available stochastic influenza epidemic simulation model.
Directory of Open Access Journals (Sweden)
Dennis L Chao
2010-01-01
Mathematical and computer models of epidemics have contributed to our understanding of the spread of infectious disease and the measures needed to contain or mitigate them. To help prepare for future influenza seasonal epidemics or pandemics, we developed a new stochastic model of the spread of influenza across a large population. Individuals in this model have realistic social contact networks, and transmission and infections are based on the current state of knowledge of the natural history of influenza. The model has been calibrated so that outcomes are consistent with the 1957/1958 Asian A(H2N2) and 2009 pandemic A(H1N1) influenza viruses. We present examples of how this model can be used to study the dynamics of influenza epidemics in the United States and simulate how to mitigate or delay them using pharmaceutical interventions and social distancing measures. Computer simulation models play an essential role in informing public policy and evaluating pandemic preparedness plans. We have made the source code of this model publicly available to encourage its use and further development.
A conditional stochastic weather generator for seasonal to multi-decadal simulations
Verdin, Andrew; Rajagopalan, Balaji; Kleiber, William; Podestá, Guillermo; Bert, Federico
2018-01-01
We present the application of a parametric stochastic weather generator within a nonstationary context, enabling simulations of weather sequences conditioned on interannual and multi-decadal trends. The generalized linear model framework of the weather generator allows any number of covariates to be included, such as large-scale climate indices, local climate information, seasonal precipitation and temperature, among others. Here we focus on the Salado A basin of the Argentine Pampas as a case study, but the methodology is portable to any region. We include domain-averaged (e.g., areal) seasonal total precipitation and mean maximum and minimum temperatures as covariates for conditional simulation. Areal covariates are motivated by a principal component analysis that indicates the seasonal spatial average is the dominant mode of variability across the domain. We find this modification to be effective in capturing the nonstationarity prevalent in interseasonal precipitation and temperature data. We further illustrate the ability of this weather generator to act as a spatiotemporal downscaler of seasonal forecasts and multidecadal projections, both of which are generally of coarse resolution.
FluTE, a publicly available stochastic influenza epidemic simulation model.
Chao, Dennis L; Halloran, M Elizabeth; Obenchain, Valerie J; Longini, Ira M
2010-01-29
Mathematical and computer models of epidemics have contributed to our understanding of the spread of infectious disease and the measures needed to contain or mitigate them. To help prepare for future influenza seasonal epidemics or pandemics, we developed a new stochastic model of the spread of influenza across a large population. Individuals in this model have realistic social contact networks, and transmission and infections are based on the current state of knowledge of the natural history of influenza. The model has been calibrated so that outcomes are consistent with the 1957/1958 Asian A(H2N2) and 2009 pandemic A(H1N1) influenza viruses. We present examples of how this model can be used to study the dynamics of influenza epidemics in the United States and simulate how to mitigate or delay them using pharmaceutical interventions and social distancing measures. Computer simulation models play an essential role in informing public policy and evaluating pandemic preparedness plans. We have made the source code of this model publicly available to encourage its use and further development.
StochKit2: software for discrete stochastic simulation of biochemical systems with events.
Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R
2011-09-01
StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/. Contact: petzold@engineering.ucsb.edu.
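For readers new to the underlying algorithm, here is a minimal, unoptimized sketch of Gillespie's direct-method SSA (StochKit2 itself is highly optimized C++, and this is not its code). The two-species isomerization model is a hypothetical example.

```python
import random

def gillespie_ssa(x, rates, updates, t_end, seed=0):
    """Direct-method SSA sketch.
    x: dict of species counts; rates: propensity functions a_j(x);
    updates: functions applying the stoichiometric change of reaction j."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        props = [a(x) for a in rates]
        a0 = sum(props)
        if a0 == 0.0:
            break                       # no reaction can fire
        t += rng.expovariate(a0)        # exponential waiting time
        u, cum = rng.random() * a0, 0.0
        for a, fire in zip(props, updates):
            cum += a
            if u < cum:                 # pick reaction j with prob a_j/a0
                fire(x)
                break
    return x

# hypothetical irreversible isomerization A -> B with propensity 0.5*A
final = gillespie_ssa(
    {"A": 100, "B": 0},
    rates=[lambda s: 0.5 * s["A"]],
    updates=[lambda s: s.update(A=s["A"] - 1, B=s["B"] + 1)],
    t_end=50.0)
```

StochKit2's contribution is not this loop but the machinery around it: optimized dependency graphs, method selection, and parallel ensemble runs.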
DEFF Research Database (Denmark)
Foddai, Alessandro; Enøe, Claes; Krogh, Kaspar
2014-01-01
A stochastic simulation model was developed to estimate the time from introduction of Bovine Viral Diarrhea Virus (BVDV) in a herd to detection of antibodies in bulk tank milk (BTM) samples using three ELISAs. We assumed that antibodies could be detected after a fixed threshold prevalence of seroconverted animals had been reached. The most efficient of the three ELISAs could detect antibodies in the BTM of a large herd 280 days (95% prediction interval: 218; 568) after a transiently infected (TI) milking cow had been introduced into the herd. The estimated time to detection after introduction of one persistently infected (PI) calf was 111 days (44; 605)…
Directory of Open Access Journals (Sweden)
Ryota Mori
2015-01-01
Airport congestion, in particular congestion of departing aircraft, has already been discussed in other research. Most solutions, though, fail to account for uncertainties. Since it is difficult to remove the uncertainties of real-world operations, a strategy should be developed assuming that such uncertainties exist. This research therefore develops a fast-time stochastic simulation model used to validate various methods for decreasing the airport congestion level under the existing uncertainties. The surface movement data are analyzed first, and the uncertainty level is obtained. Next, based on the results of the data analysis, the stochastic simulation model is developed. The model is validated statistically, and the characteristics of airport operation under the existing uncertainties are investigated.
Energy Technology Data Exchange (ETDEWEB)
Hepburn, I.; De Schutter, E., E-mail: erik@oist.jp [Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904 0495 (Japan); Theoretical Neurobiology & Neuroengineering, University of Antwerp, Antwerp 2610 (Belgium); Chen, W. [Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904 0495 (Japan)
2016-08-07
Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space either in a discrete time or discrete space framework, which has led to the development of parallel methods that can take advantage of the power of modern supercomputers in recent years. We systematically test suggested components of stochastic reaction-diffusion operator splitting in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy with minimal performance cost. We test a range of models in small-scale MPI simulations from simple diffusion models to realistic biological models and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification.
Diffusion approximation-based simulation of stochastic ion channels: which method to use?
Directory of Open Access Journals (Sweden)
Danilo ePezo
2014-11-01
To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chain (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of high channel numbers. Many recent works aim to speed up simulation using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties, such as the bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Dangerfield et al., 2012; Linaro et al., 2011; Huang et al., 2013a; Orio and Soudry, 2012; Schmandt and Galán, 2012; Goldwyn et al., 2011; Güler, 2013), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: the original Hodgkin and Huxley model, a model with faster sodium channels, and a multi-compartmental model inspired by granular cells. We conclude that for low channel numbers (usually below 1000 per simulated compartment) one should use MC, which is both the most accurate and the fastest method. For higher channel numbers, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modelling may be the best method for detailed multicompartment neuron models, in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels each.
Diffusion approximation-based simulation of stochastic ion channels: which method to use?
Pezo, Danilo; Soudry, Daniel; Orio, Patricio
2014-01-01
To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired in granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
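The diffusion approximation compared above can be sketched for a single two-state gate: the fraction of open channels follows a stochastic differential equation whose noise scales as 1/sqrt(N), and the state variable must be kept in [0,1]. The sketch below uses simple truncation at the boundaries, which is the crudest of the bounding schemes the cited papers compare; the rate constants are illustrative.

```python
import math
import random

def langevin_gate(n_channels, alpha, beta, dt, steps, seed=0):
    """Euler-Maruyama sketch of the diffusion approximation for one gate:
    dx = (alpha*(1-x) - beta*x) dt + sqrt((alpha*(1-x) + beta*x)/N) dW,
    with x truncated to [0,1] after each step (a crude bounding choice)."""
    rng = random.Random(seed)
    x = alpha / (alpha + beta)            # start at the deterministic steady state
    for _ in range(steps):
        drift = alpha * (1.0 - x) - beta * x
        diff = math.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / n_channels)
        x += drift * dt + diff * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = min(1.0, max(0.0, x))         # keep the open fraction in [0,1]
    return x

x_open = langevin_gate(n_channels=100, alpha=0.5, beta=0.5, dt=0.01, steps=1000)
```

With few channels the noise term is large and truncation events are frequent, which is exactly the regime where the papers above find the MC method preferable.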
International Nuclear Information System (INIS)
Luo, B.; Li, J.B.; Huang, G.H.; Li, H.L.
2006-01-01
This study presents a simulation-based interval two-stage stochastic programming (SITSP) model for agricultural nonpoint source (NPS) pollution control through land retirement under uncertain conditions. The modeling framework was established by the development of an interval two-stage stochastic program, with its random parameters being provided by the statistical analysis of the simulation outcomes of a distributed water quality approach. The developed model can deal with the tradeoff between agricultural revenue and 'off-site' water quality concerns under random effluent discharge for a land retirement scheme by minimizing the expected value of the long-term total economic and environmental cost. In addition, the uncertainties, presented as interval numbers in the agriculture-water system, can be effectively quantified with the interval programming. By subdividing the whole agricultural watershed into different zones, the most pollution-sensitive cropland can be identified and an optimal land retirement scheme can be obtained through the modeling approach. The developed method was applied to the Swift Current Creek watershed in Canada for soil erosion control through land retirement. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate the sediment information for this case study. The obtained results indicate that the total economic and environmental cost of the entire agriculture-water system can be limited within an interval value for the optimal land retirement schemes. Meanwhile, best and worst land retirement schemes were obtained for the study watershed under the various uncertainties.
A stochastic six-degree-of-freedom flight simulator for passively controlled high power rockets
Box, Simon; Bishop, Christopher M.; Hunt, Hugh
2011-01-01
This paper presents a method for simulating the flight of a passively controlled rocket in six degrees of freedom, and its descent under parachute in three degrees of freedom. Also presented is a method for modelling the uncertainty in both the rocket dynamics and the atmospheric conditions using stochastic parameters and the Monte-Carlo method. Included within this, we present a method for quantifying the uncertainty in the atmospheric conditions using historical atmospheric data. The core si...
International Nuclear Information System (INIS)
Eriksson, L.O.; Oppelstrup, J.
1994-12-01
A simulator for 2D stochastic continuum simulation and inverse modelling of groundwater flow has been developed. The simulator is well suited for method evaluation and what-if simulation, and is written in MATLAB. Conductivity fields are generated by unconditional simulation, conditional simulation on measured conductivities, and calibration on both steady-state head measurements and transient head histories. The fields can also include fracture zones and zones with different mean conductivities. Statistics of conductivity fields and particle travel times are recorded in Monte-Carlo simulations. The calibration uses the pilot point technique, an inverse technique proposed by RamaRao and LaVenue. Several Kriging procedures are implemented, including Kriging neighborhoods. In cases where the expectation of the log-conductivity in the truth field is known, the nonbias conditions can be omitted, which makes the variance in the conditionally simulated conductivity fields smaller. A simulation experiment, resembling the initial stages of a site investigation and devised in collaboration with SKB, is performed and interpreted. The results obtained in the present study show less uncertainty than in our preceding study. This is mainly due to the modification of the Kriging procedure, but also to the use of more data. Still, the large uncertainty in cases of sparse data is apparent. The variogram represents essential characteristics of the conductivity field; thus, even unconditional simulations take account of important information. Significant improvements in variance by further conditioning will be obtained only as the number of data becomes much larger. 16 refs, 26 figs
Energy Technology Data Exchange (ETDEWEB)
Eriksson, L O; Oppelstrup, J [Starprog AB (Sweden)
1994-12-01
A simulator for 2D stochastic continuum simulation and inverse modelling of groundwater flow has been developed. The simulator is well suited for method evaluation and what-if simulation, and is written in MATLAB. Conductivity fields are generated by unconditional simulation, conditional simulation on measured conductivities, and calibration on both steady-state head measurements and transient head histories. The fields can also include fracture zones and zones with different mean conductivities. Statistics of conductivity fields and particle travel times are recorded in Monte-Carlo simulations. The calibration uses the pilot point technique, an inverse technique proposed by RamaRao and LaVenue. Several Kriging procedures are implemented, including Kriging neighborhoods. In cases where the expectation of the log-conductivity in the truth field is known, the nonbias conditions can be omitted, which makes the variance in the conditionally simulated conductivity fields smaller. A simulation experiment, resembling the initial stages of a site investigation and devised in collaboration with SKB, is performed and interpreted. The results obtained in the present study show less uncertainty than in our preceding study. This is mainly due to the modification of the Kriging procedure, but also to the use of more data. Still, the large uncertainty in cases of sparse data is apparent. The variogram represents essential characteristics of the conductivity field; thus, even unconditional simulations take account of important information. Significant improvements in variance by further conditioning will be obtained only as the number of data becomes much larger. 16 refs, 26 figs.
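A toy 1-D version of unconditional simulation conveys the core idea: draw correlated log-conductivities from a covariance model (here exponential, the continuous analogue of an exponential variogram) via Cholesky factorization. The simulator above works in 2D, supports conditioning and calibration, and is written in MATLAB; everything below, including the sill and correlation length, is illustrative.

```python
import math
import random

def simulate_log_conductivity(n, dx, sill, corr_len, seed=0):
    """Unconditional 1-D Gaussian field sketch: build the exponential
    covariance matrix C, factor C = L L^T, and return L @ z for z ~ N(0, I)."""
    rng = random.Random(seed)
    C = [[sill * math.exp(-abs(i - j) * dx / corr_len) for j in range(n)]
         for i in range(n)]
    # Cholesky-Banachiewicz factorization (C is symmetric positive definite)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(C[i][i] - s)
            else:
                L[i][j] = (C[i][j] - s) / L[j][j]
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

field = simulate_log_conductivity(n=50, dx=1.0, sill=1.0, corr_len=10.0)
```

Conditional simulation then adjusts such draws so they honor measured conductivities, which is where the Kriging procedures mentioned above come in.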
Dodov, B.
2017-12-01
Stochastic simulation of realistic and statistically robust patterns of Tropical Cyclone (TC) induced precipitation is a challenging task. It is even more challenging in a catastrophe modeling context, where tens of thousands of typhoon seasons need to be simulated in order to provide a complete view of flood risk. Ultimately, one could run a coupled global climate model and regional Numerical Weather Prediction (NWP) model, but this approach is not feasible in the catastrophe modeling context and, most importantly, may not provide TC track patterns consistent with observations. Rather, we propose to leverage NWP output for the observed TC precipitation patterns (in terms of downscaled reanalysis 1979-2015) collected on a Lagrangian frame along the historical TC tracks and reduced to the leading spatial principal components of the data. The reduced data from all TCs are then grouped according to timing, storm evolution stage (developing, mature, dissipating, ETC transitioning) and central pressure, and used to build a dictionary of stationary (within a group) and non-stationary (for transitions between groups) covariance models. Provided that the stochastic storm tracks with all the parameters describing the TC evolution are already simulated, a sequence of conditional samples from the covariance models chosen according to the TC characteristics at a given moment in time are concatenated, producing a continuous non-stationary precipitation pattern in a Lagrangian framework. The simulated precipitation for each event is finally distributed along the stochastic TC track and blended with a non-TC background precipitation using a data assimilation technique. The proposed framework provides means of efficient simulation (10000 seasons simulated in a couple of days) and robust typhoon precipitation patterns consistent with observed regional climate and visually indistinguishable from high resolution NWP output. The framework is used to simulate a catalog of 10000 typhoon…
Research on neutron noise analysis stochastic simulation method for α calculation
International Nuclear Information System (INIS)
Zhong Bin; Shen Huayun; She Ruogu; Zhu Shengdong; Xiao Gang
2014-01-01
The prompt decay constant α has significant applications in the physical design and safety analysis of nuclear facilities. To overcome the difficulty of calculating the α value with the Monte-Carlo method, and to improve precision, a new method based on neutron noise analysis technology is presented. This method employs stochastic simulation together with the theory of neutron noise analysis. Firstly, the evolution of stochastic neutrons was simulated by a discrete-event Monte-Carlo method based on the theory of generalized semi-Markov processes, and the neutron noise in detectors was extracted from the neutron signal. Secondly, neutron noise analysis methods such as the Rossi-α method, the Feynman-α method, the zero-probability method, and the cross-correlation method were used to calculate the α value. All of the parameters used in the neutron noise analysis methods were calculated with an auto-adaptive algorithm. The α values from these methods agree with each other, with a largest relative deviation of 7.9%, which demonstrates the feasibility of α calculation based on neutron noise analysis stochastic simulation. (authors)
Stochastic four-way coupling of gas-solid flows for Large Eddy Simulations
Curran, Thomas; Denner, Fabian; van Wachem, Berend
2017-11-01
The interaction of solid particles with turbulence has long been a topic of interest for predicting the behavior of industrially relevant flows. For the turbulent fluid phase, Large Eddy Simulation (LES) methods are widely used for their low computational cost, leaving only the sub-grid scales (SGS) of turbulence to be modelled. Although LES has seen great success in predicting the behavior of turbulent single-phase flows, the development of LES for turbulent gas-solid flows is still in its infancy. This contribution aims at constructing a model to describe the four-way coupling of particles in an LES framework, by considering the role particles play in the transport of turbulent kinetic energy across the scales. Firstly, a stochastic model reconstructing the sub-grid velocities for the particle tracking is presented. Secondly, whereas most models treat particle-particle interaction deterministically, we introduce a stochastic model for estimating the collision probability. All results are validated against fully resolved DNS-DPS simulations. The final goal of this contribution is to propose a global stochastic method adapted to two-phase LES simulation in which the number of particles considered can be significantly increased. Financial support from PetroBras is gratefully acknowledged.
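A common building block for reconstructing sub-grid velocities along a particle path is a Langevin (Ornstein-Uhlenbeck) process: a fluctuation that relaxes on a turbulence timescale and is forced by Gaussian noise. The sketch below uses the exact one-step OU update; the timescale and intensity are illustrative and are not the contribution's actual closure.

```python
import math
import random

def ou_subgrid_velocity(u0, tau, sigma, dt, steps, seed=0):
    """Ornstein-Uhlenbeck sketch of a sub-grid velocity fluctuation:
    relaxation time tau, stationary standard deviation sigma.
    Uses the exact discrete update, so any dt is stable."""
    rng = random.Random(seed)
    u, path = u0, [u0]
    for _ in range(steps):
        e = math.exp(-dt / tau)
        u = u * e + sigma * math.sqrt(1.0 - e * e) * rng.gauss(0.0, 1.0)
        path.append(u)
    return path

# a hypothetical fluctuation seen by one tracked particle
path = ou_subgrid_velocity(u0=0.0, tau=0.1, sigma=0.3, dt=0.01, steps=500)
```

In an LES context such a process would be added to the resolved velocity interpolated at the particle position, restoring the SGS contribution to particle dispersion.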
Mariano-Goulart, D; Fourcade, M; Bernon, J L; Rossi, M; Zanca, M
2003-01-01
Thanks to an experimental study based on simulated and physical phantoms, the propagation of the stochastic noise in slices reconstructed using the conjugate gradient algorithm has been analysed versus iterations. After a first increase corresponding to the reconstruction of the signal, the noise stabilises before increasing linearly with iterations. The level of the plateau as well as the slope of the subsequent linear increase depends on the noise in the projection data.
Goderniaux, Pascal; Brouyère, Serge; Blenkinsop, Stephen; Burton, Aidan; Fowler, Hayley; Dassargues, Alain
2010-05-01
The evaluation of climate change impact on groundwater reserves represents a difficult task because both hydrological and climatic processes are complex and difficult to model. In this study, we present an innovative methodology that combines the use of integrated surface-subsurface hydrological models with advanced stochastic transient climate change scenarios. This methodology is applied to the Geer basin (480 km²) in Belgium, which is intensively exploited to supply the city of Liège (Belgium) with drinking water. The physically-based, spatially-distributed, surface-subsurface flow model has been developed with the finite element model HydroGeoSphere. The simultaneous solution of surface and subsurface flow equations in HydroGeoSphere, as well as the internal calculation of the actual evapotranspiration as a function of the soil moisture at each node of the evaporative zone, enables a better representation of interconnected processes in all domains of the catchment (fully saturated zone, partially saturated zone, surface). Additionally, the use of both surface and subsurface observed data to calibrate the model better constrains the calibration of the different water balance terms. Crucially, in the context of climate change impacts on groundwater resources, the evaluation of groundwater recharge is improved. This surface-subsurface flow model is combined with advanced climate change scenarios for the Geer basin. Climate change simulations were obtained from six regional climate model (RCM) scenarios assuming the SRES A2 greenhouse gases emission (medium-high) scenario. These RCM scenarios were statistically downscaled using a transient stochastic weather generator technique, combining 'RainSim' and the 'CRU weather generator' for temperature and evapotranspiration time series. This downscaling technique exhibits three advantages compared with the 'delta change' method usually used in groundwater impact studies. (1) Corrections to climate model output are…
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
International Nuclear Information System (INIS)
Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas
2016-01-01
The uncertainties of renewable energy have brought great challenges to power system commitment, dispatch and reserve requirements. This paper presents a comparative study on the integration of renewable generation uncertainties into SCUC (stochastic security-constrained unit commitment) considering reserve and risk. Renewable forecast uncertainties are captured by a list of PIs (prediction intervals). A new scenario generation method is proposed to generate scenarios from these PIs. Different system uncertainties are considered as scenarios in the stochastic SCUC problem formulation. Two comparative simulations with a single source of uncertainty (E1: wind only) and multiple sources of uncertainty (E2: load, wind, solar and generation outages) are investigated. Five deterministic and four stochastic case studies are performed. Different generation costs, reserve strategies and associated risks are compared under various scenarios. The results indicate that the overall costs of E2 are lower than those of E1 due to the penetration of solar power, and that the associated risk in the deterministic cases of E2 is higher than in E1. This implies a superimposed effect of uncertainties during uncertainty integration. The results also demonstrate that power systems run a higher level of risk during peak load hours, and that stochastic models are more robust than deterministic ones. - Highlights: • An extensive comparative study for renewable integration is presented. • A novel scenario generation method is proposed. • Wind and solar uncertainties are represented by a list of prediction intervals. • Unit commitment and dispatch costs are discussed considering reserve and risk.
A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks
Moraes, Alvaro; Tempone, Raul; Vilanova, Pedro
2016-01-01
In this work, we present a novel multilevel Monte Carlo method for the kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named the level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable to coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks
Moraes, Alvaro
2016-07-07
In this work, we present a novel multilevel Monte Carlo method for the kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named the level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable to coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
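The fast-channel ingredient in such schemes is the tau-leap step: over a leap of length tau, channel j fires a Poisson(a_j(x)·tau) number of times and all state changes are applied at once. The sketch below shows only that ingredient; the adaptive activity classification and the multilevel coupling are omitted, and the reaction model is hypothetical.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method; adequate for the small means used in this sketch."""
    if lam <= 0.0:
        return 0
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def tau_leap_step(x, propensities, stoich, tau, rng):
    """One tau-leap increment: sample all firing counts from the current
    state, then apply every stoichiometric change simultaneously."""
    firings = [poisson(a(x) * tau, rng) for a in propensities]
    for k, nu in zip(firings, stoich):
        for species, change in nu.items():
            x[species] += k * change
    return x

# hypothetical fast channel A -> B with propensity 0.1*A, leap tau = 0.1
state = {"A": 1000, "B": 0}
rng = random.Random(0)
for _ in range(10):
    tau_leap_step(state, [lambda s: 0.1 * s["A"]], [{"A": -1, "B": 1}], 0.1, rng)
```

An exact method would instead draw one reaction at a time; the paper's point is that leaping over the fast channels while treating the slow ones exactly preserves accuracy at much lower cost.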
Simulating local measurements on a quantum many-body system with stochastic matrix product states
DEFF Research Database (Denmark)
Gammelmark, Søren; Mølmer, Klaus
2010-01-01
We demonstrate how to simulate both discrete and continuous stochastic evolutions of a quantum many-body system subject to measurements using matrix product states. A particular, but generally applicable, measurement model is analyzed, and a simple representation in terms of matrix product operators is found. The technique is exemplified by numerical simulations of the antiferromagnetic Heisenberg spin-chain model subject to various instances of the measurement model. In particular, we focus on local measurements with small support and nonlocal measurements, which induce long-range correlations.
International Nuclear Information System (INIS)
Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei; Biteus, Jonas
2014-01-01
One of the most important factors in the operations of many corporations today is maximizing profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are at the highest level divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities through a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.
A stochastic model for the simulation of wind turbine blades in static stall
DEFF Research Database (Denmark)
Bertagnolio, Franck; Rasmussen, Flemming; Sørensen, Niels N.
2010-01-01
The aim of this work is to improve aeroelastic simulation codes by accounting for the unsteady aerodynamic forces that a blade experiences in static stall. A model based on a spectral representation of the aerodynamic lift force is defined. The drag and pitching moment are derived using a conditional simulation technique for stochastic processes. The input data for the model can be collected either from measurements or from numerical results from a Computational Fluid Dynamics code for airfoil sections at constant angles of attack. An analysis of such data is provided, which helps to determine…
Efficient rejection-based simulation of biochemical reactions with stochastic noise and delays
Energy Technology Data Exchange (ETDEWEB)
Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research - University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research - University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento (Italy)
2014-10-07
We propose a new exact stochastic rejection-based simulation algorithm for biochemical reactions and extend it to systems with delays. Our algorithm accelerates the simulation by pre-computing reaction propensity bounds to select the next reaction to perform. Exploiting such bounds, we are able to avoid recomputing propensities every time a (delayed) reaction is initiated or finished, as is typically necessary in standard approaches. Propensity updates in our approach are still performed, but only infrequently and limited to a small number of reactions, saving computation time without sacrificing exactness. We evaluate the performance improvement of our algorithm by experimenting with concrete biological models.
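The core idea can be sketched as a thinning procedure: candidate reactions are drawn in proportion to fixed propensity upper bounds and accepted with probability a_j(x)/ā_j, so exact propensities are evaluated only at acceptance tests. This is a heavily simplified sketch (a single selection step, no delays, and no machinery for maintaining the bounds as the state drifts), with a hypothetical one-reaction example.

```python
import random

def rejection_ssa_step(x, prop_fns, prop_upper, t, rng):
    """Select the next reaction by rejection (thinning).
    prop_upper[j] must bound prop_fns[j](x) from above while x stays
    inside its current fluctuation interval (not tracked here).
    Each trial advances time by Exp(sum of upper bounds)."""
    b0 = sum(prop_upper)
    while True:
        t += rng.expovariate(b0)
        u, cum, j = rng.random() * b0, 0.0, 0
        for j, b in enumerate(prop_upper):     # candidate ~ upper bounds
            cum += b
            if u < cum:
                break
        if rng.random() * prop_upper[j] <= prop_fns[j](x):
            return j, t                        # accepted: fire reaction j

# hypothetical reaction with propensity 0.2*A = 10, bounded above by 12
x = {"A": 50}
j, t = rejection_ssa_step(x, [lambda s: 0.2 * s["A"]], [12.0],
                          0.0, random.Random(3))
```

The saving comes from the acceptance test: exact propensities need not be recomputed after every reaction, only when a bound is violated, which matches the infrequent-update behavior described above.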
Energy Technology Data Exchange (ETDEWEB)
Tahvili, Sahar [Mälardalen University (Sweden); Österberg, Jonas; Silvestrov, Sergei [Division of Applied Mathematics, Mälardalen University (Sweden); Biteus, Jonas [Scania CV (Sweden)
2014-12-10
One of the most important objectives in the operations of many corporations today is to maximize profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities by means of a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be they PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.
Stochastic simulations of normal aging and Werner's syndrome.
Qi, Qi; Wattis, Jonathan A D; Byrne, Helen M
2014-01-01
…aging. Using this model, we study various hypotheses for the way in which shortening occurs by comparing their impact on aging at the chromosome and cell levels. We consider different types of length-dependent loss and replication probabilities …
Monte Carlo simulation of induction time and metastable zone width; stochastic or deterministic?
Kubota, Noriaki
2018-03-01
The induction time and metastable zone width (MSZW) measured for small samples (say, 1 mL or less) both scatter widely; these two quantities are therefore observed as stochastic. For large samples (say, 1000 mL or more), by contrast, the induction time and MSZW are observed as deterministic quantities. The reason for this experimental difference is investigated with Monte Carlo simulation. In the simulation, the time (under isothermal conditions) and the supercooling (under polythermal conditions) at which a first single crystal is detected are defined as the induction time t and the MSZW ΔT for small samples, respectively. The number of crystals at the moment of t and ΔT is unity. A first crystal emerges at random owing to the intrinsic nature of nucleation, so t and ΔT become stochastic. For large samples, the time and supercooling at which the crystal number density N/V reaches a detector sensitivity (N/V)det are defined as t and ΔT for isothermal and polythermal conditions, respectively. By these points a large number of crystals have accumulated, so t and ΔT become deterministic according to the law of large numbers. Whether t and ΔT are stochastic or deterministic in actual experiments should therefore not be attributed to a change in the nucleation mechanism at the molecular level; it may simply be a consequence of differences in the experimental definitions of t and ΔT.
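The small-sample/large-sample contrast described above can be reproduced with a few lines of Monte Carlo. The nucleation rate, detector sensitivity and units below are arbitrary placeholders, not values from the study:

```python
import random
import statistics

def induction_time(volume, nucleation_rate, detect_density, seed):
    """One simulated induction time (arbitrary units; parameter values are
    illustrative). Crystals nucleate as a Poisson process with total rate
    J*V; detection occurs when the number density N/V reaches
    detect_density, which for a tiny volume means the very first crystal."""
    rng = random.Random(seed)
    n_detect = max(1, round(detect_density * volume))
    t = 0.0
    for _ in range(n_detect):              # sum of exponential waiting times
        t += rng.expovariate(nucleation_rate * volume)
    return t

def relative_scatter(volume, n_samples=400):
    """Coefficient of variation of the induction time for a given volume."""
    ts = [induction_time(volume, 1.0, 1.0, s) for s in range(n_samples)]
    return statistics.stdev(ts) / statistics.mean(ts)
```

For a unit volume the induction time is a single exponential waiting time (coefficient of variation near 1, i.e. strongly stochastic), while for a large volume it is a sum of many such waits and its scatter collapses as 1/√N, exactly the law-of-large-numbers argument of the abstract.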
Lin, Yen Ting; Chylek, Lily A; Lemons, Nathan W; Hlavacek, William S
2018-06-21
The chemical kinetics of many complex systems can be concisely represented by reaction rules, which can be used to generate reaction events via a kinetic Monte Carlo method that has been termed network-free simulation. Here, we demonstrate accelerated network-free simulation through a novel approach to equation-free computation. In this process, variables are introduced that approximately capture system state. Derivatives of these variables are estimated using short bursts of exact stochastic simulation and finite differencing. The variables are then projected forward in time via a numerical integration scheme, after which a new exact stochastic simulation is initialized and the whole process repeats. The projection step increases efficiency by bypassing the firing of numerous individual reaction events. As we show, the projected variables may be defined as populations of building blocks of chemical species. The maximal number of connected molecules included in these building blocks determines the degree of approximation. Equation-free acceleration of network-free simulation is found to be both accurate and efficient.
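A minimal sketch of the equation-free idea described above — short exact-SSA bursts, finite-difference slope estimation, then projection — for a simple birth-death model. The rule-based, network-free machinery of the paper is not reproduced, and all rates and step sizes are invented:

```python
import random

def ssa_burst(x, k_b, k_d, t_burst, rng):
    """Exact SSA for * -> X (rate k_b) and X -> * (rate k_d * x), run for a
    short burst of length t_burst; returns the final population."""
    t = 0.0
    while True:
        a0 = k_b + k_d * x
        t += rng.expovariate(a0)
        if t > t_burst:
            return x
        if rng.random() * a0 < k_b:
            x += 1
        else:
            x -= 1

def coarse_projective(x0=10, k_b=50.0, k_d=0.5, t_burst=0.05,
                      t_proj=0.5, n_rounds=20, n_reps=30, seed=0):
    """Toy coarse projective integration: replicas of short exact bursts give
    a finite-difference estimate of dx/dt for the coarse variable, which is
    then projected forward in time, bypassing many individual reaction
    firings."""
    rng = random.Random(seed)
    x = float(x0)
    for _ in range(n_rounds):
        ends = [ssa_burst(round(x), k_b, k_d, t_burst, rng)
                for _ in range(n_reps)]
        slope = (sum(ends) / n_reps - x) / t_burst
        x = max(0.0, x + slope * t_proj)   # projection step skips many events
    return x
```

Each round simulates only 5% of the time interval exactly and extrapolates the rest, which is where the acceleration comes from.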
Kucza, Witold
2013-07-25
Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis (FIA) responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied for the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
Stochastic models for the in silico simulation of synaptic processes
Bracciali, Andrea; Brunelli, Marcello; Cataldo, Enrico; Degano, Pierpaolo
2008-01-01
Background Research in life sciences is benefiting from a large availability of formal description techniques and analysis methodologies. These allow both the phenomena investigated to be precisely modeled and virtual experiments to be performed in silico. Such experiments may result in easier, faster, and satisfying approximations of their in vitro/vivo counterparts. A promising approach is represented by the study of biological phenomena as a collection of interactive entities through proce...
Zachar, István; Fedor, Anna; Szathmáry, Eörs
2011-01-01
The simulation of complex biochemical systems, consisting of intertwined subsystems, is a challenging task in computational biology. The complex biochemical organization of the cell is effectively modeled by the minimal cell model called chemoton, proposed by Gánti. Since the chemoton is a system consisting of a large but fixed number of interacting molecular species, it can effectively be implemented in a process algebra-based language such as the BlenX programming language. The stochastic model behaves comparably to previous continuous deterministic models of the chemoton. Additionally to the well-known chemoton, we also implemented an extended version with two competing template cycles. The new insight from our study is that the coupling of reactions in the chemoton ensures that these templates coexist providing an alternative solution to Eigen's paradox. Our technical innovation involves the introduction of a two-state switch to control cell growth and division, thus providing an example for hybrid methods in BlenX. Further developments to the BlenX language are suggested in the Appendix. PMID:21818258
Moraes, Alvaro
2015-01-01
Epidemics have shaped, sometimes more than wars and natural disasters, demographic aspects of human populations around the world, their health habits and their economies. Ebola and the Middle East Respiratory Syndrome (MERS) are clear and current examples of potential hazards at planetary scale. During the spread of an epidemic disease, there are phenomena, like the sudden extinction of the epidemic, that cannot be captured by deterministic models. As a consequence, stochastic models have been proposed during the last decades. A typical forward problem in the stochastic setting could be the approximation of the expected number of infected individuals found one month from now. On the other hand, a typical inverse problem could be, given a discretely observed set of epidemiological data, to infer the transmission rate of the epidemic or its basic reproduction number. Markovian epidemic models are stochastic models belonging to a wide class of pure jump processes known as Stochastic Reaction Networks (SRNs), which are intended to describe the time evolution of interacting particle systems where one particle interacts with the others through a finite set of reaction channels. SRNs have been developed mainly to model biochemical reactions, but they also have applications in neural networks, virus kinetics, and the dynamics of social networks, among others. This PhD thesis is focused on novel fast simulation algorithms and statistical inference methods for SRNs. Our novel multilevel Monte Carlo (MLMC) hybrid simulation algorithms provide accurate estimates of expected values of a given observable of SRNs at a prescribed final time. They are designed to control the global approximation error up to a user-selected accuracy and up to a certain confidence level, and with near-optimal computational work. We also present novel dual-weighted residual expansions for fast estimation of weak and strong errors arising from the MLMC methodology. Regarding the statistical inference…
The two-regime method for optimizing stochastic reaction-diffusion simulations
Flegg, M. B.
2011-10-19
Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches to less detailed compartment-based simulations. Compartment-based approaches yield quick and accurate mesoscopic results, but lack the level of detail that is characteristic of the computationally intensive molecular-based models. Often microscopic detail is only required in a small region (e.g. close to the cell membrane). Currently, the best way to achieve microscopic detail is to use a resource-intensive simulation over the whole domain. We develop the two-regime method (TRM) in which a molecular-based algorithm is used where desired and a compartment-based approach is used elsewhere. We present easy-to-implement coupling conditions which ensure that the TRM results have the same accuracy as a detailed molecular-based model in the whole simulation domain. Therefore, the TRM combines strengths of previously developed stochastic reaction-diffusion software to efficiently explore the behaviour of biological models. Illustrative examples and the mathematical justification of the TRM are also presented.
Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm
Mathai, J.; Mujumdar, P.
2017-12-01
A key focus of this study is to develop a method that is physically consistent with the hydrologic processes and can capture short-term characteristics of the daily hydrograph as well as the correlation of streamflow in the temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales, such as daily scales. Simultaneous generation of synthetic flows at different sites in the same basin is also required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of the streamflow time series. The method has two steps. In step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising-limb increments randomly sampled from a Gamma distribution and the falling limb modelled as an exponential recession; in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest-neighbour (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN-based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. Daily flow generated using the Markov chain approach, however, is capable of generating a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of the daily hydrograph represent different physical processes, and hence need to be modelled individually. Our method thus combines the strengths of the two approaches. We show the utility of the method and its improvement over the traditional KNN by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
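Step 1 of the generator can be sketched as follows; all parameter values (transition probabilities, Gamma parameters, recession constant) are invented for illustration, and the KNN resampling of step 2 is omitted:

```python
import random

def generate_daily_flow(n_days, p_rise=0.3, p_stay_rise=0.6,
                        gamma_shape=2.0, gamma_scale=5.0,
                        recession_k=0.85, q0=10.0, seed=0):
    """Two-state Markov-chain daily streamflow sketch: the chain decides
    whether the hydrograph is rising or falling; rising-limb increments are
    Gamma distributed and the falling limb decays exponentially. Parameter
    values are hypothetical, not calibrated to any basin."""
    rng = random.Random(seed)
    flows, q, rising = [], q0, False
    for _ in range(n_days):
        rising = rng.random() < (p_stay_rise if rising else p_rise)
        if rising:
            q += rng.gammavariate(gamma_shape, gamma_scale)  # rising limb
        else:
            q *= recession_k                                  # exponential recession
        flows.append(q)
    return flows
```

Because increments are drawn from a continuous distribution, the generated series contains values never seen in any historical record, which is precisely what the subsequent KNN resampling step cannot do on its own.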
International Nuclear Information System (INIS)
Woo, Mingko; Lonergan, S.
1990-01-01
Winter roads constitute an important part of the transportation network in the Mackenzie Delta, the Yellowknife area, and between the Mackenzie Highway and the Canol Road. Climatic changes in the Mackenzie Valley will alter the probabilities of ice cover thickness and duration, affecting the periods when ice road river crossings are viable. Stochastic models were developed to generate air temperature and precipitation data to analyze climate impacts on when ice road crossing of the Mackenzie River at Norman Wells is feasible. The data were employed to simulate river ice growth and decay. Several general circulation models were employed to determine the impacts of climatic change on the ice regime. For precipitation simulation, the occurrence of wet or dry days was determined from Markov chain transition probabilities. In general, the Goddard Institute for Space Studies (GISS) model predicted the largest increase in monthly precipitation and the Oregon State University (OSU) model predicted the least change. The various scenarios indicated that the duration for vehicular traffic over ice will be significantly reduced compared to present-day Norman Wells ice crossing operation. For 20 tonne vehicles, the current duration for safe crossing averages 169±14.6 days per year, while for the OSU scenario it is reduced to 148±14.7 days, for the GISS scenario to 127±24.9 days, and for the GFDL (Geophysical Fluid Dynamics Laboratory) scenario to 122±21.7 days. 5 refs., 1 fig
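The flavour of such a simulation — stochastic daily temperatures feeding an ice-growth law that determines the safe-crossing window — can be sketched with a Stefan-type thickness model. The sinusoidal climate, the growth coefficient and the 70 cm safety threshold are all assumptions, and spring ice decay is ignored in this sketch:

```python
import math
import random

def ice_season(seed=0, alpha_cm=2.0, safe_cm=70.0):
    """Count days per year an ice crossing exceeds an assumed safe thickness.
    Daily air temperature is a seasonal sinusoid plus Gaussian noise; ice
    thickness grows with accumulated freezing degree-days (FDD) via a
    Stefan-type law h = alpha * sqrt(FDD). All parameters are illustrative,
    not the study's calibrated values; melt is not modelled."""
    rng = random.Random(seed)
    fdd, safe_days = 0.0, 0
    for day in range(365):
        # coldest around the start/end of the calendar year
        mean_t = -15.0 * math.cos(2.0 * math.pi * day / 365.0) - 5.0
        t_air = rng.gauss(mean_t, 4.0)
        if t_air < 0.0:
            fdd += -t_air                  # accumulate freezing degree-days
        if alpha_cm * math.sqrt(fdd) >= safe_cm:
            safe_days += 1
    return safe_days
```

Shifting the sinusoid's mean upward (a warming scenario) delays the crossing of the safety threshold and shortens the safe-crossing window, which is the effect the study quantifies with GCM-derived scenarios.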
Multiscale study on stochastic reconstructions of shale samples
Lili, J.; Lin, M.; Jiang, W. B.
2016-12-01
Shales are known to have multiscale pore systems, composed of macroscale fractures, micropores, and nanoscale pores within the gas- or oil-producing organic material. Shales are also fissile and laminated, and the heterogeneity in the horizontal direction is quite different from that in the vertical. Stochastic reconstructions are extremely useful in situations where three-dimensional information is costly and time-consuming to obtain. The purpose of our paper is thus to stochastically reconstruct equiprobable 3D models containing information from several scales. First, macroscale and microscale images of shale structure in the Lower Silurian Longmaxi are obtained by X-ray microtomography, and nanoscale images are obtained by scanning electron microscopy. Each image is representative for its scale and phases. In particular, the macroscale is four times coarser than the microscale, which in turn is four times lower in resolution than the nanoscale image. Secondly, the cross-correlation-based simulation method (CCSIM) and the three-step sampling method are combined to generate stochastic reconstructions for each scale. It is important to point out that the boundary points of pore and matrix are selected based on a multiple-point connectivity function in the sampling process, so the characteristics of the reconstructed image can be controlled indirectly. Thirdly, all images are brought to the same resolution through downscaling and upscaling by interpolation, and the multiscale categorical spatial data are then merged into a single 3D image with a predefined resolution (that of the microscale image). 30 realizations using the given images and the proposed method are generated. The result reveals that the proposed method is capable of preserving the multiscale pore structure, both vertically and horizontally, which is necessary for accurate permeability prediction. The variogram curves and pore-size distributions for both the original 3D sample and the generated 3D realizations are compared…
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing a multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of the high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the Fast Fourier Transform (FFT) technique is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. The numerical simulation reveals the usefulness of the dimension-reduction representation methods.
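The classical single-variate spectral representation underlying these methods can be sketched directly (the dimension-reduction constructions themselves are not reproduced; the spectrum and discretisation below are arbitrary):

```python
import math
import random

def srm_sample(n_t, dt, psd, n_freq=256, w_max=math.pi, seed=0):
    """Classical spectral representation of a zero-mean stationary Gaussian
    process: x(t) = sum_k sqrt(2*S(w_k)*dw) * cos(w_k*t + phi_k) with i.i.d.
    uniform random phases phi_k. In practice the sum is evaluated with an
    FFT, as the paper does; a direct sum is used here for clarity."""
    rng = random.Random(seed)
    dw = w_max / n_freq
    comps = []
    for k in range(n_freq):
        w = (k + 0.5) * dw                         # midpoint frequency grid
        amp = math.sqrt(2.0 * psd(w) * dw)         # component amplitude
        comps.append((w, amp, rng.uniform(0.0, 2.0 * math.pi)))
    return [sum(a * math.cos(w * i * dt + ph) for w, a, ph in comps)
            for i in range(n_t)]
```

Note that this standard construction uses one random phase per frequency line; the point of the DR-SRM/DR-POD schemes is to correlate those variables through random functions so that only 2 or 3 elementary random variables remain.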
Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan J.; Walton, Owen J.; Arnold, Steven M.
2016-01-01
Stochastic-based, discrete-event progressive damage simulations of ceramic-matrix composite and polymer matrix composite material structures have been enabled through the development of a unique multiscale modeling tool. This effort involves coupling three independently developed software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/ Life), and (3) the Abaqus finite element analysis (FEA) program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC. Abaqus is used at the global scale to model the overall composite structure. An Abaqus user-defined material (UMAT) interface, referred to here as "FEAMAC/CARES," was developed that enables MAC/GMC and CARES/Life to operate seamlessly with the Abaqus FEA code. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events, which incrementally progress and lead to ultimate structural failure. This report describes the FEAMAC/CARES methodology and discusses examples that illustrate the performance of the tool. A comprehensive example problem, simulating the progressive damage of laminated ceramic matrix composites under various off-axis loading conditions and including a double notched tensile specimen geometry, is described in a separate report.
Modeling and simulating the adaptive electrical properties of stochastic polymeric 3D networks
International Nuclear Information System (INIS)
Sigala, R; Smerieri, A; Camorani, P; Schüz, A; Erokhin, V
2013-01-01
Memristors are passive two-terminal circuit elements that combine resistance and memory. Although in theory memristors are a very promising approach to fabricate hardware with adaptive properties, there are only very few implementations able to show their basic properties. We recently developed stochastic polymeric matrices with a functionality that evidences the formation of self-assembled three-dimensional (3D) networks of memristors. We demonstrated that those networks show the typical hysteretic behavior observed in the ‘one input-one output’ memristive configuration. Interestingly, using different protocols to electrically stimulate the networks, we also observed that their adaptive properties are similar to those present in the nervous system. Here, we model and simulate the electrical properties of these self-assembled polymeric networks of memristors, the topology of which is defined stochastically. First, we show that the model recreates the hysteretic behavior observed in the real experiments. Second, we demonstrate that the networks modeled indeed have a 3D instead of a planar functionality. Finally, we show that the adaptive properties of the networks depend on their connectivity pattern. Our model was able to replicate fundamental qualitative behavior of the real organic 3D memristor networks; yet, through the simulations, we also explored other interesting properties, such as the relation between connectivity patterns and adaptive properties. Our model and simulations represent an interesting tool to understand the very complex behavior of self-assembled memristor networks, which can finally help to predict and formulate hypotheses for future experiments. (paper)
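The pinched hysteresis that such networks exhibit in the 'one input-one output' configuration can be illustrated with a single ideal memristor, far simpler than the paper's stochastic 3D networks; the linear-drift model and all parameter values below are assumptions for illustration:

```python
import math

def memristor_iv(v_amp=1.0, freq=1.0, r_on=100.0, r_off=16000.0,
                 mobility=0.2, n=4000):
    """Single ideal memristor under a sinusoidal drive (textbook linear-drift
    model, hypothetical parameters): the internal state w in [0, 1]
    integrates the applied voltage, and the resistance interpolates between
    r_on and r_off, tracing the pinched hysteresis loop characteristic of
    memristance. Returns (voltage, current) pairs over two drive periods."""
    dt = 2.0 / (freq * n)                  # two periods of the drive
    w, pts = 0.5, []
    for i in range(n):
        v = v_amp * math.sin(2.0 * math.pi * freq * i * dt)
        r = r_on * w + r_off * (1.0 - w)   # state-dependent resistance
        pts.append((v, v / r))
        w = min(1.0, max(0.0, w + mobility * v * dt))  # state drift
    return pts
```

The same voltage produces a larger current on the falling sweep than on the rising sweep, because the state w has drifted in between; that history dependence is the memory in "memristor".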
Application of users’ light-switch stochastic models to dynamic energy simulation
DEFF Research Database (Denmark)
Camisassi, V.; Fabi, V.; Andersen, Rune Korsholm
2015-01-01
…deterministic inputs, due to the uncertain nature of human behaviour. In this paper, new stochastic models of users' interaction with artificial lighting systems are developed and implemented in the energy simulation software IDA ICE. They were developed from field measurements in an office building in Prague … The design of an innovative building should include an estimation of the building's overall energy flows. These are principally related to six main influencing factors (IEA-ECB Annex 53): climate, building envelope and equipment, operation and maintenance, occupant behaviour and indoor environment conditions…
Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composite
Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu
2015-01-01
Reported here is a coupling of two NASA-developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user-defined material). Here we describe the FEAMAC/CARES code and show an example problem (taken from the open literature) of a laminated CMC under off-axis loading. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading, using elastic stiffness reduction of the failed elements.
FEAMAC/CARES Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composites
Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Bhatt, Ramakrishna
2016-01-01
Reported here is a coupling of two NASA-developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user-defined material). Here we describe the FEAMAC/CARES code and show an example problem (taken from the open literature) of a laminated CMC under off-axis loading. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading, using elastic stiffness reduction of the failed elements.
Elliott, Thomas J.; Gu, Mile
2018-03-01
Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous-time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.
International Nuclear Information System (INIS)
Kaplani, E.; Kaplanis, S.
2012-01-01
Highlights:
► Solar radiation data for European cities follow the Extreme Value or Weibull distribution.
► Simulation model for the sizing of SAPV systems based on energy balance and stochastic analysis.
► Simulation of PV generator-loads-battery storage system performance for all months.
► Minimum peak power and battery capacity required for reliable SAPV sizing for various European cities.
► Peak power and battery capacity reduced by more than 30% for operation at a 95% success rate.
Abstract: The large fluctuations observed in daily solar radiation profiles strongly affect the reliability of PV system sizing. Increasing the reliability of the PV system requires a higher installed peak power (Pm) and a larger battery storage capacity (CL). This leads to increased costs and makes PV technology less competitive. This paper presents a new stochastic simulation model for stand-alone PV (SAPV) systems, developed to determine the minimum installed Pm and CL for the PV system to be energy independent. The stochastic simulation model makes use of knowledge acquired from an in-depth statistical analysis of the solar radiation data for the site, and simulates the energy delivered, the excess energy burnt, the load profiles and the state of charge of the battery system for the month in which the sizing is applied, as well as the PV system performance for the entire year. The simulation model provides the user with values for the autonomy factor d, simulating PV performance in order to determine the minimum Pm and CL depending on the requirements of the application, i.e. operation with critical or non-critical loads. The model uses NASA's Surface Meteorology and Solar Energy database for the years 1990–2004 for various European cities with different climates. The results obtained with this new methodology indicate a substantial reduction in installed peak power and battery capacity, both for critical and non-critical operation, when compared to…
A simple stochastic model for dipole moment fluctuations in numerical dynamo simulations
Directory of Open Access Journals (Sweden)
Domenico G. Meduri
2016-04-01
Full Text Available: Earth's axial dipole field changes in a complex fashion on many different time scales, ranging from less than a year to tens of millions of years. Documenting, analysing, and replicating this intricate signal is a challenge for data acquisition, theoretical interpretation, and dynamo modelling alike. Here we explore whether axial dipole variations can be described by the superposition of a slow deterministic drift and fast stochastic fluctuations, i.e. by a Langevin-type system. The drift term describes the time-averaged behaviour of the axial dipole variations, whereas the stochastic part mimics complex flow interactions over convective time scales. The statistical behaviour of the system is described by a Fokker-Planck equation which allows useful predictions, including the average rates of dipole reversals and excursions. We analyse several numerical dynamo simulations, most of which have been integrated particularly long in time, and also the palaeomagnetic model PADM2M which covers the past 2 Myr. The results show that the Langevin description provides a viable statistical model of the axial dipole variations on time scales longer than about 1 kyr. For example, the axial dipole probability distribution and the average reversal rate are successfully predicted. The exception is PADM2M, where the stochastic model's reversal rate seems too low. The dependence of the drift on the axial dipole moment reveals the nonlinear interactions that establish the dynamo balance. A separate analysis of inductive and diffusive magnetic effects in three dynamo simulations suggests that the classical quadratic quenching of induction predicted by mean-field theory is at work.
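A Langevin system of this type is straightforward to integrate with the Euler-Maruyama scheme. The bistable drift and noise amplitude below are invented for illustration and are not fitted to any dynamo run or to PADM2M:

```python
import math
import random

def simulate_dipole(n_steps=200_000, dt=1e-3, noise=0.5, seed=0):
    """Toy Langevin model of an axial dipole moment x: a slow deterministic
    drift in a bistable potential (minima at x = +1 and x = -1) plus fast
    stochastic forcing, integrated by Euler-Maruyama. Counts completed
    polarity reversals; all coefficients are hypothetical."""
    rng = random.Random(seed)
    x, sign, reversals = 1.0, 1, 0
    for _ in range(n_steps):
        drift = x - x ** 3                 # slow deterministic drift term
        x += drift * dt + math.sqrt(2.0 * noise * dt) * rng.gauss(0.0, 1.0)
        if x * sign < -0.5:                # crossed well past the barrier
            sign = -sign
            reversals += 1
    return reversals
```

The average reversal rate of such a system is a classic Kramers escape-rate problem, which is exactly the kind of prediction the Fokker-Planck description of the paper delivers.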
Mélykúti, Bence
2010-01-01
The Chemical Langevin Equation (CLE), which is a stochastic differential equation driven by a multidimensional Wiener process, acts as a bridge between the discrete stochastic simulation algorithm and the deterministic reaction rate equation when simulating (bio)chemical kinetics. The CLE model is valid in the regime where molecular populations are abundant enough to assume their concentrations change continuously, but stochastic fluctuations still play a major role. The contribution of this work is that we observe and explore that the CLE is not a single equation, but a parametric family of equations, all of which give the same finite-dimensional distribution of the variables. On the theoretical side, we prove that as many Wiener processes are sufficient to formulate the CLE as there are independent variables in the equation, which is just the rank of the stoichiometric matrix. On the practical side, we show that in the case where there are m1 pairs of reversible reactions and m2 irreversible reactions, there is another, simple formulation of the CLE with only m1 + m2 Wiener processes, whereas the standard approach uses 2m1 + m2. We demonstrate that there are considerable computational savings when using this latter formulation. Such transformations of the CLE do not cause a loss of accuracy and are therefore distinct from model reduction techniques. We illustrate our findings by considering alternative formulations of the CLE for a human ether-à-go-go-related gene (hERG) ion channel model and the Goldbeter-Koshland switch. © 2010 American Institute of Physics.
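The equivalence of the two formulations can be checked numerically for the simplest case, a single reversible isomerisation A &lt;-&gt; B, where the standard CLE uses two Wiener processes and the reduced form uses one (step size and rate constants below are arbitrary):

```python
import math
import random

def cle_reversible(x0, k_f, k_b, total, dt, n_steps, one_wiener, seed):
    """Euler-Maruyama for the CLE of A <-> B with x = #A and total = #A + #B.
    Standard form: two independent Wiener increments, one per channel.
    Reduced form (in the spirit of the paper's m1 + m2 construction): a
    single increment with combined diffusion sqrt(a_f + a_b). Both choices
    give the same distribution of x."""
    rng = random.Random(seed)
    x = float(x0)
    for _ in range(n_steps):
        a_f, a_b = k_f * x, k_b * (total - x)      # channel propensities
        drift = (a_b - a_f) * dt
        if one_wiener:
            noise = math.sqrt((a_f + a_b) * dt) * rng.gauss(0.0, 1.0)
        else:
            noise = (-math.sqrt(a_f * dt) * rng.gauss(0.0, 1.0)
                     + math.sqrt(a_b * dt) * rng.gauss(0.0, 1.0))
        x = min(max(x + drift + noise, 0.0), float(total))  # physical range
    return x
```

The two noise terms have identical variance (a_f + a_b)·dt per step, which is why halving the number of Wiener processes loses no accuracy while halving the Gaussian draws.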
International Nuclear Information System (INIS)
Camargo, Dayana Q. de; Bodmann, Bardo E.J.; Vilhena, Marco T. de; Froehlich, Herberth B.
2011-01-01
In this work we developed a stochastic model to simulate neutron transport in a heterogeneous environment, considering continuous neutron spectra and nuclear properties with their continuous dependence on energy. This model was implemented using the Monte Carlo method for the propagation of neutrons in different environments. Due to restrictions on the number of neutrons that can be simulated in reasonable computational time, we introduced a variable control volume together with (pseudo-)periodic boundary conditions to overcome this problem. This study allowed a detailed analysis of the influence of energy on the neutron population and its impact on the life cycle of neutrons. From the results, even for a simple geometrical arrangement, we conclude that the energy dependence needs to be considered, and we hence defined a spectral effective multiplication factor per Monte Carlo step. (author)
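The generation-cycle view underlying such simulations can be sketched with a toy one-group Monte Carlo (i.e. deliberately ignoring the energy dependence the abstract argues matters); the probabilities and neutron yield below are illustrative, not evaluated nuclear data.

```python
import numpy as np

def k_effective(n_start, p_fission, nu, generations, seed=0):
    """Toy one-group Monte Carlo generation cycle: each neutron causes
    fission with probability p_fission (yielding Poisson(nu) offspring);
    otherwise it is captured or leaks. k is the mean offspring ratio
    between successive generations."""
    rng = np.random.default_rng(seed)
    n, ks = n_start, []
    for _ in range(generations):
        fissions = rng.binomial(n, p_fission)
        offspring = int(rng.poisson(nu * fissions))
        ks.append(offspring / n)
        n = max(offspring, 1)            # keep the population alive
    return float(np.mean(ks))

# expect k near p_fission * nu = 1.0 for this (critical) toy setting
k = k_effective(10000, p_fission=0.4, nu=2.5, generations=50)
```

A spectral (energy-dependent) treatment, as in the paper, would replace the single p_fission by cross-sections sampled over the continuous neutron spectrum.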
Sedwards, Sean; Mazza, Tommaso
2007-10-15
Compartments and membranes are the basis of cell topology and more than 30% of the human genome codes for membrane proteins. While it is possible to represent compartments and membrane proteins in a nominal way with many mathematical formalisms used in systems biology, few, if any, explicitly model the topology of the membranes themselves. Discrete stochastic simulation potentially offers the most accurate representation of cell dynamics. Since the details of every molecular interaction in a pathway are often not known, the relationship between chemical species is not necessarily best described at the lowest level, i.e. by mass action. Simulation is a form of computer-aided analysis, relying on human interpretation to derive meaning. To improve efficiency and gain meaning in an automatic way, it is necessary to have a formalism based on a model which has decidable properties. We present Cyto-Sim, a stochastic simulator of membrane-enclosed hierarchies of biochemical processes, where the membranes comprise an inner, outer and integral layer. The underlying model is based on formal language theory and has been shown to have decidable properties (Cavaliere and Sedwards, 2006), allowing formal analysis in addition to simulation. The simulator provides variable levels of abstraction via arbitrary chemical kinetics which link to ordinary differential equations. In addition to its compact native syntax, Cyto-Sim currently supports models described as Petri nets, can import all versions of SBML and can export SBML and MATLAB m-files. Cyto-Sim is available free, either as an applet or a stand-alone Java program via the web page (http://www.cosbi.eu/Rpty_Soft_CytoSim.php). Other versions can be made available upon request.
STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies
Directory of Open Access Journals (Sweden)
Hepburn Iain
2012-05-01
Abstract Background: Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results: We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion: STEPS simulates
Stochastic simulation and decadal prediction of hydroclimate in the Western Himalayas
Robertson, A. W.; Chekroun, M. D.; Cook, E.; D'Arrigo, R.; Ghil, M.; Greene, A. M.; Holsclaw, T.; Kondrashov, D. A.; Lall, U.; Lu, M.; Smyth, P.
2012-12-01
Improved estimates of climate over the next 10 to 50 years are needed for long-term planning in water resource and flood management. However, the task of effectively incorporating the results of climate change research into decision-making faces a ``double conflict of scales'': the temporal scales of climate model projections are too long, while their usable spatial scales (global to planetary) are much larger than those needed for actual decision making (at the regional to local level). This work is designed to help tackle this ``double conflict'' in the context of water management over monsoonal Asia, based on dendroclimatic multi-century reconstructions of drought indices and river flows. We identify low-frequency modes of variability with time scales from interannual to interdecadal based on these series, and then generate future scenarios based on (a) empirical model decadal predictions, and (b) stochastic simulations generated with autoregressive models that reproduce the power spectrum of the data. Finally, we consider how such scenarios could be used to develop reservoir optimization models. Results will be presented based on multi-century Upper Indus river discharge reconstructions that exhibit a strong periodicity near 27 years, which is shown to yield some retrospective forecasting skill over the 1700-2000 period at a 15-yr lead time. Stochastic simulations of annual PDSI drought index values over the Upper Indus basin are constructed using Empirical Model Reduction; their power spectra are shown to be quite realistic, with spectral peaks near 5--8 years.
Directory of Open Access Journals (Sweden)
Daniel J Klein
Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision-making process in the context of a goal-oriented objective (e.g., eradicate polio by [Formula: see text]), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which "success" is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g., 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria.
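The Bernoulli-trial view can be illustrated with a toy stand-in for the expensive simulation; the logistic response and the plain grid search below are assumptions for illustration only (the actual algorithm uses density-corrected kernel regression and adaptive experiment design).

```python
import numpy as np

def success_probability(model, theta, n_reps, rng):
    """Estimate P(goal achieved) at parameter theta from Bernoulli replicates."""
    wins = sum(model(theta, rng) for _ in range(n_reps))
    return wins / n_reps

def toy_model(theta, rng):
    # hypothetical goal-oriented simulation: success rate rises with coverage
    p_true = 1.0 / (1.0 + np.exp(-20.0 * (theta - 0.5)))
    return rng.random() < p_true

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 11)
probs = [success_probability(toy_model, t, 400, rng) for t in grid]
viable = [t for t, p in zip(grid, probs) if p >= 0.95]  # past the separatrix
```

Each grid point costs 400 model runs here, which is exactly why the paper replaces exhaustive evaluation with strategically placed samples.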
Simulation of Higher-Order Electrical Circuits with Stochastic Parameters via SDEs
Directory of Open Access Journals (Sweden)
BRANCIK, L.
2013-02-01
The paper deals with a technique for the simulation of higher-order electrical circuits with parameters varying randomly. The principle consists in the utilization of the theory of stochastic differential equations (SDEs), namely the vector form of ordinary SDEs. Random changes of both the excitation voltage and some parameters of passive circuit elements are considered, and circuit responses are analyzed. The voltage and/or current responses are computed and represented in the form of sample means accompanied by their confidence intervals to provide reliable estimates. The method is applied to analyze responses of circuit models of optional orders, especially those consisting of a cascade connection of RLGC networks. To develop the model equations the state-variable method is used; afterwards a corresponding vector SDE is formulated and a stochastic Euler numerical method applied. To verify the results, the deterministic responses are also computed with the help of the PSpice simulator or the numerical inverse Laplace transforms (NILT) procedure in MATLAB, while removing random terms from the circuit model.
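As a minimal sketch of this approach (a single RC cell rather than a higher-order RLGC cascade; component values and noise level are illustrative), the state equation dV = ((u − V)/RC) dt + σ dW can be integrated with the stochastic Euler scheme and summarized, as in the paper, by a sample mean with a confidence interval:

```python
import numpy as np

def rc_sde_terminal(u, R, C, sigma, t_end, dt, n_paths, seed=0):
    """Stochastic Euler scheme for dV = ((u - V)/(R*C)) dt + sigma dW,
    vectorized over Monte Carlo sample paths; returns terminal voltages."""
    rng = np.random.default_rng(seed)
    v = np.zeros(n_paths)
    for _ in range(int(t_end / dt)):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        v += (u - v) / (R * C) * dt + sigma * dw
    return v

v = rc_sde_terminal(u=1.0, R=1.0, C=1.0, sigma=0.1, t_end=5.0, dt=0.01, n_paths=1000)
mean = v.mean()
half = 1.96 * v.std(ddof=1) / np.sqrt(len(v))   # 95% confidence half-width
```

Setting sigma to zero recovers the deterministic response used in the paper's PSpice/NILT cross-checks.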
International Nuclear Information System (INIS)
Yao, Jian
2014-01-01
Highlights: • Driving factor for adjustment of manually controlled solar shades was determined. • A stochastic model for manual solar shades was constructed using the Markov method. • Co-simulation with EnergyPlus was carried out in BCVTB. • External shading, even manually controlled, should be used in preference to LOW-E windows. • Previous studies on manual solar shades may overestimate energy savings. - Abstract: Solar shading devices play a significant role in reducing building energy consumption and maintaining a comfortable indoor condition. In this paper, a typical office building with internal roller shades in the hot summer and cold winter zone was selected to determine the driving factor of the control behavior of manual solar shades. Solar radiation was determined to be the major factor driving solar shade adjustment, based on field measurements and logit analysis, and a stochastic model for manually adjusted solar shades was then constructed using the Markov method. This model was used in BCVTB for further co-simulation with EnergyPlus to determine the impact of the control behavior of solar shades on energy performance. The results show that manually adjusted solar shades, whether located inside or outside, achieve a relatively high energy-saving performance compared with clear-pane windows, while only external shades perform better than regularly used LOW-E windows. Simulation also indicates that using an idealized assumption of solar shade adjustment, as most studies do in building simulation, may lead to an overestimation of energy savings by about 16–30%. There is a need to improve occupants' actions on shades to respond more effectively to outdoor conditions in order to lower energy consumption, and this improvement can easily be achieved by using simple strategies as a guide to control manual solar shades.
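A radiation-driven Markov model of shade state can be sketched as a two-state chain whose hourly transition probabilities follow a logit in solar radiation. The coefficients below are illustrative placeholders, not the fitted values from the paper's field measurements.

```python
import numpy as np

def simulate_shade(radiation, beta0=-4.0, beta1=0.01, seed=0):
    """Markov simulation of a manual roller shade (0 = open, 1 = closed).
    Hourly closing probability follows a logit in solar radiation (W/m2);
    beta0 and beta1 are assumed, illustrative coefficients."""
    rng = np.random.default_rng(seed)
    state, states = 0, []
    for g in radiation:
        p_close = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * g)))  # logit model
        p_open = 1.0 - p_close
        if state == 0 and rng.random() < p_close:
            state = 1
        elif state == 1 and rng.random() < p_open:
            state = 0
        states.append(state)
    return np.array(states)

sunny = simulate_shade(np.full(1000, 800.0))   # strong radiation: mostly closed
night = simulate_shade(np.zeros(1000))         # no radiation: mostly open
```

In a co-simulation setup, each sampled state would be fed back to the building model (e.g. via BCVTB) as the shade position for that timestep.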
Crisan, Dan
2011-01-01
"Stochastic Analysis" aims to provide mathematical tools to describe and model high dimensional random systems. Such tools arise in the study of Stochastic Differential Equations and Stochastic Partial Differential Equations, Infinite Dimensional Stochastic Geometry, Random Media and Interacting Particle Systems, Super-processes, Stochastic Filtering, Mathematical Finance, etc. Stochastic Analysis has emerged as a core area of late 20th century Mathematics and is currently undergoing a rapid scientific development. The special volume "Stochastic Analysis 2010" provides a sa
Meta-stochastic simulation of biochemical models for systems and synthetic biology.
Sanassy, Daven; Widera, Paweł; Krasnogor, Natalio
2015-01-16
Stochastic simulation algorithms (SSAs) are used to trace realistic trajectories of biochemical systems at low species concentrations. As the complexity of modeled biosystems increases, it is important to select the best performing SSA. Numerous improvements to SSAs have been introduced, but each tends to apply only to a certain class of models. This makes it difficult for a systems or synthetic biologist to decide which algorithm to employ when confronted with a new model that requires simulation. In this paper, we demonstrate that it is possible to determine which algorithm is best suited to simulate a particular model and that this can be predicted a priori to algorithm execution. We present a Web-based tool, ssapredict, that allows scientists to upload a biochemical model and obtain a prediction of the best performing SSA. Furthermore, ssapredict gives the user the option to download our high-performance simulator ngss preconfigured to perform the simulation of the queried biochemical model with the predicted fastest algorithm as the simulation engine. The ssapredict Web application is available at http://ssapredict.ico2s.org. It is free software and its source code is distributed under the terms of the GNU Affero General Public License.
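The algorithms being benchmarked are variants of Gillespie's direct method, which can be sketched compactly; the toy irreversible isomerization and its rate constant below are illustrative.

```python
import numpy as np

def gillespie(x0, stoich, propensity, t_end, seed=0):
    """Gillespie direct method: exact stochastic simulation of a reaction
    network given stoichiometry rows and a propensity function."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0.0:
            break                          # no reaction can fire
        t += rng.exponential(1.0 / a0)     # waiting time to next reaction
        j = rng.choice(len(a), p=a / a0)   # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# toy system: irreversible isomerization A -> B with c = 0.5
stoich = np.array([[-1.0, 1.0]])
times, states = gillespie([100, 0], stoich, lambda x: np.array([0.5 * x[0]]), 20.0)
```

Optimized SSAs (next-reaction, composition-rejection, etc.) change how the waiting time and reaction index are sampled, which is why their relative performance depends on model structure.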
Calibration of semi-stochastic procedure for simulating high-frequency ground motions
Seyhan, Emel; Stewart, Jonathan P.; Graves, Robert
2013-01-01
Broadband ground motion simulation procedures typically utilize physics-based modeling at low frequencies, coupled with semi-stochastic procedures at high frequencies. The high-frequency procedure considered here combines deterministic Fourier amplitude spectra (dependent on source, path, and site models) with random phase. Previous work showed that high-frequency intensity measures from this simulation methodology attenuate faster with distance and have lower intra-event dispersion than in empirical equations. We address these issues by increasing crustal damping (Q) to reduce distance attenuation bias and by introducing random site-to-site variations to Fourier amplitudes using a lognormal standard deviation ranging from 0.45 for Mw 100 km).
Drift Scale Modeling: Study of Unsaturated Flow into a Drift Using a Stochastic Continuum Model
International Nuclear Information System (INIS)
Birkholzer, J.T.; Tsang, C.F.; Tsang, Y.W.; Wang, J.S
1996-01-01
Unsaturated flow in heterogeneous fractured porous rock was simulated using a stochastic continuum model (SCM). In this model, both the more conductive fractures and the less permeable matrix are generated within the framework of a single-continuum stochastic approach, based on non-parametric indicator statistics. High-permeability fracture zones are distinguished from low-permeability matrix zones in that they are assigned a long-range correlation structure in prescribed directions. The SCM was applied to study small-scale flow in the vicinity of an access tunnel, which is currently being drilled in the unsaturated fractured tuff formations at Yucca Mountain, Nevada. Extensive underground testing is underway in this tunnel to investigate the suitability of Yucca Mountain as an underground nuclear waste repository. Different flow scenarios were studied in the present paper, considering the flow conditions before and after the tunnel emplacement, and assuming steady-state net infiltration as well as episodic pulse infiltration. Although the capability of the stochastic continuum model has not yet been fully explored, it has been demonstrated that the SCM is a good alternative model capable of describing heterogeneous flow processes in unsaturated fractured tuff at Yucca Mountain.
Verification of HYDRASTAR - A code for stochastic continuum simulation of groundwater flow
International Nuclear Information System (INIS)
Norman, S.
1991-07-01
HYDRASTAR is a code developed at Starprog AB for use in the SKB 91 performance assessment project with the following principal functions: - Reads the actual conductivity measurements from a file created from the GEOTAB data base. - Regularizes the measurements to a user-chosen calculation scale. - Generates three-dimensional unconditional realizations of the conductivity field, using a supplied model of the conductivity field as a stochastic function. - Conditions the simulated conductivity field on the actual regularized measurements. - Reads the boundary conditions from a regional deterministic NAMMU computation. - Calculates the hydraulic head field, Darcy velocity field, stream lines and water travel times by solving the stationary hydrology equation and the streamline equation obtained with the velocities calculated from Darcy's law. - Generates visualizations of the realizations if desired. - Calculates statistics such as semivariograms and expectation values of the output fields by repeating the above procedure in iterations of the Monte Carlo type. When using computer codes for safety assessment purposes, validation and verification of the codes are important. This report therefore describes work performed with the goal of verifying parts of HYDRASTAR. The verification described in this report uses comparisons with two other solutions of related examples: A. Comparison with a so-called perturbation solution of the stochastic stationary hydrology equation. This is an analytical approximation of the stochastic stationary hydrology equation, valid in the case of small variability of the unconditional random conductivity field. B. Comparison with the HYDROCOIN (1988) case 2. This is a classical example of a hydrology problem with a deterministic conductivity field. The principal feature of the problem is the presence of narrow fracture zones with high conductivity. The compared outputs are the hydraulic head field and a number of stream lines originating from a
Energy Technology Data Exchange (ETDEWEB)
El Ouassini, Ayoub [Ecole Polytechnique de Montreal, C.P. 6079, Station centre-ville, Montreal, Que., H3C-3A7 (Canada)], E-mail: ayoub.el-ouassini@polymtl.ca; Saucier, Antoine [Ecole Polytechnique de Montreal, departement de mathematiques et de genie industriel, C.P. 6079, Station centre-ville, Montreal, Que., H3C-3A7 (Canada)], E-mail: antoine.saucier@polymtl.ca; Marcotte, Denis [Ecole Polytechnique de Montreal, departement de genie civil, geologique et minier, C.P. 6079, Station centre-ville, Montreal, Que., H3C-3A7 (Canada)], E-mail: denis.marcotte@polymtl.ca; Favis, Basil D. [Ecole Polytechnique de Montreal, departement de genie chimique, C.P. 6079, Station centre-ville, Montreal, Que., H3C-3A7 (Canada)], E-mail: basil.favis@polymtl.ca
2008-04-15
We propose a new sequential stochastic simulation approach for black and white images in which we focus on the accurate reproduction of the small scale geometry. Our approach aims at reproducing correctly the connectivity properties and the geometry of clusters which are small with respect to a given length scale called block size. Our method is based on the analysis of statistical relationships between adjacent square pieces of image called blocks. We estimate the transition probabilities between adjacent blocks of pixels in a training image. The simulations are constructed by juxtaposing one by one square blocks of pixels, hence the term patchwork simulations. We compare the performance of patchwork simulations with Strebelle's multipoint simulation algorithm on several types of images of increasing complexity. For images composed of clusters which are small with respect to the block size (e.g. squares, discs and sticks), our patchwork approach produces better results than Strebelle's method. The most noticeable improvement is that the cluster geometry is usually reproduced accurately. The accuracy of the patchwork approach is limited primarily by the block size. Clusters which are significantly larger than the block size are usually not reproduced accurately. As an example, we applied this approach to the analysis of a co-continuous polymer blend morphology as derived from an electron microscope micrograph.
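The core statistical object of the patchwork approach, the transition counts between adjacent blocks of a binary training image, can be sketched as follows; the non-overlapping horizontal scan is a simplification for illustration.

```python
import numpy as np
from collections import defaultdict

def block_transitions(image, b):
    """Count right-neighbour transitions between b x b blocks of a binary
    training image. Normalizing each row of counts gives the transition
    probabilities used to juxtapose blocks in a patchwork simulation."""
    counts = defaultdict(lambda: defaultdict(int))
    h, w = image.shape
    for i in range(0, h - b + 1, b):
        for j in range(0, w - 2 * b + 1, b):
            left = image[i:i + b, j:j + b].tobytes()    # block as hashable key
            right = image[i:i + b, j + b:j + 2 * b].tobytes()
            counts[left][right] += 1
    return counts

training = np.zeros((8, 8), dtype=np.uint8)
training[2:6, 2:6] = 1                  # one small square cluster
counts = block_transitions(training, 2)
```

A simulation then draws each new block from the conditional distribution given its already-placed neighbour, which is why clusters larger than the block size are not reproduced reliably.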
A stochastic simulator of a blood product donation environment with demand spikes and supply shocks.
An, Ming-Wen; Reich, Nicholas G; Crawford, Stephen O; Brookmeyer, Ron; Louis, Thomas A; Nelson, Kenrad E
2011-01-01
The availability of an adequate blood supply is a critical public health need. An influenza epidemic or another crisis affecting population mobility could create a critical donor shortage, which could profoundly impact blood availability. We developed a simulation model for the blood supply environment in the United States to assess the likely impact on blood availability of factors such as an epidemic. We developed a simulator of a multi-state model with transitions among states. Weekly numbers of blood units donated and needed were generated by negative binomial stochastic processes. The simulator allows exploration of the blood system under certain conditions of supply and demand rates, and can be used for planning purposes to prepare for sudden changes in the public's health. The simulator incorporates three donor groups (first-time, sporadic, and regular), immigration and emigration, deferral period, and adjustment factors for recruitment. We illustrate possible uses of the simulator by specifying input values for an 8-week flu epidemic, resulting in a moderate supply shock and demand spike (for example, from postponed elective surgeries), and different recruitment strategies. The input values are based in part on data from a regional blood center of the American Red Cross during 1996-2005. Our results from these scenarios suggest that the key to alleviating deficit effects of a system shock may be appropriate timing and duration of recruitment efforts, in turn depending critically on anticipating shocks and rapidly implementing recruitment efforts.
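The weekly negative binomial supply and demand processes can be sketched as below; the means and dispersion are illustrative placeholders, not the American Red Cross estimates used in the paper.

```python
import numpy as np

def simulate_inventory(n_weeks, mean_supply, mean_demand, dispersion, seed=0):
    """Weekly donated and needed units drawn from negative binomial
    processes; returns the running net inventory (negative values signal
    a deficit that recruitment efforts would need to cover)."""
    rng = np.random.default_rng(seed)

    def draw(mean):
        # numpy parameterizes NB by (n, p); convert from (mean, dispersion)
        p = dispersion / (dispersion + mean)
        return int(rng.negative_binomial(dispersion, p))

    inventory, trace = 0, []
    for _ in range(n_weeks):
        inventory += draw(mean_supply) - draw(mean_demand)
        trace.append(inventory)
    return trace

trace = simulate_inventory(52, mean_supply=1000, mean_demand=980, dispersion=50)
```

An epidemic scenario such as the 8-week one in the paper would be modeled by temporarily lowering mean_supply and raising mean_demand over those weeks, then varying the timing of a recruitment boost.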
Lee, Taesam
2018-05-01
Multisite stochastic simulations of daily precipitation have been widely employed in hydrologic analyses for climate change assessment and agricultural model inputs. Recently, a copula model with a gamma marginal distribution has become one of the common approaches for simulating precipitation at multiple sites. Here, we tested the correlation structure of the copula modeling. The results indicate that there is a significant underestimation of the correlation in the simulated data compared to the observed data. Therefore, we proposed an indirect method for estimating the cross-correlations when simulating precipitation at multiple stations, using the full relationship between the correlation of the observed data and that of the normally transformed data. Although this indirect method offers certain improvements in preserving the cross-correlations between sites in the original domain, the method was not reliable in application. Therefore, we further improved a simulation-based method (SBM) that was developed to model multisite precipitation occurrence. The SBM preserved the cross-correlations of the original domain well, providing around 0.2 better cross-correlation than the direct method and around 0.1 better than the indirect method. The three models were applied to the stations in the Nakdong River basin, and the SBM was the best alternative for reproducing the historical cross-correlation. The direct method significantly underestimates the correlations among the observed data, and the indirect method appeared to be unreliable.
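The underestimation has a simple illustration: a correlation imposed in the normal (transformed) domain shrinks once the variables are mapped to skewed marginals, so a direct fit misses the target in the original domain. Lognormal marginals are used below as a stand-in for the gamma case because the attenuation is then available in closed form, corr = (e^ρ − 1)/(e − 1).

```python
import numpy as np

rng = np.random.default_rng(42)
rho, n = 0.7, 20000

# correlated standard normals (the "transformed" domain)
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

# skewed marginals via exponentiation (lognormal stand-in for gamma)
x1, x2 = np.exp(z1), np.exp(z2)

r_normal = np.corrcoef(z1, z2)[0, 1]   # close to the imposed rho = 0.7
r_skewed = np.corrcoef(x1, x2)[0, 1]   # attenuated; theory gives ~0.59
```

An indirect method inverts this relationship: it inflates the normal-domain correlation so that the back-transformed field reproduces the observed original-domain correlation.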
STOCHASTIC SIMULATION FOR BUFFELGRASS (Cenchrus ciliaris L.) PASTURES IN MARIN, N. L., MEXICO
Directory of Open Access Journals (Sweden)
José Romualdo Martínez-López
2014-04-01
A stochastic simulation model was constructed to determine the response of net primary production of buffelgrass (Cenchrus ciliaris L.) and its dry matter intake by cattle in Marín, NL, México. Buffelgrass is very important for the extensive livestock industry in arid and semiarid areas of northeastern Mexico. The objective of this experiment was to evaluate the behavior of the model by comparing its results with those reported in the literature. The model simulates the monthly production of dry matter of green grass, as well as its conversion to senescent and dry grass and eventually to mulch, depending on precipitation and temperature. The model also simulates consumption of green and dry grass by cattle. The stocking rate used in the model simulation was 2 hectares per animal unit. Annual production ranged from 4.5 to 10.2 t of dry matter per hectare with annual rainfall of 300 to 704 mm, respectively. The total annual intake required per animal unit was estimated at 3.6 t. Simulated net primary production coincides with reports in the literature, so the model was evaluated successfully.
Mostert, P F; Bokkers, E A M; van Middelaar, C E; Hogeveen, H; de Boer, I J M
2018-01-01
The objective of this study was to estimate the economic impact of subclinical ketosis (SCK) in dairy cows. This metabolic disorder occurs in the period around calving and is associated with an increased risk of other diseases; SCK therefore affects farm productivity and profitability. Estimating the economic impact of SCK may make farmers more aware of this problem and can improve their decision-making regarding interventions to reduce SCK. We developed a dynamic stochastic simulation model that enables estimating the economic impact of SCK and related diseases (i.e. mastitis, metritis, displaced abomasum, lameness and clinical ketosis) occurring during the first 30 days after calving. This model, which was applied to a typical Dutch dairy herd, groups cows according to their parity (1 to 5+) and simulates the dynamics of SCK and related diseases, and milk production per cow, during one lactation. The economic impact of SCK and related diseases resulted from reduced milk production, discarded milk, treatment costs, costs of a prolonged calving interval and removal (culling or death) of cows. The total costs of SCK were €130 per case per year, ranging between €39 and €348 (5th to 95th percentiles). The total costs of SCK per case per year, moreover, increased from €83 in parity 1 to €175 in parity 3. Most cows with SCK, however, had SCK only (61%), and their costs were €58 per case per year. Total costs of SCK per case per year resulted 36% from a prolonged calving interval, 24% from reduced milk production, 19% from treatment, 14% from discarded milk and 6% from removal. Results of the sensitivity analysis showed that the disease incidence, removal risk, relations of SCK with other diseases and milk prices resulted in high variation in the costs of SCK. The costs of SCK, therefore, might differ per farm because of farm-specific circumstances. Improving data collection on the incidence of SCK and related diseases, and on consequences of
A one-dimensional stochastic approach to the study of cyclic voltammetry with adsorption effects
Energy Technology Data Exchange (ETDEWEB)
Samin, Adib J. [The Department of Mechanical and Aerospace Engineering, The Ohio State University, 201 W 19th Avenue, Columbus, Ohio 43210 (United States)
2016-05-15
In this study, a one-dimensional stochastic model based on the random walk approach is used to simulate cyclic voltammetry. The model takes into account mass transport, kinetics of the redox reactions, adsorption effects and changes in the morphology of the electrode. The model is shown to display the expected behavior. Furthermore, the model shows consistent qualitative agreement with a finite difference solution. This approach allows for an understanding of phenomena on a microscopic level and may be useful for analyzing qualitative features observed in experimentally recorded signals.
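A compact version of such a random-walk voltammetry model can be sketched as below. The sigmoidal potential-dependent reaction probability is a stand-in for Butler-Volmer kinetics, and the lattice size, walker count and sweep length are all illustrative assumptions.

```python
import numpy as np

def cv_random_walk(n_walkers, n_sites, n_steps, e0=0.0, e_amp=1.0, alpha=10.0, seed=0):
    """1D random-walk sketch of cyclic voltammetry: walkers hop on a lattice;
    a walker at site 0 (the electrode) reacts with a sigmoidal,
    potential-dependent probability. Returns the current (reactions per
    step) along a triangular potential sweep."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, n_sites, n_walkers)
    alive = np.ones(n_walkers, dtype=bool)
    current, half = [], n_steps // 2
    for step in range(n_steps):
        # triangular potential: sweep up, then back down
        frac = step / half if step < half else 2.0 - step / half
        e = e0 - e_amp + 2.0 * e_amp * frac
        p_react = 1.0 / (1.0 + np.exp(-alpha * (e - e0)))
        hop = rng.choice([-1, 1], n_walkers)
        pos = np.clip(pos + np.where(alive, hop, 0), 0, n_sites - 1)
        react = alive & (pos == 0) & (rng.random(n_walkers) < p_react)
        alive &= ~react
        current.append(int(react.sum()))
    return np.array(current)

i_t = cv_random_walk(5000, 50, 2000)
```

Mass transport enters through the hopping, kinetics through p_react; adsorption and electrode morphology changes, as in the paper, would add site-occupancy rules at the electrode.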
Experiments and stochastic simulations of lignite coal during pyrolysis and gasification
International Nuclear Information System (INIS)
Ahmed, I.I.; Gupta, A.K.
2013-01-01
Highlights: ► Lignite pyrolysis and gasification have been conducted in a semi-batch reactor. ► The objective is to understand the mechanism of syngas evolution during pyrolysis. ► Stochastic simulations of lignite pyrolysis were conducted using the Gillespie algorithm. ► The first-order, single-step mechanism failed to fit the cumulative yield of hydrogen. ► Hydrogen evolves via pyrolysis of gaseous hydrocarbons following bridge scission. -- Abstract: Lignite pyrolysis and gasification have been conducted in a semi-batch reactor at reactor temperatures of 800–950 °C in 50 °C intervals. CO2 has been used as the gasifying agent for the gasification experiments. The objective of this investigation is to understand the mechanism of syngas evolution during pyrolysis and to unravel the effect of CO2 on the pyrolysis mechanism. Stochastic simulations of lignite pyrolysis have been conducted using the Gillespie algorithm. Two reaction mechanisms have been used in the simulations: the first-order, single-step mechanism and the FLASHCHAIN mechanism. The first-order, single-step mechanism was successful in fitting the cumulative yields of CO2, CO, CH4 and other hydrocarbons (CnHm), but failed to fit the cumulative yield of hydrogen, which suggests a more complex mechanism for hydrogen evolution. The evolution of the CO2, CO, CH4, CnHm and H2 flow rates has been monitored. The only effect of CO2 on the pyrolysis mechanism, for the experiments described here, is promotion of the reverse water-gas shift reaction. Methane evolution extended for a slightly longer time than that of the other hydrocarbons, and hydrogen evolution extended for a slightly longer time than methane. This indicates the evolution of hydrogen via further pyrolysis of aliphatic hydrocarbons, and it is suggested that this step occurs in series after aliphatic hydrocarbon evolution by bridge scission.
International Nuclear Information System (INIS)
Petrus Zacharias; Abdul Jami
2010-01-01
Research conducted by Batan's researchers has resulted in a number of competences that can be used to produce goods and services for application in the industrial sector. However, there are difficulties in conveying and utilizing the R and D products in the industrial sector. Evaluation results show that each research result should be accompanied by a techno-economic analysis to establish the feasibility of a product for industry. Further analysis of the multi-product concept, in which one business can produce many main products, will be done. For this purpose, a software package simulating techno-economic feasibility using deterministic and stochastic data (Monte Carlo method) has been developed for multi-product cases, including side products. The programming language used is Visual Basic Studio .NET 2003, with SQL as the database processing software. The software applies a sensitivity test to identify which investment criteria are sensitive for the prospective businesses. A performance test (trial test) has been conducted, and the results are in line with the design requirements: investment feasibility and sensitivity are displayed both deterministically and stochastically, and the results can be interpreted very well to support business decisions. Validation has been performed using Microsoft Excel (for a single product). The results of the trial test and validation show that this package meets the demands and is ready for use. (author)
Energy Technology Data Exchange (ETDEWEB)
Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-08-15
The chord length sampling method in Monte Carlo simulations is used to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, the local packing fraction results show that the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
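The core of chord length sampling is drawing free-path segments directly from the chord-length statistics of the dispersed spheres instead of building an explicit realization. A minimal sketch, assuming Markovian (exponential) matrix chords with the standard infinite-medium mean 4r(1−f)/(3f) and the l/(2r²) chord density inside a sphere; the radius and packing fraction are illustrative:

```python
import math
import random

def sample_matrix_chord(radius, packing_fraction, rng):
    # Mean matrix chord length between randomly dispersed spheres
    # (standard CLS relation; infinite-medium assumption).
    mean = 4.0 * radius * (1.0 - packing_fraction) / (3.0 * packing_fraction)
    return -mean * math.log(1.0 - rng.random())

def sample_sphere_chord(radius, rng):
    # Chord-length CDF through a sphere is (l / 2r)^2, so invert it.
    return 2.0 * radius * math.sqrt(rng.random())

rng = random.Random(1)
matrix_chords = [sample_matrix_chord(0.1, 0.05, rng) for _ in range(20000)]
sphere_chords = [sample_sphere_chord(0.1, rng) for _ in range(20000)]
```

The boundary-effect correction proposed by the authors addresses what this infinite-medium sketch ignores: near the boundary of a finite medium, the exponential matrix-chord assumption over-samples long chords.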
International Nuclear Information System (INIS)
Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung; Noh, Jae Man
2015-01-01
The chord length sampling method in Monte Carlo simulations is used to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, the local packing fraction results show that the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
Energy Technology Data Exchange (ETDEWEB)
Guerrier, C. [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Holcman, D., E-mail: david.holcman@ens.fr [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Mathematical Institute, Oxford OX2 6GG, Newton Institute (United Kingdom)
2017-07-01
The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space, searching for small binding targets such as buffers or active sites. Bridging the small and large spatial scales is achieved through rare events in which Brownian particles find small targets, characterized by a long-time distribution. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets that trigger vesicular release.
Metaheuristic simulation optimisation for the stochastic multi-retailer supply chain
Omar, Marina; Mustaffa, Noorfa Haszlinna H.; Othman, Siti Norsyahida
2013-04-01
Supply Chain Management (SCM) is an important activity in all producing facilities and in many organizations, enabling vendors, manufacturers and suppliers to interact gainfully and plan the flow of goods and services optimally. Simulation optimization is now widely used in research to find the best solutions for decision-making processes in SCM, which generally face considerable complexity, large sources of uncertainty and various decision factors. Metaheuristics are the most popular simulation optimization approach; however, very little research has applied them to optimizing simulation models of supply chains. This paper therefore evaluates the performance of a metaheuristic method for stochastic supply chains in determining the best flexible inventory replenishment parameters that minimize the total operating cost. The simulation optimization model is based on the Bees Algorithm (BA), which has been widely applied in engineering applications such as training neural networks for pattern recognition. BA is a recent member of the metaheuristics family that models the natural food-foraging behavior of honey bees. Honey bees use several mechanisms, such as the waggle dance, to locate food sources optimally and to search for new ones, which makes them a good candidate for developing new algorithms for solving optimization problems. The model considers an outbound centralised distribution system consisting of one supplier and three identical retailers, assumed to be independent and identically distributed with unlimited supply capacity at the supplier.
International Nuclear Information System (INIS)
Guerrier, C.; Holcman, D.
2017-01-01
The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space, searching for small binding targets such as buffers or active sites. Bridging the small and large spatial scales is achieved through rare events in which Brownian particles find small targets, characterized by a long-time distribution. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets that trigger vesicular release.
International Nuclear Information System (INIS)
Schwen, E M; Mazilu, I; Mazilu, D A
2015-01-01
We introduce a stochastic cooperative model for particle deposition and evaporation relevant to ionic self-assembly of nanoparticles with applications in surface fabrication and nanomedicine, and present a method for mapping our model onto the Ising model. The mapping process allows us to use the established results for the Ising model to describe the steady-state properties of our system. After completing the mapping process, we investigate the time dependence of particle density using the mean field approximation. We complement this theoretical analysis with Monte Carlo simulations that support our model. These techniques, which can be used separately or in combination, are useful as pedagogical tools because they are tractable mathematically and they apply equally well to many other physical systems with nearest-neighbour interactions including voter and epidemic models. (paper)
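The deposition/evaporation dynamics can be illustrated with a kinetic Monte Carlo loop on a ring lattice, where occupied/empty sites play the role of up/down Ising spins. The rates below are illustrative single-site rates, not the paper's cooperative ionic rates:

```python
import random

def simulate_deposition(L, steps, k_dep, k_evap, rng):
    """Kinetic Monte Carlo for deposition/evaporation on a ring of L sites.

    Occupied sites map to spin +1, empty to -1 in the Ising analogy; the
    rates here are illustrative, not the paper's cooperative rates.
    Returns the final particle density.
    """
    lattice = [0] * L
    for _ in range(steps):
        i = rng.randrange(L)
        if lattice[i] == 0 and rng.random() < k_dep:
            lattice[i] = 1                     # deposition attempt succeeds
        elif lattice[i] == 1 and rng.random() < k_evap:
            lattice[i] = 0                     # evaporation attempt succeeds
    return sum(lattice) / L

rng = random.Random(7)
density = simulate_deposition(200, 200000, 0.6, 0.2, rng)
```

With these non-cooperative rates the steady-state density is k_dep/(k_dep + k_evap); it is the cooperative, nearest-neighbour-dependent version of the rates that maps onto a non-trivial Ising Hamiltonian.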
Cambridge Rocketry Simulator – A Stochastic Six-Degrees-of-Freedom Rocket Flight Simulator
Eerland, Willem J.; Box, Simon; Sóbester, András
2017-01-01
The Cambridge Rocketry Simulator can be used to simulate the flight of unguided rockets for both design and operational applications. The software consists of three parts: The first part is a GUI that enables the user to design a rocket. The second part is a verified and peer-reviewed physics model that simulates the rocket flight. This includes a Monte Carlo wrapper to model the uncertainty in the rocket’s dynamics and the atmospheric conditions. The third part generates visualizations of the resulting trajectories.
Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations
International Nuclear Information System (INIS)
Ehlert, Kurt; Loewe, Laurence
2014-01-01
To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected “hubs” such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present “Lazy Updating,” an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method, and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use cases, with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
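The bookkeeping that Lazy Updating saves can be counted directly. In this toy sketch a hub (think ATP) is drained one molecule per reaction event; a naive SSA recomputes hub-dependent propensities after every event, whereas Lazy Updating waits until the hub has drifted past a relative threshold. The trajectory and threshold are hypothetical:

```python
def lazy_hub_updates(trajectory, threshold):
    """Count propensity recomputations under Lazy Updating.

    `trajectory` holds the hub copy number after each reaction event; all
    hub-dependent propensities are recomputed only when the hub count drifts
    more than `threshold` (relative) from its last synchronised value.
    Illustrative sketch, not the paper's implementation.
    """
    updates = 0
    last_synced = trajectory[0]
    for n in trajectory[1:]:
        if last_synced == 0 or abs(n - last_synced) / last_synced > threshold:
            updates += 1
            last_synced = n
    return updates

hub = list(range(10000, 9000, -1))   # hub drained by one molecule per event
eager = len(hub) - 1                 # naive SSA: update after every event
lazy = lazy_hub_updates(hub, 0.01)   # 1% relative drift threshold
```

Here a 1% threshold replaces 999 propensity updates with about ten, which is the source of the reported speedups when most computing time is spent on hub updates.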
Cambridge Rocketry Simulator – A Stochastic Six-Degrees-of-Freedom Rocket Flight Simulator
Directory of Open Access Journals (Sweden)
Willem J. Eerland
2017-02-01
The Cambridge Rocketry Simulator can be used to simulate the flight of unguided rockets for both design and operational applications. The software consists of three parts: The first part is a GUI that enables the user to design a rocket. The second part is a verified and peer-reviewed physics model that simulates the rocket flight. This includes a Monte Carlo wrapper to model the uncertainty in the rocket’s dynamics and the atmospheric conditions. The third part generates visualizations of the resulting trajectories, including nominal performance and uncertainty analysis, e.g. a splash-down region with confidence bounds. The project is available on SourceForge, and is written in Java (GUI), C++ (simulation core), and Python (visualization). While all parts can be executed from the GUI, the three components share information via XML, accommodating modifications and re-use of individual components.
Schmandt, Nicolaus T; Galán, Roberto F
2012-09-14
Markov chains provide realistic models of numerous stochastic processes in nature. We demonstrate that in any Markov chain, the change in occupation number in state A is correlated to the change in occupation number in state B if and only if A and B are directly connected. This implies that if we are only interested in state A, fluctuations in B may be replaced with their mean if state B is not directly connected to A, which shortens computing time considerably. We show the accuracy and efficacy of our approximation theoretically and in simulations of stochastic ion-channel gating in neurons.
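The central claim above, that occupation-number changes correlate only between directly connected states, can be checked numerically on a toy three-state chain 0 ↔ 1 ↔ 2 with no direct 0–2 edge (the transition probabilities are illustrative):

```python
import random

def simulate_chain(steps, rng):
    """Single-walker Markov chain on states 0 <-> 1 <-> 2 (no 0-2 edge).

    Records the per-step change in each state's occupation number so the
    correlations between them can be inspected; rates are illustrative.
    """
    deltas = [[], [], []]
    state = 0
    for _ in range(steps):
        old = state
        r = rng.random()
        if state == 0:
            state = 1 if r < 0.5 else 0
        elif state == 1:
            state = 0 if r < 0.25 else (2 if r < 0.5 else 1)
        else:
            state = 1 if r < 0.5 else 2
        for s in range(3):
            deltas[s].append((state == s) - (old == s))
    return deltas

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

rng = random.Random(3)
d = simulate_chain(100000, rng)
c_adjacent = corr(d[0], d[1])    # states 0 and 1 are directly connected
c_distant = corr(d[0], d[2])     # states 0 and 2 are not
```

Because a single event never changes the occupations of states 0 and 2 together, their changes are uncorrelated, so fluctuations in state 2 can safely be replaced by their mean when only state 0 is of interest.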
Effluent trading in river systems through stochastic decision-making process: a case study.
Zolfagharipoor, Mohammad Amin; Ahmadi, Azadeh
2017-09-01
The objective of this paper is to provide an efficient framework for effluent trading in river systems. The proposed framework consists of two, pessimistic and optimistic, decision-making models to increase the executability of river water quality trading programs. The models used for this purpose are (1) stochastic fallback bargaining (SFB) to reach an agreement among wastewater dischargers and (2) stochastic multi-criteria decision-making (SMCDM) to determine the optimal treatment strategy. The Monte-Carlo simulation method is used to incorporate uncertainty into the analysis. This uncertainty arises from the stochastic nature of, and the errors in, the calculation of wastewater treatment costs. The results of a river water quality simulation model are used as the inputs of the models. The proposed models are used in a case study on the Zarjoub River in northern Iran to determine the best solution for the pollution load allocation. The best treatment alternatives selected by each model are imported, as the initial pollution discharge permits, into an optimization model developed for trading of pollution discharge permits among pollutant sources. The results show that the SFB-based water pollution trading approach reduces the costs by US$ 14,834 while providing a relative consensus among pollutant sources. Meanwhile, the SMCDM-based water pollution trading approach reduces the costs by US$ 218,852, but it is less acceptable to pollutant sources. Therefore, it appears that giving due attention to stability, or in other words the acceptability of pollution trading programs for all pollutant sources, is an essential element of their success.
XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations
Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.
2013-01-01
XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code.
Program summary
Program title: XMDS2
Catalogue identifier: AENK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 2
No. of lines in distributed program, including test data, etc.: 872490
No. of bytes in distributed program, including test data, etc.: 45522370
Distribution format: tar.gz
Programming language: Python and C++
Computer: Any computer with a Unix-like system, a C++ compiler and Python
Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux
RAM: Problem dependent (roughly 50 bytes per grid point)
Classification: 4.3, 6.5
External routines: The external libraries required are problem-dependent. Uses FFTW3 Fourier transforms (used only for FFT-based spectral methods), dSFMT random number generation (used only for stochastic problems), MPI message-passing interface (used only for distributed problems), HDF5, GNU Scientific Library (used only for Bessel-based spectral methods) and a BLAS implementation (used only for non-FFT-based spectral methods)
Nature of problem: General coupled initial-value stochastic partial differential equations
Solution method: Spectral method
Directory of Open Access Journals (Sweden)
Youness El Ansari
2017-01-01
We investigate the various conditions that control the extinction and stability of a nonlinear mathematical spread model with stochastic perturbations. This model describes the spread of viruses into an infected computer network which is powered by a system of antivirus software. The system is analyzed by using the stability theory of stochastic differential equations and computer simulations. First, we study the global stability of the virus-free equilibrium state and the virus-epidemic equilibrium state. Furthermore, we use the Itô formula and other theorems of stochastic differential equations to discuss the extinction and the stationary distribution of our system. The analysis gives a sufficient condition for the infection to become extinct (i.e., the number of viruses tends exponentially to zero). The ergodicity of the solution and the stationary distribution can be obtained if the basic reproduction number Rp is greater than 1 and the intensities of the stochastic fluctuations are small enough. Numerical simulations are carried out to illustrate the theoretical results.
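The extinction/persistence dichotomy described above can be explored with a standard Euler–Maruyama integration of a generic stochastic SIS-type model for the infected fraction, dI = (βI(1−I) − γI) dt + σI dW. This is a hedged illustration of the approach, not the paper's exact antivirus system, and all parameter values are hypothetical:

```python
import math
import random

def euler_maruyama_sis(beta, gamma, sigma, i0, t_end, dt, rng):
    """Euler-Maruyama integration of a stochastic SIS-type infected fraction:
    dI = (beta*I*(1 - I) - gamma*I) dt + sigma*I dW.

    A generic illustration of the stochastic-perturbation approach, not the
    paper's exact system; parameters are hypothetical.
    """
    i = i0
    for _ in range(int(t_end / dt)):
        dw = rng.gauss(0.0, math.sqrt(dt))        # Brownian increment
        i += (beta * i * (1.0 - i) - gamma * i) * dt + sigma * i * dw
        i = min(max(i, 0.0), 1.0)                 # keep the fraction in [0, 1]
    return i

rng = random.Random(0)
# Extinction regime: effective reproduction number below 1
i_ext = euler_maruyama_sis(0.3, 0.6, 0.05, 0.2, 200.0, 0.01, rng)
# Endemic regime: reproduction number above 1, weak noise
i_end = euler_maruyama_sis(0.9, 0.3, 0.05, 0.2, 200.0, 0.01, rng)
```

With β < γ the infection dies out exponentially; with β > γ and weak noise the trajectory fluctuates around the endemic level 1 − γ/β, mirroring the extinction and ergodicity conditions stated in the abstract.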
Warnke, Tom; Reinhardt, Oliver; Klabunde, Anna; Willekens, Frans; Uhrmacher, Adelinde M
2017-10-01
Individuals' decision processes play a central role in understanding modern migration phenomena and other demographic processes. Their integration into agent-based computational demography depends largely on suitable support by a modelling language. We are developing the Modelling Language for Linked Lives (ML3) to describe the diverse decision processes of linked lives succinctly in continuous time. The context of individuals is modelled by networks the individual is part of, such as family ties and other social networks. Central concepts, such as behaviour conditional on agent attributes, age-dependent behaviour, and stochastic waiting times, are tightly integrated in the language. Thereby, alternative decisions are modelled by concurrent processes that compete by stochastic race. Using a migration model, we demonstrate how this allows for compact description of complex decisions, here based on the Theory of Planned Behaviour. We describe the challenges for the simulation algorithm posed by stochastic race between multiple concurrent complex decisions.
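A stochastic race between concurrent alternatives is simply a competition of exponential clocks: each candidate decision fires after an exponentially distributed waiting time, and the earliest one wins. A minimal sketch with hypothetical decision names and rates (not ML3 syntax):

```python
import math
import random

def stochastic_race(rates, rng):
    """Pick the winning transition among concurrent alternatives.

    Each alternative draws an exponential waiting time with its own rate;
    the earliest firing wins, which is how competing decision processes can
    be raced in continuous time. Names and rates are illustrative.
    """
    samples = {name: -math.log(1.0 - rng.random()) / rate
               for name, rate in rates.items()}
    winner = min(samples, key=samples.get)
    return winner, samples[winner]

rng = random.Random(11)
rates = {"migrate": 0.2, "marry": 0.5, "stay": 1.3}
wins = {}
for _ in range(30000):
    winner, _ = stochastic_race(rates, rng)
    wins[winner] = wins.get(winner, 0) + 1
```

Each alternative wins with probability proportional to its rate (here "stay" wins 1.3/2.0 = 65% of races), which is how concurrent processes resolve competing decisions in continuous time.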
Fiore, Andrew M.; Swan, James W.
2018-01-01
equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue is used for benchmarking and made freely available in the supplementary material (ftp://ftp.aip.org/epaps/journ_chem_phys/E-JCPSA6-148-012805)
Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo
2016-04-01
version of the model, the default and a new stochastic version, in which the value of the perturbation field at launching level is not constant and uniform, but extracted at each time-step and grid-point from a given PDF. With this approach we are trying to add further variability to the effects given by the deterministic NOGW parameterization: the impact on the simulated climate will be assessed focusing on the Quasi-Biennial Oscillation of the equatorial stratosphere (known to be driven also by gravity waves) and on the variability of the mid-to-high latitudes atmosphere. The different characteristics of the circulation will be compared with recent reanalysis products in order to determine the advantages of the stochastic approach over the traditional deterministic scheme.
An efficient algorithm for the stochastic simulation of the hybridization of DNA to microarrays
Directory of Open Access Journals (Sweden)
Laurenzi Ian J
2009-12-01
Abstract Background: Although oligonucleotide microarray technology is ubiquitous in genomic research, reproducibility and standardization of expression measurements still concern many researchers. Cross-hybridization between microarray probes and non-target ssDNA has been implicated as a primary factor in sensitivity and selectivity loss. Since hybridization is a chemical process, it may be modeled at a population level using a combination of material balance equations and thermodynamics. However, the hybridization reaction network may be exceptionally large for commercial arrays, which often possess at least one reporter per transcript. Quantification of the kinetics and equilibrium of exceptionally large chemical systems of this type is numerically infeasible with customary approaches. Results: In this paper, we present a robust and computationally efficient algorithm for the simulation of hybridization processes underlying microarray assays. Our method may be utilized to identify the extent to which nucleic acid targets (e.g. cDNA) will cross-hybridize with probes, and by extension, characterize probe robustness using the information specified by MAGE-TAB. Using this algorithm, we characterize cross-hybridization in a modified commercial microarray assay. Conclusions: By integrating stochastic simulation with thermodynamic prediction tools for DNA hybridization, one may robustly and rapidly characterize the selectivity of a proposed microarray design at the probe and "system" levels. Our code is available at http://www.laurenzi.net.
DEFF Research Database (Denmark)
Sørensen, J.T.; Enevoldsen, Carsten; Houe, H.
1995-01-01
A dynamic, stochastic model simulating the technical and economic consequences of bovine virus diarrhoea virus (BVDV) infections for a dairy cattle herd, for use on a personal computer, was developed. The production and state changes of the herd were simulated by state changes of the individual cows and heifers. All discrete events at the cow level were triggered stochastically. Each cow and heifer was characterized by state variables such as stage of lactation, parity, oestrous status, decision for culling, milk production potential, and immune status for BVDV. The model was controlled by 170 decision variables describing biologic and management variables, including 21 decision variables describing the effect of BVDV infection on the production of the individual animal. Two markedly different scenarios were simulated to demonstrate the behaviour of the developed model and the potentials of the applied...
Research on stochastic power-flow study methods. Final report
Energy Technology Data Exchange (ETDEWEB)
Heydt, G. T. [ed.]
1981-01-01
A general algorithm to determine the effects of uncertainty in bus load and generation on the output of conventional power flow analysis is presented. The use of statistical moments is presented and developed as a means for representing the stochastic process. Statistical moments are used to describe the uncertainties, and facilitate the calculation of single and multivariate probability density functions of input and output variables. The transformation of the uncertainty through the power flow equations is made by the expansion of the node equations in a multivariate Taylor series about an expected operating point. The series is truncated after the second-order terms. Since the power flow equations are nonlinear, the expected values of output quantities are in general not the solution to the conventional load flow problem using expected values of input quantities. The second-order transformation offers a correction vector and allows the consideration of larger uncertainties, which have caused significant error in current linear transformation algorithms. Voltage-controlled buses are included, with consideration of upper and lower limits. The finite reactive power available at generation sites, and fixed ranges of transformer tap movement, may have a significant effect on voltage and line power flow statistics. A method is given which considers limitation constraints in the evaluation of all output quantities. The bus voltages, line power flows, transformer taps, and generator reactive power requirements are described by their statistical moments. Their values are expressed in terms of the probability that they are above or below specified limits, and their expected values given that they do fall outside the limits. Thus the algorithm supplies information about the severity of overload as well as the probability of occurrence. An example is given for an eleven-bus system, evaluating each quantity separately. The results are compared with Monte Carlo simulation.
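The key point, that the expected output of a nonlinear mapping is not the mapping of the expected input, is captured by the second-order Taylor correction ½f″(μ)σ². The sketch below compares that correction against Monte Carlo for a toy quadratic line-loss function; the function and moments are hypothetical, not a real power-flow equation:

```python
import random

def second_order_mean(f, d2f, mu, sigma):
    """Second-order Taylor estimate of E[f(X)] for X with mean mu, std sigma.

    Illustrates why expected outputs of a nonlinear mapping are not the
    outputs at expected inputs: the 0.5 * f''(mu) * sigma^2 correction.
    """
    return f(mu) + 0.5 * d2f(mu) * sigma ** 2

# Toy nonlinear "power flow" quantity: line loss proportional to P^2
f = lambda p: 0.01 * p ** 2
d2f = lambda p: 0.02

mu, sigma = 100.0, 15.0
taylor = second_order_mean(f, d2f, mu, sigma)

# Monte Carlo check with normally distributed injections
rng = random.Random(5)
mc = sum(f(rng.gauss(mu, sigma)) for _ in range(200000)) / 200000
```

For this quadratic the second-order estimate is exact (E[f] = 102.25 versus f(μ) = 100), so the Taylor and Monte Carlo results agree to within sampling error.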
International Nuclear Information System (INIS)
Cruz, Roberto de la; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás
2017-01-01
of front, which cannot be accounted for by the coarse-grained model. Such fluctuations have non-trivial effects on the wave velocity. Beyond the development of a new hybrid method, we thus conclude that birth-rate fluctuations are central to a quantitatively accurate description of invasive phenomena such as tumour growth. - Highlights: • A hybrid method for stochastic multi-scale models of cell populations that extends existing hybrid methods for reaction–diffusion systems. • Our analysis unveils non-trivial macroscopic effects triggered by noise at the level of structuring variables. • Our hybrid method hugely speeds up age-structured SSA simulations while preserving stochastic effects.
Nunno, Giulia
2016-01-01
These Proceedings offer a selection of peer-reviewed research and survey papers by some of the foremost international researchers in the fields of finance, energy, stochastics and risk, who present their latest findings on topical problems. The papers cover the areas of stochastic modeling in energy and financial markets; risk management with environmental factors from a stochastic control perspective; and valuation and hedging of derivatives in markets dominated by renewables, all of which further develop the theory of stochastic analysis and mathematical finance. The papers were presented at the first conference on “Stochastics of Environmental and Financial Economics (SEFE)”, being part of the activity in the SEFE research group of the Centre of Advanced Study (CAS) at the Academy of Sciences in Oslo, Norway during the 2014/2015 academic year.
A higher-order numerical framework for stochastic simulation of chemical reaction systems.
Székely, Tamás; Burrage, Kevin; Erban, Radek; Zygalakis, Konstantinos C
2012-01-01
, to demonstrate the power of stochastic extrapolation. The extrapolation framework can increase the order of convergence of any fixed-step discrete stochastic solver and is very easy to implement; the only condition for its use is knowledge of the appropriate
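The idea of stochastic extrapolation, combining a fixed-step solver at two step sizes to cancel the leading error term, can be seen on the weak error of Euler–Maruyama for geometric Brownian motion. The mean of the Euler–Maruyama iterates after N steps of size h is exactly x0(1 + μh)^N, so no sampling is needed for this illustration (parameters are hypothetical):

```python
import math

def em_mean(mu, x0, t_end, h):
    # Mean of Euler-Maruyama iterates for dX = mu*X dt + sigma*X dW is
    # exactly x0 * (1 + mu*h)^(t_end/h), so the weak error in the mean
    # is O(h) and can be computed without sampling.
    n = round(t_end / h)
    return x0 * (1.0 + mu * h) ** n

mu, x0, T = 0.5, 1.0, 1.0
exact = x0 * math.exp(mu * T)            # true mean of the SDE solution
coarse = em_mean(mu, x0, T, 0.1)
fine = em_mean(mu, x0, T, 0.05)
extrapolated = 2.0 * fine - coarse       # Richardson extrapolation in h
```

The combination 2·A(h/2) − A(h) cancels the O(h) weak-error term, raising the order of convergence by one; this is the mechanism that the extrapolation framework generalizes to arbitrary fixed-step discrete stochastic solvers.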
StochPy: A Comprehensive, User-Friendly Tool for Simulating Stochastic Biological Processes
T.R. Maarleveld (Timo); B.G. Olivier (Brett); F.J. Bruggeman (Frank)
2013-01-01
Single-cell and single-molecule measurements indicate the importance of stochastic phenomena in cell biology. Stochasticity creates spontaneous differences in the copy numbers of key macromolecules and the timing of reaction events between genetically-identical cells. Mathematical models
International Nuclear Information System (INIS)
Haran, O.; Shvarts, D.; Thieberger, R.
1998-01-01
Classical transport of neutral particles in a binary, scattering, stochastic medium is discussed. It is assumed that the cross-sections of the constituent materials and their volume fractions are known. The inner structure of the medium is stochastic, but there exists statistical knowledge about the lump sizes, shapes and arrangement. The transmission through the composite medium depends on the specific heterogeneous realization of the medium. The current research focuses on the averaged transmission through an ensemble of realizations, from which an effective cross-section for the medium can be derived. The problem of one-dimensional transport in stochastic media has been studied extensively [1]. In the one-dimensional description of the problem, particles are transported along a line populated with alternating material segments of random lengths. The current work discusses transport in two-dimensional stochastic media. The phenomenon that is unique to the multi-dimensional description of the problem is obstacle bypassing. Obstacle bypassing tends to reduce the opacity of the medium, thereby reducing its effective cross-section. The importance of this phenomenon depends on the manner in which the obstacles are arranged in the medium. Results of transport simulations in multi-dimensional stochastic media are presented. Effective cross-sections derived from the simulations are compared against those obtained for the one-dimensional problem, and against those obtained from effective multi-dimensional models, which are partially based on a Markovian assumption.
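The one-dimensional ensemble-averaged transmission referred to above can be sketched in a few lines of Monte Carlo: each realization is a line of alternating, purely absorbing material segments with exponentially distributed lengths, and the transmissions are averaged over realizations. The cross-sections and segment lengths below are illustrative:

```python
import math
import random

def transmission_one_realization(slab, sigma, seg_mean, rng):
    """Optical transmission through one realization of a 1-D binary
    stochastic medium: alternating material segments with exponentially
    distributed lengths (Markovian mixing); purely absorbing materials.
    """
    x, mat, tau = 0.0, 0, 0.0
    while x < slab:
        seg = -seg_mean[mat] * math.log(1.0 - rng.random())
        seg = min(seg, slab - x)          # truncate at the slab boundary
        tau += sigma[mat] * seg           # accumulate optical depth
        x += seg
        mat = 1 - mat                     # switch to the other material
    return math.exp(-tau)

rng = random.Random(9)
sigma = (2.0, 0.1)        # cross-sections of the two materials (1/cm)
seg_mean = (0.5, 1.5)     # mean segment lengths (cm)
ensemble = [transmission_one_realization(10.0, sigma, seg_mean, rng)
            for _ in range(5000)]
avg_T = sum(ensemble) / len(ensemble)
# Atomic-mix comparison: volume-fraction-weighted cross-section
f0 = seg_mean[0] / (seg_mean[0] + seg_mean[1])
atomic_mix_T = math.exp(-10.0 * (f0 * sigma[0] + (1 - f0) * sigma[1]))
```

The ensemble average exceeds the atomic-mix estimate because averaging exp(−τ) over realizations (Jensen's inequality) favours low-opacity realizations, the same effect that obstacle bypassing amplifies in higher dimensions.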
Simulation-Based Stochastic Sensitivity Analysis of a Mach 4.5 Mixed-Compression Intake Performance
Kato, H.; Ito, K.
2009-01-01
A sensitivity analysis of a supersonic mixed-compression intake of a variable-cycle turbine-based combined cycle (TBCC) engine is presented. The TBCC engine is designed to power a long-range Mach 4.5 transport capable of antipodal missions studied in the framework of an EU FP6 project, LAPCAT. The nominal intake geometry was designed using the DLR abpi cycle analysis program by taking into account various operating requirements of a typical mission profile. The intake consists of two movable external compression ramps followed by an isolator section with bleed channel. The compressed air is then diffused through a rectangular-to-circular subsonic diffuser. A multi-block Reynolds-averaged Navier-Stokes (RANS) solver with Srinivasan-Tannehill equilibrium air model was used to compute the total pressure recovery and mass capture fraction. While RANS simulation of the nominal intake configuration provides more realistic performance characteristics of the intake than the cycle analysis program, the intake design must also take into account in-flight uncertainties for robust intake performance. In this study, we focus on the effects of the geometric uncertainties on pressure recovery and mass capture fraction, and propose a practical approach to simulation-based sensitivity analysis. The method begins by constructing a light-weight analytical model, a radial-basis function (RBF) network, trained via adaptively sampled RANS simulation results. Using the RBF network as the response surface approximation, stochastic sensitivity analysis is performed using the analysis of variance (ANOVA) technique by Sobol. This approach makes it possible to perform a generalized multi-input-multi-output sensitivity analysis based on high-fidelity RANS simulation. The resulting Sobol's influence indices allow the engineer to identify dominant parameters as well as the degree of interaction among multiple parameters, which can then be fed back into the design cycle.
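The surrogate step, fitting a radial-basis-function network to sampled simulation results, reduces to solving one linear system for the kernel weights. A minimal stdlib-only sketch with a Gaussian kernel and a hypothetical one-dimensional response (not the LAPCAT intake data):

```python
import math

def solve(A, b):
    # Naive Gaussian elimination with partial pivoting (fine for tiny systems)
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            factor = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= factor * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, eps):
    # Interpolating Gaussian RBF network: one centre per training sample
    A = [[math.exp(-(eps * (xi - xj)) ** 2) for xj in xs] for xi in xs]
    return solve(A, ys)

def rbf_eval(xs, w, eps, x):
    return sum(wi * math.exp(-(eps * (x - xi)) ** 2) for wi, xi in zip(w, xs))

# Train on a toy "response vs design parameter" curve (hypothetical data)
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [math.sin(2 * math.pi * x) for x in xs]
w = rbf_fit(xs, ys, 2.0)
```

Once trained, the cheap surrogate can be evaluated thousands of times for the variance-decomposition (Sobol/ANOVA) step without any further RANS runs.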
Simulation-optimization framework for multi-site multi-season hybrid stochastic streamflow modeling
Srivastav, Roshan; Srinivasan, K.; Sudheer, K. P.
2016-11-01
A simulation-optimization (S-O) framework is developed for the hybrid stochastic modeling of multi-site multi-season streamflows. The multi-objective optimization model formulated is the driver and the multi-site, multi-season hybrid matched block bootstrap model (MHMABB) is the simulation engine within this framework. The multi-site multi-season simulation model is an extension of the existing single-site multi-season simulation model. A robust and efficient evolutionary search based technique, namely, the non-dominated sorting genetic algorithm (NSGA-II), is employed as the solution technique for the multi-objective optimization within the S-O framework. The objective functions employed are related to the preservation of the multi-site critical deficit run sum, and the constraints introduced are concerned with the hybrid model parameter space and the preservation of certain statistics (such as inter-annual dependence and/or skewness of aggregated annual flows). The efficacy of the proposed S-O framework is brought out through a case example from the Colorado River basin. The proposed multi-site multi-season model AMHMABB (whose parameters are obtained from the proposed S-O framework) preserves the temporal as well as the spatial statistics of the historical flows. Also, the other multi-site deficit run characteristics, namely, the number of runs, the maximum run length, the mean run sum and the mean run length, are well preserved by the AMHMABB model. Overall, the proposed AMHMABB model is able to show better streamflow modeling performance when compared with the simulation-based SMHMABB model, plausibly due to the significant role played by: (i) the objective functions related to the preservation of multi-site critical deficit run sum; (ii) the huge hybrid model parameter space available for the evolutionary search and (iii) the constraint on the preservation of the inter-annual dependence. Split-sample validation results indicate that the AMHMABB model is
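The annual block-resampling idea underlying block bootstrap streamflow models can be illustrated with a minimal (unmatched) version: whole years are resampled, and drawing the same year index at every site preserves both the seasonal and the cross-site dependence. The synthetic gamma-distributed flows below are placeholders, and the matching and hybridization steps of MHMABB are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic record: 30 years x 12 seasons x 2 sites of seasonal flows.
n_years, n_seasons, n_sites = 30, 12, 2
hist = rng.gamma(shape=2.0, scale=50.0, size=(n_years, n_seasons, n_sites))

def block_bootstrap(hist, n_sim_years, rng):
    """Resample whole years; drawing the SAME year index for every site
    preserves within-year (seasonal) and cross-site dependence exactly."""
    idx = rng.integers(0, hist.shape[0], size=n_sim_years)
    return hist[idx]

sim = block_bootstrap(hist, 1000, rng)

# Seasonal means are reproduced by construction (up to sampling noise).
print(bool(np.allclose(sim.mean(axis=0), hist.mean(axis=0), rtol=0.2)))
```

Matched variants additionally condition the resampled year on the state of the preceding year, which is what restores inter-annual dependence.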
Evaluating Economic Alternatives for Wood Energy Supply Based on Stochastic Simulation
Directory of Open Access Journals (Sweden)
Ulises Flores Hernández
2018-04-01
Productive forests, as a major source of biomass, represent an important prerequisite for the development of a bio-economy. In this respect, assessments of biomass availability, efficiency of forest management, forest operations, and economic feasibility are essential. This is certainly the case for Mexico, a country with an increasing energy demand and a considerable potential for sustainable forest utilization. Hence, this paper focuses on analyzing economic alternatives for the Mexican bioenergy supply based on the costs and revenues of utilizing woody biomass residues. With a regional spatial approach, the harvesting and transportation costs of utilizing selected biomass residues were stochastically calculated using Monte Carlo simulations. For one alternative, a sensitivity analysis based on net future analysis was conducted, varying the most probable estimates of the price and cost parameters by fixed percentages. Based on the results for the northern region, a 10% reduction of the transportation cost would reduce the overall supply cost, resulting in a total revenue of 13.69 USD/m3 and 0.75 USD/m3 for harvesting residues and non-extracted stand residues, respectively. For the central-south region, it is estimated that a contribution of 16.53 USD/m3 from 2013 and a total revenue of 33.00 USD/m3 in 2030 from sawmill residues will improve the value chain. The given approach and outputs provide a basis for decision making regarding forest utilization for energy generation based on economic indicators.
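A minimal Monte Carlo cost-revenue sketch in the spirit of the analysis above; all per-cubic-meter distributions and the 10% transport-cost scenario below are hypothetical illustrations, not the paper's fitted data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical per-m3 figures (illustrative, not the paper's values):
price = rng.normal(55.0, 5.0, n)               # market price, USD/m3
harvest_cost = rng.triangular(10, 14, 20, n)   # USD/m3
transport_cost = rng.triangular(15, 22, 30, n) # USD/m3

revenue = price - harvest_cost - transport_cost

# Sensitivity scenario: a 10% reduction in transportation cost.
revenue_cut = price - harvest_cost - 0.9 * transport_cost

print(round(float(revenue.mean()), 2), round(float(revenue_cut.mean()), 2))
```

Because the same random draws are reused in both scenarios, the difference of the two means isolates the effect of the transport-cost reduction alone.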
International Nuclear Information System (INIS)
Camargo, Dayana Queiroz de
2011-01-01
This thesis developed a stochastic model to simulate neutron transport in a heterogeneous environment, considering continuous neutron spectra and the continuous energy dependence of the nuclear properties. The model was implemented using the Monte Carlo method for the propagation of neutrons in different environments. Because of restrictions on the number of neutrons that can be simulated in reasonable computational time, a variable control volume with (pseudo-)periodic boundary conditions was introduced to overcome this problem. The Monte Carlo class of methods was chosen because it decomposes the problem of solving a transport equation into simpler constituents that may be treated separately: propagation and interaction, respecting the laws of energy and momentum conservation and the relationships that determine the probability of interaction. We are aware that the problem approached in this thesis is far from being comparable to building a nuclear reactor; the main target of this work was to develop the Monte Carlo model and to implement the code in a computer language that allows modular extensions. This study allowed a detailed analysis of the influence of energy on the neutron population and its impact on the neutron life cycle. From the results, even for a simple geometrical arrangement, we conclude that the energy dependence must be considered, i.e. a spectral effective multiplication factor should be introduced for each energy group separately. (author)
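The decomposition into free flight and collision described above can be illustrated with a toy one-dimensional, two-energy-group Monte Carlo; the cross sections, scattering probabilities and slab thickness below are invented for illustration and carry no physical calibration:

```python
import random

random.seed(3)

# Hypothetical two-group data (per cm), illustrative only:
#   group 0 = fast, group 1 = thermal
SIGMA_T = [0.30, 0.80]      # total cross section
P_SCATTER = [0.70, 0.55]    # scatter probability given a collision
P_DOWNSCATTER = 0.40        # fast -> thermal on scatter
SLAB = 20.0                 # slab thickness, cm

def transmit(n=50_000):
    """Fraction of source neutrons leaking through a 1D slab."""
    leaked = 0
    for _ in range(n):
        x, mu, g = 0.0, 1.0, 0  # start fast, moving forward
        while True:
            x += mu * random.expovariate(SIGMA_T[g])  # free flight
            if x >= SLAB:
                leaked += 1
                break
            if x < 0:
                break  # leaked backward
            if random.random() >= P_SCATTER[g]:
                break  # absorbed
            if g == 0 and random.random() < P_DOWNSCATTER:
                g = 1  # energy loss: thermal group sees a larger cross section
            mu = random.uniform(-1.0, 1.0)  # isotropic re-emission
    return leaked / n

leak = transmit()
print(round(leak, 4))
```

Switching the group index changes the sampled flight lengths and survival probabilities, which is the simplest way to see why a single-group (energy-independent) treatment biases the population balance.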
International Nuclear Information System (INIS)
Follin, S.
1992-12-01
Flow in fractured crystalline (hard) rocks is of interest in Sweden for assessing the post-closure radiological safety of a deep repository for high-level nuclear waste. For simulation of flow and mass transport in the far field, different porous media concepts are often used, whereas discrete fracture/channel network concepts are often used for near-field simulations. Due to lack of data, it is generally necessary to resort to single-hole double-packer test data for the far-field simulations, i.e., test data on a small scale are regularized in order to fit a comparatively coarser numerical discretization, which is governed by various computational constraints. In the present study the Monte Carlo method is used to investigate the relationship between the transmissivity value interpreted and the corresponding radius of influence in conjunction with single-hole double-packer tests in heterogeneous formations. The numerical flow domain is treated as a two-dimensional heterogeneous porous medium with a spatially varying diffusivity on a 3 m scale. The Monte Carlo simulations demonstrate the sensitivity to the correlation range of a spatially varying diffusivity field. In contradiction to what is tacitly assumed in stochastic subsurface hydrology, the results show that the lateral support scale (e.g., the radius of influence) of transmissivity measurements in heterogeneous porous media is a random variable, which is affected by both the hydraulic and statistical characteristics. If these results are general, the traditional methods for scaling-up, assuming a constant lateral scale of support and a multinormal distribution, may lead to an underestimation of the persistence and connectivity of transmissive zones, particularly in highly heterogeneous porous media.
Subcellular Location of PKA Controls Striatal Plasticity: Stochastic Simulations in Spiny Dendrites
Oliveira, Rodrigo F.; Kim, MyungSook; Blackwell, Kim T.
2012-01-01
Dopamine release in the striatum has been implicated in various forms of reward-dependent learning. Dopamine leads to production of cAMP and activation of protein kinase A (PKA), which are involved in striatal synaptic plasticity and learning. PKA and its protein targets are not diffusely located throughout the neuron, but are confined to various subcellular compartments by anchoring molecules such as A-Kinase Anchoring Proteins (AKAPs). Experiments have shown that blocking the interaction of PKA with AKAPs disrupts its subcellular location and prevents LTP in the hippocampus and striatum; however, these experiments have not revealed whether the critical function of anchoring is to locate PKA near the cAMP that activates it or near its targets, such as AMPA receptors located in the post-synaptic density. We have developed a large-scale stochastic reaction-diffusion model of signaling pathways in a medium spiny projection neuron dendrite with spines, based on published biochemical measurements, to investigate this question and to evaluate whether dopamine signaling exhibits spatial specificity post-synaptically. The model was stimulated with dopamine pulses mimicking those recorded in response to reward. Simulations show that PKA colocalization with adenylate cyclase, either in the spine head or in the dendrite, leads to greater phosphorylation of DARPP-32 Thr34 and AMPA receptor GluA1 Ser845 than when PKA is anchored away from adenylate cyclase. Simulations further demonstrate that though cAMP exhibits a strong spatial gradient, diffusible DARPP-32 facilitates the spread of PKA activity, suggesting that additional inactivation mechanisms are required to produce spatial specificity of PKA activity. PMID:22346744
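The exact stochastic simulation machinery underlying models like the one above goes back to Gillespie's 1977 algorithm mentioned in the introduction; it can be shown on a toy reversible reaction. The rate constants and molecule counts are arbitrary illustrative choices, and diffusion between compartments is omitted:

```python
import random

random.seed(4)

def gillespie(k_on=1.0, k_off=0.5, n_total=100, t_end=50.0):
    """Exact SSA for a single reversible reaction A <-> A* (a toy
    phosphorylation cycle); returns the final number of A* molecules."""
    t, n_star = 0.0, 0
    while t < t_end:
        a1 = k_on * (n_total - n_star)   # propensity of A -> A*
        a2 = k_off * n_star              # propensity of A* -> A
        a0 = a1 + a2
        t += random.expovariate(a0)      # exponential time to next reaction
        if t >= t_end:
            break
        if random.random() * a0 < a1:    # pick reaction proportional to propensity
            n_star += 1
        else:
            n_star -= 1
    return n_star

samples = [gillespie() for _ in range(200)]
mean = sum(samples) / len(samples)
# At equilibrium the mean is n_total * k_on / (k_on + k_off), here ~66.7.
print(round(mean, 1))
```

Reaction-diffusion variants such as the model above add diffusive "reactions" that move molecules between spatial voxels, but the two core steps (exponential waiting time, propensity-weighted event choice) are unchanged.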
International Nuclear Information System (INIS)
Fivaz, M.; Fasoli, A.; Appert, K.; Trans, T.M.; Tran, M.Q.; Skiff, F.
1993-08-01
Dynamical chaos is produced by the interaction between plasma particles and two electrostatic waves. Experiments performed in a linear magnetized plasma and a 1D particle-in-cell simulation agree qualitatively: above a threshold wave amplitude, ion stochastic diffusion and heating occur on a fast time scale. Self-consistency appears to limit the extent of the heating process. (author) 5 figs., 18 refs
International Nuclear Information System (INIS)
Nemnes, G A; Anghel, D V
2010-01-01
We present a stochastic method for the simulation of the time evolution in systems which obey generalized statistics, namely fractional exclusion statistics and Gentile's statistics. The transition rates are derived in the framework of canonical ensembles. This approach introduces a tool for describing interacting fermionic and bosonic systems in non-equilibrium as ideal FES systems, in a computationally efficient manner. The two types of statistics are analyzed comparatively, indicating their intrinsic thermodynamic differences and revealing key aspects related to the species size.
DEFF Research Database (Denmark)
Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song
2017-01-01
The present numerical study aims to assess the performance of an Eulerian Stochastic Field (ESF) model in simulating spray flames produced by three fuel injectors with different nozzle diameters of 100 μm, 180 μm and 363 μm. A comparison to the measurements shows that although the simulated ignition delay times are consistently overestimated, the relative differences remain below 28%. Furthermore, the change of the averaged pressure rise with respect to the variation of nozzle diameter is captured by the model. The simulated flame lift-off lengths also agree with the measurements. The model can hence serve as an important tool for the simulation of spray flames in marine diesel engines, where fuel injectors with different nozzle diameters are applied for pilot and main injections.
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
In this paper, the stochastic flow of mappings generated by a Feller convolution semigroup on a compact metric space is studied. This kind of flow generalizes superprocesses of stochastic flows and the stochastic diffeomorphisms induced by the strong solutions of stochastic differential equations.
A simple stochastic rainstorm generator for simulating spatially and temporally varying rainfall
Singer, M. B.; Michaelides, K.; Nichols, M.; Nearing, M. A.
2016-12-01
In semi-arid to arid drainage basins, rainstorms often control both water supply and flood risk to marginal communities of people. They also govern the availability of water to vegetation and other ecological communities, as well as spatial patterns of sediment, nutrient, and contaminant transport and deposition on local to basin scales. All of these landscape responses are sensitive to changes in climate that are projected to occur throughout western North America. Thus, it is important to improve characterization of rainstorms in a manner that enables statistical assessment of rainfall at spatial scales below that of existing gauging networks and the prediction of plausible manifestations of climate change. Here we present a simple, stochastic rainstorm generator that was created using data from a rich and dense network of rain gauges at the Walnut Gulch Experimental Watershed (WGEW) in SE Arizona, but which is applicable anywhere. We describe our methods for assembling pdfs of relevant rainstorm characteristics including total annual rainfall, storm area, storm center location, and storm duration. We also generate five fitted intensity-duration curves and apply a spatial rainfall gradient to generate precipitation at spatial scales below gauge spacing. The model then runs by Monte Carlo simulation in which a total annual rainfall is selected before we generate rainstorms until the annual precipitation total is reached. The procedure continues for decadal simulations. Thus, we keep track of the hydrologic impact of individual storms and the integral of precipitation over multiple decades. We first test the model using ensemble predictions until we reach statistical similarity to the input data from WGEW. We then employ the model to assess decadal precipitation under simulations of climate change in which we separately vary the distribution of total annual rainfall (trend in moisture) and the intensity-duration curves used for simulation (trends in storminess). We
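The storm-generation loop described above (draw an annual total, then accumulate storms until it is met) can be sketched as follows; all distributions are illustrative stand-ins, not the pdfs fitted to the WGEW gauge network:

```python
import random

random.seed(5)

def simulate_year(rng=random):
    """Draw an annual rainfall total, then generate storms until it is met.
    Distributions are hypothetical stand-ins for the gauge-fitted pdfs."""
    annual_total = rng.lognormvariate(5.6, 0.3)   # mm; ~270 mm median
    storms = []
    accumulated = 0.0
    while accumulated < annual_total:
        duration_h = rng.expovariate(1 / 0.5)      # storm duration, hours
        intensity = rng.lognormvariate(1.5, 0.6)   # mm/h at storm center
        depth = min(duration_h * intensity, annual_total - accumulated)
        center = (rng.uniform(0, 10), rng.uniform(0, 10))  # km within basin
        storms.append((center, duration_h, depth))
        accumulated += depth
    return annual_total, storms

total, storms = simulate_year()
print(len(storms), round(total, 1))
```

Repeating the loop over many years (and perturbing either the annual-total distribution or the intensity-duration relation) is how such a generator separates a trend in moisture from a trend in storminess.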
Deterministic and Stochastic Study of Wind Farm Harmonic Currents
DEFF Research Database (Denmark)
Sainz, Luis; Mesas, Juan Jose; Teodorescu, Remus
2010-01-01
Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic char...
Digital Repository Service at National Institute of Oceanography (India)
Murty, T.V.R.; Rao, M.M.M.; Sadhuram, Y.; Sridevi, B.; Maneesha, K.; SujithKumar, S.; Prasanna, P.L.; Murthy, K.S.R.
of Bengal during south-west monsoon season and explore possibility to reconstruct the acoustic profile of the eddy by Stochastic Inverse Technique. A simulation experiment on forward and inverse problems for observed sound velocity perturbation field has...
Contribution to the stochastically studies of space-time dependable hydrological processes
International Nuclear Information System (INIS)
Kjaevski, Ivancho
2002-12-01
One of the foundations of today's water-resources planning and management is the science of hydrology. Throughout history, hydrology has followed the development of water management systems, which over time evolved from single-purpose into complex, multi-purpose systems. The dynamics of modern society have increased the demand for clean water while, at the same time, the natural sources of clean water are being depleted. Under such conditions, water management systems must resolve ever more complicated problems in managing water sources. Solving these problems has been enabled by the development and application of new methods and technologies for planning and managing water resources and water management systems, such as systems analysis, operations research, hierarchical decision making, expert systems, and computer technology. Planning and management of water resources requires historical records of hydrometeorological processes. In our country such records cover a period of 50-70 years, while some European countries have records spanning more than 100 years. Water-management trends follow hydrometeorological research. Basic statistical techniques, such as sampling, probability distribution functions, and correlation and regression, are used for single-purpose and simple water management problems. Solving new water management problems requires space-time stochastic techniques and modern mathematical and statistical techniques for the simulation and optimization of complex water systems. Three phases of development of these techniques are needed to obtain reliable hydrological models: (i) assessment of the quality of hydrometeorological data, analyzing their consistency and homogeneity; (ii) structural analysis of hydrometeorological processes; (iii) mathematical models for modeling hydrometeorological processes. Very often, the third phase is applied for analyzing and modeling of hydro
Comparison of stochastic models in Monte Carlo simulation of coated particle fuels
International Nuclear Information System (INIS)
Yu Hui; Nam Zin Cho
2013-01-01
There is growing interest worldwide in very high temperature gas-cooled reactors as candidates for next-generation reactor systems. For the design and analysis of such reactors, with the double heterogeneity introduced by the coated particle fuels that are randomly distributed in graphite pebbles, stochastic transport models are becoming essential. Several models have been reported in the literature, such as coarse lattice models, fine lattice stochastic (FLS) models, random sequential addition (RSA) models, and Metropolis models. The principles and performance of these stochastic models are described and compared in this paper. Compared with the usual fixed lattice methods, sub-FLS modeling allows a more realistic stochastic distribution of fuel particles and thus results in more accurate criticality calculations. Compared with the basic RSA method, sub-FLS modeling requires a simpler and more efficient overlap-checking procedure. (authors)
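A random sequential addition (RSA) model of the kind compared above can be written in a few lines: candidate particle centers are proposed uniformly at random and accepted only if no overlap results. The box size, particle radius and packing target below are arbitrary illustrative values, far below the particle densities of real pebble fuel:

```python
import random

random.seed(6)

def rsa_pack(box=10.0, radius=0.5, target=200, max_tries=100_000):
    """Random sequential addition: propose random centers, accept only if
    the new sphere overlaps no previously placed sphere."""
    centers = []
    d2_min = (2 * radius) ** 2  # squared minimum center-to-center distance
    for _ in range(max_tries):
        if len(centers) == target:
            break
        p = tuple(random.uniform(radius, box - radius) for _ in range(3))
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= d2_min
               for q in centers):
            centers.append(p)
    return centers

pebble = rsa_pack()
print(len(pebble))
```

At higher packing fractions the rejection rate climbs steeply (RSA jams near a sphere volume fraction of about 0.38), which is one reason the sub-FLS overlap-checking refinements compared in the paper matter in practice.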
DEFF Research Database (Denmark)
Jensen, Karsten Høgh; Mantoglou, Aristotelis
1992-01-01
unsaturated flow equation representing the mean system behavior is solved using a finite difference numerical solution technique. The effective parameters are evaluated from the stochastic theory formulas before entering them into the numerical solution for each iteration. The stochastic model is applied...... seems to offer a rational framework for modeling large-scale unsaturated flow and estimating areal averages of soil-hydrological processes in spatially variable soils....
Numerical simulation of stochastic point kinetic equation in the dynamical system of nuclear reactor
International Nuclear Information System (INIS)
Saha Ray, S.
2012-01-01
Highlights: ► In this paper stochastic neutron point kinetic equations have been analyzed. ► The Euler–Maruyama method and the strong Taylor order 1.5 method have been discussed. ► These methods are applied for the solution of stochastic point kinetic equations. ► Comparisons between the results of these methods and others are presented in tables. ► Graphs for neutron and precursor sample paths are also presented. -- Abstract: In the present paper, numerical approximation methods, applied to efficiently calculate the solution of the stochastic point kinetic equations in nuclear reactor dynamics, are investigated. A system of Itô stochastic differential equations has been analyzed to model the neutron density and the delayed neutron precursors in a point nuclear reactor. The resulting system of Itô stochastic differential equations is solved over each time step. The methods are verified by considering different initial conditions, experimental data, and constant reactivities. The computational results indicate that the methods are simple and suitable for solving stochastic point kinetic equations. In this article, a numerical investigation is made in order to observe the random oscillations in neutron and precursor population dynamics in subcritical and critical reactors.
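The Euler-Maruyama scheme discussed above can be demonstrated on a scalar linear Itô SDE; the drift and diffusion coefficients below are toy values standing in for one equation of the point-kinetics system, not reactor parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

def euler_maruyama(x0=100.0, lam=-0.5, sigma=2.0, dt=0.001, t_end=5.0):
    """Euler-Maruyama for the scalar Ito SDE
        dX = lam * X dt + sigma dW,
    a linear toy standing in for one neutron-density equation of the
    stochastic point-kinetics system."""
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    dw = rng.normal(0.0, np.sqrt(dt), n)  # Wiener increments ~ N(0, dt)
    for k in range(n):
        x[k + 1] = x[k] + lam * x[k] * dt + sigma * dw[k]
    return x

path = euler_maruyama()
print(round(float(path[-1]), 2))
```

The strong Taylor 1.5 scheme mentioned in the highlights adds higher-order correction terms built from iterated stochastic integrals, trading implementation complexity for faster pathwise convergence in the step size.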
Directory of Open Access Journals (Sweden)
R. Uijlenhoet
2008-03-01
As rainfall constitutes the main source of water for the terrestrial hydrological processes, accurate and reliable measurement and prediction of its spatial and temporal distribution over a wide range of scales is an important goal for hydrology. We investigate the potential of ground-based weather radar to provide such measurements through a theoretical analysis of some of the associated observation uncertainties. A stochastic model of range profiles of raindrop size distributions is employed in a Monte Carlo simulation experiment to investigate the rainfall retrieval uncertainties associated with weather radars operating at X-, C-, and S-band. We focus in particular on the errors and uncertainties associated with rain-induced signal attenuation and its correction for incoherent, non-polarimetric, single-frequency, operational weather radars. The performance of two attenuation correction schemes, the (forward) Hitschfeld-Bordan algorithm and the (backward) Marzoug-Amayenc algorithm, is analyzed for both moderate (assuming a 50 km path length) and intense Mediterranean rainfall (for a 30 km path). A comparison shows that the backward correction algorithm is more stable and accurate than the forward algorithm (with a bias in the order of a few percent for the former, compared to tens of percent for the latter), provided reliable estimates of the total path-integrated attenuation are available. Moreover, the bias and root mean square error associated with each algorithm are quantified as a function of path-averaged rain rate and distance from the radar in order to provide a plausible order of magnitude for the uncertainty in radar-retrieved rain rates for hydrological applications.
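A minimal sketch of the forward (Hitschfeld-Bordan-type) gate-by-gate correction analyzed above; the k-Z power-law coefficients and the uniform synthetic rain profile are illustrative assumptions, and the known instability of the forward scheme under strong attenuation is deliberately not exercised here:

```python
import numpy as np

# Hypothetical power-law coefficients k = a * Z^b (k in dB/km, Z linear);
# real values depend on wavelength and drop size distribution.
A_COEF, B_COEF = 3.3e-4, 0.79   # roughly X-band-like, illustrative only
DR = 0.5                        # gate spacing, km

def forward_correction(z_meas):
    """Forward gate-by-gate correction: each gate's corrected linear
    reflectivity is the measured one amplified by the two-way
    path-integrated attenuation accumulated over preceding gates."""
    z_corr = np.empty_like(z_meas)
    pia_db = 0.0  # two-way path-integrated attenuation so far
    for i, z in enumerate(z_meas):
        z_corr[i] = z * 10 ** (0.1 * pia_db)
        pia_db += 2 * A_COEF * z_corr[i] ** B_COEF * DR
    return z_corr

# Synthetic check: attenuate a known truth profile, then correct it.
z_true = np.full(60, 10 ** 3.5)   # uniform ~35 dBZ rain over 30 km
pia = 0.0
z_att = np.empty_like(z_true)
for i, z in enumerate(z_true):
    z_att[i] = z * 10 ** (-0.1 * pia)
    pia += 2 * A_COEF * z ** B_COEF * DR

z_rec = forward_correction(z_att)
print(round(float(np.abs(10 * np.log10(z_rec / z_true)).max()), 2))
```

Backward (Marzoug-Amayenc-type) schemes instead anchor the correction on an independent estimate of the total path-integrated attenuation and work from the far end, which is what makes them more stable when the accumulated attenuation is large.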
Energy Technology Data Exchange (ETDEWEB)
Zarzycki, Piotr; Rosso, Kevin M.
2017-06-15
Understanding Fe(II)-catalyzed transformations of Fe(III)-(oxyhydr)oxides is critical for correctly interpreting stable isotopic distributions and for predicting the fate of metal ions in the environment. Recent Fe isotopic tracer experiments have shown that goethite undergoes rapid recrystallization without phase change when exposed to aqueous Fe(II). The proposed explanation is oxidation of sorbed Fe(II) and reductive Fe(II) release coupled 1:1 by electron conduction through crystallites. Given the availability of two tracer exchange data sets that explore pH and particle size effects (e.g., Handler et al. Environ. Sci. Technol. 2014, 48, 11302-11311; Joshi and Gorski Environ. Sci. Technol. 2016, 50, 7315-7324), we developed a stochastic simulation that exactly mimics these experiments, while imposing the 1:1 constraint. We find that all data can be represented by this model, and unifying mechanistic information emerges. At pH 7.5 a rapid initial exchange is followed by slower exchange, consistent with mixed surface- and diffusion-limited kinetics arising from prominent particle aggregation. At pH 5.0, where aggregation and net Fe(II) sorption are minimal, the exchange is quantitatively proportional to the available particle surface area and the density of sorbed Fe(II) is more readily evident. Our analysis reveals a fundamental atom exchange rate of ~10^-5 Fe nm^-2 s^-1, commensurate with some of the reported reductive dissolution rates of goethite, suggesting Fe(II) release is the rate-limiting step in the conduction mechanism during recrystallization.
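The 1:1 coupling constraint described above can be illustrated with a toy stochastic tracer-exchange simulation between an isotopically enriched aqueous pool and an unlabeled solid pool; the pool sizes and event counts are arbitrary, and surface-site effects, aggregation, and pH dependence are all omitted:

```python
import random

random.seed(8)

def tracer_exchange(n_aq=1_000, n_solid=9_000, n_events=200_000):
    """Stochastic 1:1 atom exchange: every event incorporates one randomly
    chosen aqueous atom into the solid and releases one randomly chosen
    solid atom back into solution, conserving both pool sizes."""
    aq_tracer = n_aq          # all aqueous atoms start isotopically labeled
    solid_tracer = 0
    for _ in range(n_events):
        into_solid = random.random() < aq_tracer / n_aq      # label leaves solution
        out_of_solid = random.random() < solid_tracer / n_solid  # label returns
        aq_tracer += out_of_solid - into_solid
        solid_tracer += into_solid - out_of_solid
    return aq_tracer / n_aq, solid_tracer / n_solid

f_aq, f_solid = tracer_exchange()
print(round(f_aq, 2), round(f_solid, 2))
```

Because exchange is strictly 1:1, the tracer fraction in both pools relaxes toward the same bulk value (here 1000/10000 = 0.1), which is exactly the equilibration signature the tracer experiments monitor.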
Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan
2016-12-01
The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. A comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
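The piecewise-linear log-survival idea can be sketched for a single jump process: tabulate H(t) on a grid with an Euler step and invert linearly at a random threshold. The coupling to V(t) is omitted, so this is only the dwell-time sampling kernel, checked here against the constant-rate case where the answer is exactly exponential:

```python
import math
import random

random.seed(9)

def sample_dwell(rate, dt=0.01, t_max=50.0, u=None):
    """Draw an inter-jump dwell time for a time-dependent intensity rate(t)
    by tabulating the log-survival H(t) = -integral of rate on a grid and
    inverting a piecewise-linear interpolant at log(u)."""
    if u is None:
        u = random.random()
    target = math.log(u)           # jump when H(t) crosses log(u) (H decreases)
    t, h = 0.0, 0.0
    while h > target:
        h_next = h - rate(t) * dt  # explicit Euler step of dH/dt = -rate(t)
        if h_next <= target:
            # piecewise-linear inversion inside the current step
            return t + dt * (h - target) / (h - h_next)
        t, h = t + dt, h_next
        if t > t_max:
            return t_max
    return t

# Constant-rate sanity check: dwell times should be ~ Exponential(2.0).
times = [sample_dwell(lambda t: 2.0) for _ in range(20000)]
mean_dwell = sum(times) / len(times)
print(round(mean_dwell, 2))  # ≈ 1/2
```

In the full algorithm the grid points for H(t) come from the same ODE solver that advances V(t), so the interpolation adapts to how fast the intensity is changing.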
An efficient parallel stochastic simulation method for analysis of nonviral gene delivery systems
Kuwahara, Hiroyuki; Gao, Xin
2011-01-01
DNA molecules into the nucleus of target cells. Several computational and experimental studies have shown that the design process of synthetic gene transfer vectors can be greatly enhanced by computational modeling and simulation. This paper proposes a
Manzione, Rodrigo L.; Wendland, Edson; Tanikawa, Diego H.
2012-11-01
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage like the GAS.
Yifat, Jonathan; Gannot, Israel
2015-03-01
Early detection of malignant tumors plays a crucial role in the patient's chances of survival. Therefore, new and innovative tumor detection methods are constantly being sought. Tumor-specific magnetic-core nanoparticles can be used with an alternating magnetic field to detect and treat tumors by hyperthermia. For the analysis of the method's effectiveness, the bio-heat transfer between the nanoparticles and the tissue must be carefully studied. Heat diffusion in biological tissue is usually analyzed using the Pennes bio-heat equation, in which blood perfusion plays an important role. Malignant tumors are known to initiate an angiogenesis process, in which endothelial cell migration from neighboring vasculature eventually leads to the formation of a thick blood capillary network around them. This process allows the tumor to receive its extensive nutrition demands and evolve into a more progressive and potentially fatal tumor. In order to assess the effect of angiogenesis on the bio-heat transfer problem, we have developed a discrete stochastic 3D model and simulation of tumor-induced angiogenesis. The model extends other angiogenesis models by providing a high-resolution 3D stochastic simulation, capturing fine morphological features of angiogenesis, effects of dynamic sprout thickness functions, and a stochastic parent vessel generator. We show that the angiogenesis realizations produced are well suited for numerical bio-heat transfer analysis. A statistical study of the angiogenesis characteristics was performed using Monte Carlo simulations. Based on the statistical analysis, we provide an analytical expression for the blood perfusion coefficient in the Pennes equation as a function of several parameters. This updated form of the Pennes equation could be used for numerical and analytical analyses of the proposed detection and treatment method. Copyright © 2014 Elsevier Inc. All rights reserved.
Allore, H G; Schruben, L W; Erb, H N; Oltenacu, P A
1998-03-01
A dynamic stochastic simulation model for discrete events, SIMMAST, was developed to simulate the effect of mastitis on the composition of the bulk tank milk of dairy herds. Intramammary infections caused by Streptococcus agalactiae, Streptococcus spp. other than Strep. agalactiae, Staphylococcus aureus, and coagulase-negative staphylococci were modeled as were the milk, fat, and protein test day solutions for individual cows, which accounted for the fixed effects of days in milk, age at calving, season of calving, somatic cell count (SCC), and random effects of test day, cow yield differences from herdmates, and autocorrelated errors. Probabilities for the transitions among various states of udder health (uninfected or subclinically or clinically infected) were calculated to account for exposure, heifer infection, spontaneous recovery, lactation cure, infection or cure during the dry period, month of lactation, parity, within-herd yields, and the number of quarters with clinical intramammary infection in the previous and current lactations. The stochastic simulation model was constructed using estimates from the literature and also using data from 164 herds enrolled with Quality Milk Promotion Services that each had bulk tank SCC between 500,000 and 750,000/ml. Model parameters and outputs were validated against a separate data file of 69 herds from the Northeast Dairy Herd Improvement Association, each with a bulk tank SCC that was > or = 500,000/ml. Sensitivity analysis was performed on all input parameters for control herds. Using the validated stochastic simulation model, the control herds had a stable time average bulk tank SCC between 500,000 and 750,000/ml.
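The transitions among udder-health states described above form a discrete-time Markov chain, which can be sketched as follows; the monthly transition probabilities are invented placeholders, not the fitted SIMMAST parameters, and yield, SCC, and herd effects are omitted:

```python
import random

random.seed(10)

# Hypothetical monthly transition probabilities between udder-health states
# (illustrative values only, not the SIMMAST estimates):
P = {
    "uninfected":  {"uninfected": 0.95, "subclinical": 0.04, "clinical": 0.01},
    "subclinical": {"uninfected": 0.10, "subclinical": 0.80, "clinical": 0.10},
    "clinical":    {"uninfected": 0.30, "subclinical": 0.40, "clinical": 0.30},
}

def simulate_cow(months=12):
    """Monthly state trajectory of one cow, starting uninfected."""
    state, history = "uninfected", []
    for _ in range(months):
        r, acc = random.random(), 0.0
        for nxt, p in P[state].items():  # inverse-CDF draw of the next state
            acc += p
            if r < acc:
                state = nxt
                break
        history.append(state)
    return history

herd = [simulate_cow() for _ in range(500)]
subclinical_share = sum(h.count("subclinical") for h in herd) / (500 * 12)
print(round(subclinical_share, 2))
```

A herd-level model like SIMMAST layers exposure-dependent transition probabilities and per-cow milk, fat, protein, and SCC outputs on top of this basic chain to build up the simulated bulk tank composition.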
Stochastic reservoir simulation for the modeling of uncertainty in coal seam degasification
Karacan, C. Özgen; Olea, Ricardo A.
2015-01-01
Coal seam degasification improves coal mine safety by reducing the gas content of coal seams and also by generating added value as an energy source. Coal seam reservoir simulation is one of the most effective ways to help with these two main objectives. As in all modeling and simulation studies, how the reservoir is defined and whether observed productions can be predicted are important considerations.
Stochastic synchronization of coupled neural networks with intermittent control
International Nuclear Information System (INIS)
Yang Xinsong; Cao Jinde
2009-01-01
In this Letter, we study the exponential stochastic synchronization problem for coupled neural networks with stochastic noise perturbations. Based on Lyapunov stability theory, inequality techniques, the properties of the Wiener process, and adding different intermittent controllers, several sufficient conditions are obtained to ensure exponential stochastic synchronization of coupled neural networks with or without coupling delays under stochastic perturbations. These stochastic synchronization criteria are expressed in terms of several lower-dimensional linear matrix inequalities (LMIs) and can be easily verified. Moreover, the results of this Letter are applicable to both directed and undirected weighted networks. A numerical example and its simulations are offered to show the effectiveness of our new results.
Study on individual stochastic model of GNSS observations for precise kinematic applications
Próchniewicz, Dominik; Szpunar, Ryszard
2015-04-01
The proper definition of the mathematical positioning model, which comprises a functional and a stochastic model, is a prerequisite for obtaining the optimal estimation of the unknown parameters. Especially important in this definition is realistic modelling of the stochastic properties of the observations, which are more receiver-dependent and time-varying than the deterministic relationships. This is particularly true for precise kinematic applications, which are characterized by weakened model strength. In this case, an incorrect or oversimplified stochastic model limits the performance of ambiguity resolution and the accuracy of position estimation. In this study we investigate methods of describing the measurement noise of GNSS observations and its impact on deriving a precise kinematic positioning model. In particular, stochastic modelling of the individual components of the variance-covariance matrix of the observation noise, performed using observations from a very short baseline and a laboratory GNSS signal generator, is analyzed. Experimental test results indicate that utilizing an individual stochastic model of the observations, including elevation dependency and cross-correlation, instead of assuming that the raw measurements are independent with equal variance, improves the performance of ambiguity resolution as well as rover positioning accuracy. This shows that the proposed stochastic assessment method could be an important part of a complete calibration procedure for GNSS equipment.
Kkallas, Harris; Papazachos, Konstantinos; Boore, David; Margaris, Vasilis
2015-04-01
We have employed the stochastic finite-fault modelling approach of Motazedian and Atkinson (2005), as described by Boore (2009), for the simulation of Fourier spectra of intermediate-depth earthquakes of the south Aegean subduction zone. The stochastic finite-fault method is a practical tool for simulating ground motions of future earthquakes, and it requires region-specific source, path and site characterizations as input model parameters. For this reason we have used data from both acceleration-sensor and broadband velocity-sensor instruments for intermediate-depth earthquakes with magnitudes of M 4.5-6.7 that occurred in the south Aegean subduction zone. Source mechanisms for intermediate-depth events of the south Aegean subduction zone are either collected from published information or are constrained using the main faulting types from Kkallas et al. (2013). The attenuation parameters for the simulations were adopted from Skarlatoudis et al. (2013) and are based on regression analysis of a response-spectra database. The site amplification functions for each soil class were adopted from Klimis et al. (1999), while the kappa values were constrained from the analysis of the EGELADOS network data by Ventouzi et al. (2013). The investigation of stress-drop values was based on simulations performed with the EXSIM code for several ranges of stress-drop values and by comparing the results with the available Fourier spectra of intermediate-depth earthquakes. Significant differences regarding the strong-motion duration, which is determined from Husid plots (Husid, 1969), have been identified between the fore-arc and along-arc stations due to the effect of the low-velocity/low-Q mantle wedge on seismic wave propagation. In order to estimate appropriate values for the duration of P-waves, we have automatically picked P-S durations on the available seismograms. For the S-wave durations we have used the part of the seismograms starting from the S-arrivals and ending at the
Study of the stochastic point reactor kinetic equation
International Nuclear Information System (INIS)
Gotoh, Yorio
1980-01-01
A diagrammatic technique is used to solve the stochastic point reactor kinetic equation. The method gives exact results which are derived from Fokker-Planck theory. A Green's function dressed with the clouds of noise is defined, which is the transfer function of a point reactor with fluctuating reactivity. An integral equation for the correlation function of neutron power is derived using the following assumptions: 1) the Green's function should be dressed with noise, and 2) only the ladder-type diagrams contribute to the correlation function. For a white noise and the one delayed neutron group approximation, the norm of the integral equation and the variance-to-mean-squared ratio are analytically obtained. (author)
Florian, Ehmele; Michael, Kunz
2016-04-01
Several major flood events have occurred in Germany in the past 15-20 years, especially in the eastern parts along the rivers Elbe and Danube. Examples include the major floods of 2002 and 2013, with an estimated loss of about 2 billion Euros each. The last major flood events in the State of Baden-Württemberg in southwest Germany occurred in the years 1978 and 1993/1994 along the rivers Rhine and Neckar, with an estimated total loss of about 150 million Euros (converted) each. Flood hazard originates from a combination of different meteorological, hydrological and hydraulic processes. Currently there is no established methodology for evaluating and quantifying the flood hazard and related risk for larger areas or whole river catchments rather than single gauges. In order to estimate the probable maximum loss for higher return periods (e.g. 200 years, PML200), a stochastic model approach is designed, since observational data are limited in time and space. In our approach, precipitation is linearly composed of three elements: background precipitation, orographically-induced precipitation, and a convectively-driven part. We use linear theory of orographic precipitation formation for the stochastic precipitation model (SPM), which is based on fundamental statistics of relevant atmospheric variables. For an adequate number of historic flood events, the corresponding atmospheric conditions and parameters are determined in order to calculate a probability density function (pdf) for each variable. This method covers all theoretically possible scenarios, including those which may not yet have happened. This work is part of the FLORIS-SV (FLOod RISk Sparkassen Versicherung) project and establishes the first step of a complete modelling chain of the flood risk. On the basis of the generated stochastic precipitation event set, hydrological and hydraulic simulations will be performed to estimate discharge and water level. The resulting stochastic flood event set will be used to quantify the
El-Khoury, O.; Kim, C.; Shafieezadeh, A.; Hur, J. E.; Heo, G. H.
2015-06-01
This study performs a series of numerical simulations and shake-table experiments to design and assess the performance of a nonlinear clipped feedback control algorithm based on optimal polynomial control (OPC) to mitigate the response of a two-span bridge equipped with a magnetorheological (MR) damper. As an extended conventional linear quadratic regulator, OPC provides more flexibility in the control design and further enhances system performance. The challenges encountered in this case are (1) the linearization of the nonlinear behavior of various components and (2) the selection of the weighting matrices in the objective function of OPC. The first challenge is addressed by using stochastic linearization which replaces the nonlinear portion of the system behavior with an equivalent linear time-invariant model considering the stochasticity in the excitation. Furthermore, a genetic algorithm is employed to find optimal weighting matrices for the control design. The input current to the MR damper installed between adjacent spans is determined using a clipped stochastic optimal polynomial control algorithm. The performance of the controlled system is assessed through a set of shake-table experiments for far-field and near-field ground motions. The proposed method showed considerable improvements over passive cases especially for the far-field ground motion.
International Nuclear Information System (INIS)
El-Khoury, O; Shafieezadeh, A; Hur, J E; Kim, C; Heo, G H
2015-01-01
This study performs a series of numerical simulations and shake-table experiments to design and assess the performance of a nonlinear clipped feedback control algorithm based on optimal polynomial control (OPC) to mitigate the response of a two-span bridge equipped with a magnetorheological (MR) damper. As an extended conventional linear quadratic regulator, OPC provides more flexibility in the control design and further enhances system performance. The challenges encountered in this case are (1) the linearization of the nonlinear behavior of various components and (2) the selection of the weighting matrices in the objective function of OPC. The first challenge is addressed by using stochastic linearization which replaces the nonlinear portion of the system behavior with an equivalent linear time-invariant model considering the stochasticity in the excitation. Furthermore, a genetic algorithm is employed to find optimal weighting matrices for the control design. The input current to the MR damper installed between adjacent spans is determined using a clipped stochastic optimal polynomial control algorithm. The performance of the controlled system is assessed through a set of shake-table experiments for far-field and near-field ground motions. The proposed method showed considerable improvements over passive cases especially for the far-field ground motion. (paper)
A Monte Carlo Study on Multiple Output Stochastic Frontiers
DEFF Research Database (Denmark)
Henningsen, Géraldine; Henningsen, Arne; Jensen, Uwe
In the estimation of multiple output technologies in a primal approach, the main question is how to handle the multiple outputs. Often an output distance function is used, where the classical approach is to exploit its homogeneity property by selecting one output quantity as the dependent variable, dividing all other output quantities by the selected output quantity, and using these ratios as regressors (OD). Another approach is the stochastic ray production frontier (SR), which transforms the output quantities into their Euclidean distance as the dependent variable and their polar coordinates… of both specifications for the case of a Translog output distance function with respect to different common statistical problems as well as problems arising as a consequence of zero values in the output quantities. Although our results partly show clear reactions to statistical misspecifications…
Experimental study of intrinsic stochasticity in magnetized plasma
International Nuclear Information System (INIS)
Anderegg, F.
1988-12-01
We present experimental results testing the application of single-particle Hamiltonian theory to the description of wave-particle interactions in a magnetized plasma. This work has been performed in a magnetized column of argon and barium. Neutralized ion Bernstein waves and electrostatic ion cyclotron waves are excited by an external antenna and propagate obliquely. Laser-induced fluorescence and optical tagging are used to measure the ion distribution function directly and to track the ion motion. The linear ion response to electrostatic waves creates a perturbation of the ion distribution function. This perturbation is measured directly by the laser-induced fluorescence technique, allowing a direct measurement of the wave electric field under the reasonable assumption that Vlasov theory is applicable. The nonlinear ion response to electrostatic waves, which occurs if the wave amplitude exceeds a threshold, is observed through a broadening of the ion distribution function and fast diffusion in p_z and in the azimuthal direction. Many predictions of the single-particle theory are observed in the experiment. We report the first observation of stochastic ion heating in a plasma. The threshold, the final form of the distribution function and the time scale are in good agreement with theoretical predictions. Moreover, the existence of three constants of motion has been experimentally observed. Although many observations of the particles' nonlinear response agree with the non-self-consistent theory, we have observed evidence for self-consistent effects: the wavelength and the coupling of the excited wave change when the particle response is stochastic. One might have expected the linear wave to be destroyed by the chaotic particle motion; nevertheless, linear waves still exist in the plasma when particles follow chaotic trajectories. (author) 65 figs., 13 tabs., 77 refs
Workshop on quantum stochastic differential equations for the quantum simulation of physical systems
2016-09-22
that would be complementary to the efforts at ARL. On the other hand, topological quantum field theories have a dual application to topological… Witten provided a path-integral definition of the Jones polynomial using a three-dimensional Chern-Simons quantum field theory (QFT) based on a non… Keywords: topology, quantum field theory, quantum stochastic differential equations, quantum computing
Dabaghi, Mayssa
2014-01-01
A comprehensive parameterized stochastic model of near-fault ground motions in two orthogonal horizontal directions is developed. The proposed model uniquely combines several existing and new sub-models to represent major characteristics of recorded near-fault ground motions. These characteristics include near-fault effects of directivity and fling step; temporal and spectral non-stationarity; intensity, duration and frequency content characteristics; directionality of components, as well as ...
Energy Technology Data Exchange (ETDEWEB)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
2017-07-01
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow a solution framework to be formulated for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.
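As a toy illustration of the constant-number idea described in the abstract above (not the authors' GPU algorithm), one can keep the particle count fixed on nucleation by merging the two lowest-weight simulation particles into one, conserving total statistical weight and mass; the `(size, weight)` tuple representation and all numbers below are illustrative assumptions.

```python
def merge_lowest_weights(particles):
    """Merge the two lowest-weight particles into one, conserving
    total statistical weight and total mass (weight * size)."""
    particles = sorted(particles, key=lambda p: p[1])  # p = (size, weight)
    (x1, w1), (x2, w2) = particles[0], particles[1]
    merged = ((w1 * x1 + w2 * x2) / (w1 + w2), w1 + w2)
    return [merged] + particles[2:]

def nucleate_constant_number(particles, new_size, new_weight):
    """Insert a freshly nucleated particle while keeping the count fixed."""
    return merge_lowest_weights(particles) + [(new_size, new_weight)]

# illustrative particle pool: (size, weight) pairs
pool = [(1.0, 2.0), (2.0, 0.5), (4.0, 1.0), (8.0, 0.25)]
pool = nucleate_constant_number(pool, new_size=0.1, new_weight=1.0)
print(len(pool), sum(w for _, w in pool))  # count stays 4; total weight now 4.75
```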
International Nuclear Information System (INIS)
Rotariu, O; Strachan, N J C; Badescu, V
2004-01-01
The method of immunomagnetic separation (IMS) has become an established technique to concentrate and separate animal cells, biologically active compounds and pathogenic micro-organisms from clinical, food and environmental matrices. One drawback of this technique is that analysis is only possible for small sample volumes. We have developed a stochastic model that involves numerical simulations to optimize the process of concentration of pathogenic micro-organisms onto superparamagnetic carrier particles (SCPs) in a gradient magnetic field. Within the range of the system parameters varied in the simulations, optimal conditions favour larger particles with higher magnetite concentrations. The dependence on magnetic field intensity and gradient, together with the concentration of particles and micro-organisms, was found to be less important for larger SCPs, but these parameters can influence the values of the collision time for small particles. These results will be useful in aiding the design of apparatus for immunomagnetic separation from large-volume samples.
Ross, Sheldon
2006-01-01
Ross's Simulation, Fourth Edition introduces aspiring and practicing actuaries, engineers, computer scientists and others to the practical aspects of constructing computerized simulation studies to analyze and interpret real phenomena. Readers learn to apply results of these analyses to problems in a wide variety of fields to obtain effective, accurate solutions and make predictions about future outcomes. This text explains how a computer can be used to generate random numbers, and how to use these random numbers to generate the behavior of a stochastic model over time. It presents the statist
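The core pipeline the book describes, turning uniform random numbers into the behavior of a stochastic model over time, can be sketched with the standard inverse-transform method; here exponential inter-arrival gaps drive a Poisson process count. This is a generic textbook example with illustrative parameters, not code from the book.

```python
import math
import random

def poisson_process_count(rate, horizon, rng):
    """Count arrivals in [0, horizon) of a Poisson process, generated by
    inverse-transform sampling: Exp(rate) gaps as -ln(1 - U) / rate."""
    t, n = 0.0, 0
    while True:
        t += -math.log(1.0 - rng.random()) / rate
        if t >= horizon:
            return n
        n += 1

counts = [poisson_process_count(2.0, 10.0, random.Random(seed)) for seed in range(2000)]
print(sum(counts) / len(counts))  # close to the Poisson mean rate * horizon = 20
```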
Dynamics of non-holonomic systems with stochastic transport
Holm, D. D.; Putkaradze, V.
2018-01-01
This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.
Duan, Xiaofeng F; Burggraf, Larry W; Huang, Lingyu
2013-07-22
To find low energy Si(n)C(n) structures out of hundreds to thousands of isomers we have developed a general method to search for stable isomeric structures that combines Stochastic Potential Surface Search and Pseudopotential Plane-Wave Density Functional Theory Car-Parrinello Molecular Dynamics simulated annealing (PSPW-CPMD-SA). We enhanced the Saunders stochastic search method to generate random cluster structures used as seed structures for PSPW-CPMD-SA simulations. This method ensures that each SA simulation samples a different potential surface region to find the regional minimum structure. By iterations of this automated, parallel process on a high performance computer we located hundreds to more than a thousand stable isomers for each Si(n)C(n) cluster. Among these, five to 10 of the lowest energy isomers were further optimized using the B3LYP/cc-pVTZ method. We applied this method to Si(n)C(n) (n = 4-12) clusters and found the lowest energy structures, most not previously reported. By analyzing the bonding patterns of low energy structures of each Si(n)C(n) cluster, we observed that carbon segregations tend to form condensed conjugated rings, while Si connects to unsaturated bonds at the periphery of the carbon segregation, as single atoms or clusters when n is small; when n is large, a silicon network spans over the carbon segregation region.
Directory of Open Access Journals (Sweden)
Larry W. Burggraf
2013-07-01
To find low energy SinCn structures out of hundreds to thousands of isomers we have developed a general method to search for stable isomeric structures that combines Stochastic Potential Surface Search and Pseudopotential Plane-Wave Density Functional Theory Car-Parrinello Molecular Dynamics simulated annealing (PSPW-CPMD-SA). We enhanced the Saunders stochastic search method to generate random cluster structures used as seed structures for PSPW-CPMD-SA simulations. This method ensures that each SA simulation samples a different potential surface region to find the regional minimum structure. By iterations of this automated, parallel process on a high performance computer we located hundreds to more than a thousand stable isomers for each SinCn cluster. Among these, five to 10 of the lowest energy isomers were further optimized using the B3LYP/cc-pVTZ method. We applied this method to SinCn (n = 4–12) clusters and found the lowest energy structures, most not previously reported. By analyzing the bonding patterns of low energy structures of each SinCn cluster, we observed that carbon segregations tend to form condensed conjugated rings, while Si connects to unsaturated bonds at the periphery of the carbon segregation, as single atoms or clusters when n is small; when n is large, a silicon network spans over the carbon segregation region.
Stochastic study on entrainment of floating particles with intake of cooling water of a power plant
International Nuclear Information System (INIS)
Kadoyu, Masatake; Wada, Akira
1979-01-01
The mortality of ichthyoplankton contained in the sea water passing through the cooling water systems of a power plant may be associated with rising temperature and mechanical effects. In this study, the range and the rate of entrainment of organisms such as ichthyoplankton floating in the sea, caused by the intake of cooling water, were stochastically investigated with a mathematical model, by simulating the average current as well as the flow caused by the intake of water and by taking into consideration random velocity fluctuations apart from these flows. An intake was set along a straight coastline in a semi-infinite sea, and the rate of inflow of particles into the intake was simulated by a mathematical model. In the numerical simulation, the average flow, as the coastal current component, and the flow caused by the intake of water were obtained with the hydrodynamic equations of motion and continuity, and the rate of entrainment of floating particles was examined by giving turbulence to the particles in the sea and by calculating the position of each particle at every moment. The results are as follows: 1) The range of entrainment of floating particles by the intake of cooling water and its probability were obtained in consideration of the flow rate of cooling water, coastal current velocity and diffusion coefficient as parameters. 2) The extent of inflow of floating particles varied considerably with tidal amplitude, diffusion coefficient and the flow rate of cooling water in a sea where the coastal flow has clear periodicity. 3) The extent of entrainment was considerably influenced by the steady current velocity, the velocity distribution in the offshore direction and the intake volume in a sea where periodicity is not observed. (Nakai, Y.)
Stochastic Optimization Model to Study the Operational Impacts of High Wind Penetrations in Ireland
DEFF Research Database (Denmark)
Meibom, Peter; Barth, R.; Hasche, B.
2011-01-01
A stochastic mixed integer linear optimization scheduling model minimizing system operation costs and treating load and wind power production as stochastic inputs is presented. The schedules are updated in a rolling manner as more up-to-date information becomes available. This is a fundamental change relative to day-ahead unit commitment approaches. The need for reserves dependent on forecast horizon and share of wind power has been estimated with a statistical model combining load and wind power forecast errors with scenarios of forced outages. The model is used to study operational impacts…
Stochastic stabilization of phenotypic states: the genetic bistable switch as a case study.
Weber, Marc; Buceta, Javier
2013-01-01
We study by means of analytical calculation and stochastic simulations how intrinsic noise modifies the bifurcation diagram of gene regulatory processes that can be effectively described by the Langevin formalism. In a general context, our study raises the intriguing question of how biochemical fluctuations redesign the epigenetic landscape in differentiation processes. We have applied our findings to a general class of regulatory processes that includes the simplest case that displays a bistable behavior and hence phenotypic variability: the genetic auto-activating switch. Thus, we explain why and how the noise promotes the stability of the low-state phenotype of the switch and show that the bistable region is extended when increasing the intensity of the fluctuations. This phenomenology is found in a simple one-dimensional model of the genetic switch as well as in a more detailed model that takes into account the binding of the protein to the promoter region. Altogether, we prescribe the analytical means to understand and quantify the noise-induced modifications of the bifurcation points for a general class of regulatory processes where the genetic bistable switch is included.
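The kind of stochastic simulation invoked in the abstract above is commonly carried out with Gillespie's direct method. Below is a minimal sketch on the simplest gene-expression model (zeroth-order production, first-order degradation) rather than the authors' bistable switch; rate constants and the seed are illustrative assumptions.

```python
import math
import random

def gillespie_birth_death(k, g, n0, t_end, rng):
    """Gillespie direct method for a birth-death process:
    production at constant rate k, degradation at rate g * n.
    Returns the time-averaged copy number over [0, t_end]."""
    t, n, weighted_sum = 0.0, n0, 0.0
    while True:
        a_total = k + g * n                              # total propensity
        dt = -math.log(1.0 - rng.random()) / a_total     # exponential waiting time
        if t + dt >= t_end:
            weighted_sum += n * (t_end - t)
            break
        weighted_sum += n * dt
        t += dt
        n += 1 if rng.random() * a_total < k else -1     # pick which reaction fired
    return weighted_sum / t_end

avg = gillespie_birth_death(k=10.0, g=1.0, n0=0, t_end=2000.0, rng=random.Random(1))
print(round(avg, 2))  # fluctuates around the stationary mean k/g = 10
```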
Stochastic calculus in physics
International Nuclear Information System (INIS)
Fox, R.F.
1987-01-01
The relationship of Ito-Stratonovich stochastic calculus to studies of weakly colored noise is explained. A functional calculus approach is used to obtain an effective Fokker-Planck equation for the weakly colored noise regime. In a smooth limit, this representation produces the Stratonovich version of the Ito-Stratonovich calculus for white noise. It also provides an approach to steady state behavior for strongly colored noise. Numerical simulation algorithms are explored, and a novel suggestion is made for efficient and accurate simulation of white noise equations
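For the white-noise simulation algorithms mentioned above, the workhorse scheme is Euler-Maruyama; here is a sketch for an Ornstein-Uhlenbeck process, where the noise is additive so the Itô and Stratonovich interpretations coincide. Parameter values and the seed are illustrative assumptions, not taken from the paper.

```python
import math
import random

def euler_maruyama_ou(theta, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama integration of dX = -theta * X dt + sigma dW.
    Each step adds the drift and a Gaussian increment of variance sigma^2 * dt."""
    x, path = x0, []
    sqrt_dt = math.sqrt(dt)
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * sqrt_dt * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = euler_maruyama_ou(theta=1.0, sigma=1.0, x0=0.0, dt=0.01,
                         n_steps=200000, rng=random.Random(7))
tail = path[len(path) // 10:]                 # discard the initial transient
var = sum(x * x for x in tail) / len(tail)
print(round(var, 3))  # close to the stationary variance sigma^2 / (2*theta) = 0.5
```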
Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale
Energy Technology Data Exchange (ETDEWEB)
Zabaras, Nicolas J. [Cornell Univ., Ithaca, NY (United States)
2016-11-08
Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties, and understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models, and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.
GPELab, a Matlab toolbox to solve Gross-Pitaevskii equations II: Dynamics and stochastic simulations
Antoine, Xavier; Duboscq, Romain
2015-08-01
GPELab is a free Matlab toolbox for modeling and numerically solving large classes of systems of Gross-Pitaevskii equations that arise in the physics of Bose-Einstein condensates. The aim of this second paper, which follows (Antoine and Duboscq, 2014), is to first present the various pseudospectral schemes available in GPELab for computing the deterministic and stochastic nonlinear dynamics of Gross-Pitaevskii equations (Antoine, et al., 2013). Next, the corresponding GPELab functions are explained in detail. Finally, some numerical examples are provided to show how the code works for the complex dynamics of BEC problems.
Rare event simulation for stochastic fixed point equations related to the smoothing transform
DEFF Research Database (Denmark)
Collamore, Jeffrey F.; Vidyashankar, Anand N.; Xu, Jie
2013-01-01
In several applications arising in computer science, cascade theory, and other applied areas, it is of interest to evaluate the tail probabilities of non-homogeneous stochastic fixed point equations. Recently, techniques have been developed for the related linear recursions, yielding tail estimates and importance sampling methods for these recursions. However, such methods do not routinely generalize to non-homogeneous recursions. Drawing on techniques from the weighted branching process literature, we present a consistent, strongly efficient importance sampling algorithm for estimating the tail…
A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling
Aslam, Kamran
This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included, along with a discussion of ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the concept of the varying importance of points in a match, and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability, along with a realistic, fair and mathematically sound platform for ranking them.
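The game-level building block of such a simulation can be sketched directly: simulate points as i.i.d. Bernoulli trials under standard deuce rules and compare against the closed-form game-win probability of the kind used in the Newton-Keller analysis. This is a generic sketch, not the dissertation's Matlab implementation; the point-win probability 0.6 is an illustrative value.

```python
import random

def play_game(p, rng):
    """Simulate one tennis game point-by-point; returns True if the player
    with point-win probability p wins, using standard deuce rules."""
    a = b = 0
    while True:
        if rng.random() < p:
            a += 1
        else:
            b += 1
        if a >= 4 and a - b >= 2:
            return True
        if b >= 4 and b - a >= 2:
            return False

def game_win_probability(p):
    """Closed-form game-win probability: win 4-0/4-1/4-2 directly,
    or reach deuce (3-3) and win from deuce with probability p^2/(1-2pq)."""
    q = 1.0 - p
    deuce = p * p / (1.0 - 2.0 * p * q)
    return p**4 * (1 + 4 * q + 10 * q * q) + 20 * p**3 * q**3 * deuce

p, rng = 0.6, random.Random(0)
mc = sum(play_game(p, rng) for _ in range(20000)) / 20000
print(round(mc, 3))  # close to the analytic value game_win_probability(0.6) ≈ 0.736
```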
International Nuclear Information System (INIS)
Bendato, Ilaria; Cassettari, Lucia; Mosca, Marco; Mosca, Roberto
2016-01-01
Combining technological solutions with investment profitability is a critical aspect in designing both traditional and innovative renewable power plants. Often, the introduction of new advanced-design solutions, although technically interesting, does not generate adequate revenue to justify their utilization. In this study, an innovative methodology is developed that aims to satisfy both targets. On the one hand, considering all of the feasible plant configurations, it allows the analysis of the investment in a stochastic regime using the Monte Carlo method. On the other hand, the impact of every technical solution on the economic performance indicators can be measured by using regression meta-models built according to the theory of Response Surface Methodology. This approach enables the design of a plant configuration that generates the best economic return over the entire life cycle of the plant. This paper illustrates an application of the proposed methodology to the evaluation of design solutions using an innovative linear Fresnel Concentrated Solar Power system. - Highlights: • A stochastic methodology for solar plants investment evaluation. • Study of the impact of new technologies on the investment results. • Application to an innovative linear Fresnel CSP system. • A particular application of Monte Carlo simulation and response surface methodology.
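At its core, the stochastic-regime investment analysis described above rests on Monte Carlo sampling of a net-present-value model; below is a minimal sketch with normally distributed annual revenues. All figures (investment, revenue distribution, discount rate) are illustrative assumptions, not the paper's plant data.

```python
import random

def simulate_npv(invest, mean_rev, sd_rev, years, rate, rng):
    """One Monte Carlo draw of project NPV: initial outlay at time zero,
    normally distributed revenue at the end of each year, discounted at `rate`."""
    npv = -invest
    for t in range(1, years + 1):
        npv += rng.gauss(mean_rev, sd_rev) / (1.0 + rate) ** t
    return npv

rng = random.Random(3)
draws = [simulate_npv(1000.0, 200.0, 30.0, 10, 0.08, rng) for _ in range(5000)]
mean_npv = sum(draws) / len(draws)
annuity = (1.0 - 1.08 ** -10) / 0.08     # present value of 1/year for 10 years at 8%
print(round(mean_npv, 1))  # close to the analytic expectation -1000 + 200*annuity ≈ 342
```

Beyond the mean, the empirical distribution of `draws` gives loss probabilities and quantiles, which is the usual payoff of running the appraisal in a stochastic rather than deterministic regime.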
Swillens, S; Champeil, P; Combettes, L; Dupont, G
1998-05-01
Confocal microscope studies with fluorescent dyes of inositol 1,4,5-trisphosphate (InsP3)-induced intracellular Ca2+ mobilization recently established the existence of 'elementary' events, dependent on the activity of individual InsP3-sensitive Ca2+ channels. In the present work, we try by theoretical stochastic simulation to explain the smallest signals observed in those studies, which were referred to as Ca2+ 'blips' [Parker I., Yao Y. Ca2+ transients associated with openings of inositol trisphosphate-gated channels in Xenopus oocytes. J Physiol Lond 1996; 491: 663-668]. For this purpose, we assumed a simple molecular model for the InsP3-sensitive Ca2+ channel and defined a set of parameter values accounting for the results obtained in electrophysiological bilayer experiments [Bezprozvanny I., Watras J., Ehrlich B.E. Bell-shaped calcium-response curves of Ins(1,4,5)P3- and calcium-gated channels from endoplasmic reticulum of cerebellum. Nature 1991; 351: 751-754; Bezprozvanny I., Ehrlich B.E. Inositol (1,4,5)-trisphosphate (InsP3)-gated Ca channels from cerebellum: conduction properties for divalent cations and regulation by intraluminal calcium. J Gen Physiol 1994; 104: 821-856]. With a stochastic procedure which considered cytosolic Ca2+ diffusion explicitly, we then simulated the behaviour of a single channel, placed in a realistic physiological environment. An attractive result was that the simulated channel exhibited bursts of activity, arising from repetitive channel openings, which were responsible for transient rises in Ca2+ concentration and were reminiscent of the relatively long-duration experimental Ca2+ blips. The influence of the values chosen for the various parameters (affinity and diffusion coefficient of the buffers, luminal Ca2+ concentration) on the kinetic characteristics of these theoretical blips is analyzed.
International Nuclear Information System (INIS)
El-Tawil, M A; Al-Jihany, A S
2008-01-01
In this paper, nonlinear oscillators under quadratic nonlinearity with stochastic inputs are considered. Different methods are used to obtain first-order approximations, namely the WHEP technique, the perturbation method, Picard approximation, Adomian decomposition, and the homotopy perturbation method (HPM). Some statistical moments are computed for the different methods using Mathematica 5. Comparisons are illustrated through figures for different case studies.
Stochastic Parameter Development for PORFLOW Simulations of the Hanford AX Tank Farm
International Nuclear Information System (INIS)
Ho, C.K.; Baca, R.G.; Conrad, S.H.; Smith, G.A.; Shyr, L.; Wheeler, T.A.
1999-01-01
Parameters have been identified that can be modeled stochastically using PORFLOW and Latin Hypercube Sampling (LHS). These parameters include hydrologic and transport properties in the vadose and saturated zones, as well as source-term parameters and infiltration rates. A number of resources were used to define the parameter distributions, primarily those provided in the Retrieval Performance Evaluation Report (Jacobs, 1998). A linear rank regression was performed on the vadose-zone hydrologic parameters given in Khaleel and Freeman (1995) to determine if correlations existed between pairs of parameters. No strong correlations were found among the vadose-zone hydrologic parameters, and it was recommended that these parameters be sampled independently until future data or analyses reveal a strong correlation or functional relationship between parameters. Other distributions for source-term parameters, infiltration rates, and saturated-zone parameters that are required to stochastically analyze the performance of the AX Tank Farm using LHS/PORFLOW were adapted from distributions and values reported in Jacobs (1998) and other literature sources. Discussions pertaining to the geologic conceptualization, vadose-zone modeling, and saturated-zone modeling of the AX Tank Farm are also presented
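Latin Hypercube Sampling itself is simple to sketch; the fragment below draws stratified, independently permuted samples for a few illustrative parameters (the names and ranges are hypothetical, not those in the report):

```python
import random

def latin_hypercube(n_samples, ranges, seed=42):
    """Latin Hypercube Sampling: each parameter range is split into
    n_samples equal-width strata, one value is drawn per stratum, and
    the strata are shuffled independently for each parameter (i.e. the
    parameters are sampled independently, as recommended above)."""
    rng = random.Random(seed)
    samples = {}
    for name, (lo, hi) in ranges.items():
        strata = list(range(n_samples))
        rng.shuffle(strata)
        width = (hi - lo) / n_samples
        samples[name] = [lo + (k + rng.random()) * width for k in strata]
    return samples

# hypothetical parameter ranges, for illustration only
lhs = latin_hypercube(100, {
    "porosity": (0.05, 0.40),
    "log10_permeability_m2": (-15.0, -11.0),
    "infiltration_mm_per_yr": (0.1, 10.0),
})
```

Each of the 100 strata of every parameter is hit exactly once, which is what lets LHS cover a distribution with far fewer runs than simple random sampling.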
Simulating extreme low-discharge events for the Rhine using a stochastic model
Macian-Sorribes, Hector; Mens, Marjolein; Schasfoort, Femke; Diermanse, Ferdinand; Pulido-Velazquez, Manuel
2017-04-01
The specific features of hydrological droughts make them more difficult to analyse than other water-related phenomena: their longer time scales (months to several years) mean that fewer historical events are available, and drought severity and associated damage depend on a combination of variables with no clear prevalence (e.g., total water deficit, maximum deficit and duration). As part of drought risk analysis, which aims to provide insight into the variability of hydrological conditions and associated socio-economic impacts, long synthetic time series should therefore be developed. In this contribution, we increase the length of the available inflow time series using stochastic autoregressive modelling. This enhancement could improve the characterization of the extreme range and can define extreme droughts with similar return periods but different patterns that can lead to distinctly different damages. The methodology consists of: 1) fitting an autoregressive model (AR, ARMA…) to the available records; 2) generating extended time series (thousands of years); 3) performing a frequency analysis with different characteristic variables (total deficit, maximum deficit and so on); and 4) selecting extreme drought events associated with different characteristic variables and return periods. The methodology was applied to the Rhine river discharge at Lobith, where the Rhine enters The Netherlands. A monthly ARMA(1,1) autoregressive model with seasonally varying parameters was fitted to, and successfully validated against, the historical records available since 1901. The maximum monthly deficit with respect to a threshold value of 1800 m3/s and the average discharge for a given time span in m3/s were chosen as indicators to identify drought periods. A synthetic series of 10,000 years of discharges was generated using the validated ARMA model. Two time spans were considered in the analysis: the whole calendar year and the half-year period between April and September.
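A minimal version of steps 1-3 can be sketched with a constant-parameter ARMA(1,1) generator (the study fits seasonally varying parameters; the numbers below are illustrative only, loosely inspired by monthly discharge magnitudes):

```python
import random

def generate_arma11(n, phi, theta, mu, sigma, seed=7):
    """Synthetic series from an ARMA(1,1) model:
    x[t] = mu + phi*(x[t-1] - mu) + eps[t] + theta*eps[t-1].
    (Constant parameters here; the study uses seasonally varying ones.)"""
    rng = random.Random(seed)
    x, eps_prev, series = mu, 0.0, []
    for _ in range(n):
        eps = rng.gauss(0.0, sigma)
        x = mu + phi * (x - mu) + eps + theta * eps_prev
        eps_prev = eps
        series.append(x)
    return series

# 10,000 synthetic years of monthly flows; all parameter values are illustrative
flows = generate_arma11(n=120_000, phi=0.7, theta=-0.2, mu=2200.0, sigma=600.0)
deficits = [max(0.0, 1800.0 - q) for q in flows]  # deficit below 1800 m3/s
```

Drought events can then be extracted from `deficits` (runs of positive deficit) and ranked by total deficit, maximum deficit or duration for the frequency analysis.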
Energy Technology Data Exchange (ETDEWEB)
Karakulov, Valerii V., E-mail: valery@ftf.tsu.ru [National Research Tomsk State University, Tomsk, 634050 (Russian Federation); Smolin, Igor Yu., E-mail: smolin@ispms.ru, E-mail: skrp@ftf.tsu.ru; Skripnyak, Vladimir A., E-mail: smolin@ispms.ru, E-mail: skrp@ftf.tsu.ru [National Research Tomsk State University, Tomsk, 634050, Russia and Institute of Strength Physics and Materials Science SB RAS, Tomsk, 634055 (Russian Federation)
2014-11-14
Mechanical behavior of stochastic metal-ceramic composites with the aluminum matrix under high-rate deformation at shock-wave loading is numerically simulated with consideration for structural evolution. Effective values of mechanical parameters of metal-ceramic composites Al
Modeling and Simulation of High Dimensional Stochastic Multiscale PDE Systems at the Exascale
Energy Technology Data Exchange (ETDEWEB)
Kevrekidis, Ioannis [Princeton Univ., NJ (United States)
2017-03-22
The thrust of the proposal was to exploit modern data-mining tools in a way that will create a systematic, computer-assisted approach to the representation of random media -- and also to the representation of the solutions of an array of important physicochemical processes that take place in/on such media. A parsimonious representation/parametrization of the random media links directly (via uncertainty quantification tools) to good sampling of the distribution of random media realizations. It also links directly to modern multiscale computational algorithms (like the equation-free approach that has been developed in our group) and plays a crucial role in accelerating the scientific computation of solutions of nonlinear PDE models (deterministic or stochastic) in such media – both solutions in particular realizations of the random media, and estimation of the statistics of the solutions over multiple realizations (e.g. expectations).
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
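The underlying idea of importance sampling for rare failure events can be illustrated on a toy problem: estimating P(Z > 4) for a standard normal Z by sampling from a mean-shifted density and reweighting with the likelihood ratio. This is a static analogue of shifting the excitation via Girsanov's transformation, not the authors' control-selection schemes:

```python
import math
import random

def failure_prob_is(threshold=4.0, shift=4.0, n=20_000, seed=2):
    """Importance-sampling estimate of P(Z > threshold) for Z ~ N(0, 1):
    draw from the shifted density N(shift, 1) and reweight each failure
    sample by the likelihood ratio phi(z) / phi(z - shift)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(shift, 1.0)
        if z > threshold:  # failure event under the shifted measure
            total += math.exp(-shift * z + 0.5 * shift * shift)
    return total / n

p_hat = failure_prob_is()  # exact value is 1 - Phi(4), about 3.17e-5
```

Direct Monte Carlo would need on the order of millions of samples to see such an event at all; the shifted sampler makes failures common and corrects for the bias through the weights.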
Stochastic simulation of power systems with integrated renewable and utility-scale storage resources
Degeilh, Yannick
.e., respond to quick variations in the loads and renewable resource outputs in a manner that maintains the power balance, by incorporating appropriate ramping requirement constraints in the formulation of the SOP. The simulation approach makes use of Monte Carlo simulation techniques to represent the impacts of the sources of uncertainty on the side-by-side power system and market operations. As such, we systematically sample the "input" random processes -- namely the buyer demands, renewable resource outputs and conventional generation resource available capacities -- to generate the realizations, or sample paths, that we use in the emulation of the transmission-constrained day-ahead markets via the SOP. As a result, we obtain realizations of the market outcomes and storage resource operations that we can use to approximate their statistics. The approach not only has the capability to emulate the side-by-side power system and energy market operations with the explicit representation of the chronology of time-dependent phenomena -- including storage cycles of charge/discharge -- and constraints imposed by the transmission network in terms of deliverability of the energy, but also to provide the figures of merit for all metrics to assess the economics, reliability and the environmental impacts of the performance of those operations. Our efforts to address the implementational aspects of the methodology so as to ensure computational tractability for large-scale systems over longer periods include relaxing the SOP, the use of a "warm-start" technique as well as representative simulation periods, parallelization and variance reduction techniques. Our simulation approach is useful in power system planning, operations and investment analysis.
There is a broad range of applications of the simulation methodology to resource planning studies, production costing issues, investment analysis, transmission utilization, reliability analysis, environmental assessments, policy formulation and
Directory of Open Access Journals (Sweden)
Bailing Liu
2015-01-01
Full Text Available Facility location, inventory control, and vehicle route scheduling are three key issues to be settled in the design of a logistics system for e-commerce. Due to the online shopping features of e-commerce, customer returns are becoming much more common than in traditional commerce. This paper studies a three-phase supply chain distribution system consisting of one supplier, a set of retailers, and a single type of product with a continuous review (Q, r) inventory policy. We formulate a stochastic location-inventory-routing problem (LIRP) model with no quality defects returns. To solve the NP-hard problem, a pseudo-parallel genetic algorithm integrating simulated annealing (PPGASA) is proposed. The computational results show that PPGASA outperforms GA on optimal solution, computing time, and computing stability.
Comparison of deterministic and stochastic methods for time-dependent Wigner simulations
Energy Technology Data Exchange (ETDEWEB)
Shao, Sihong, E-mail: sihong@math.pku.edu.cn [LMAM and School of Mathematical Sciences, Peking University, Beijing 100871 (China); Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria)
2015-11-01
Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, a highly accurate deterministic method used to provide reference solutions. Several numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in detail. In particular, this allows us to identify a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve a satisfactory accuracy.
International Nuclear Information System (INIS)
Purnama, Budi; Koga, Masashi; Nozaki, Yukio; Matsuyama, Kimihide
2009-01-01
Thermally assisted magnetization reversal of sub-100 nm dots with perpendicular anisotropy has been investigated using a micromagnetic Langevin model. The performance of two different reversal modes, (i) a reduced-barrier writing scheme and (ii) a Curie-point writing scheme, is compared. For the reduced-barrier writing scheme, the switching field H_swt decreases with an increase in writing temperature but is still larger than that of the Curie-point writing scheme. For the Curie-point writing scheme, the required threshold field H_th, evaluated from 50 simulation results, saturates at a value which is not simply related to the energy barrier height. The value of H_th increases with a decrease in cooling time owing to the dynamic aspects of the magnetic ordering process. The dependence of H_th on material parameters and dot sizes has been systematically studied.
International Nuclear Information System (INIS)
Bisognano, J.; Leemann, C.
1982-03-01
Stochastic cooling is the damping of betatron oscillations and momentum spread of a particle beam by a feedback system. In its simplest form, a pickup electrode detects the transverse positions or momenta of particles in a storage ring, and the signal produced is amplified and applied downstream to a kicker. The time delay of the cable and electronics is designed to match the transit time of particles along the arc of the storage ring between the pickup and kicker so that an individual particle receives the amplified version of the signal it produced at the pickup. If there were only a single particle in the ring, it is obvious that betatron oscillations and momentum offset could be damped. However, in addition to its own signal, a particle receives signals from other beam particles. In the limit of an infinite number of particles, no damping could be achieved; we have Liouville's theorem with constant density of the phase space fluid. For a finite, albeit large, number of particles, there remains a residue of the single-particle damping which is of practical use in accumulating low phase-space-density beams of particles such as antiprotons. It was the realization of this fact that led to the invention of stochastic cooling by S. van der Meer in 1968. Since its conception, stochastic cooling has been the subject of much theoretical and experimental work. The earliest experiments were performed at the ISR in 1974, with the subsequent ICE studies firmly establishing the stochastic cooling technique. This work directly led to the design and construction of the Antiproton Accumulator at CERN and the beginnings of p anti-p colliding beam physics at the SPS. Experiments in stochastic cooling have been performed at Fermilab in collaboration with LBL, and a design is currently under development for an antiproton accumulator for the Tevatron.
Kikkinides, E S; Steriotis, T A; Kanellopoulos, N K; Mitropoulos, A C; Treimer, W
2002-01-01
Ceramic nanostructured materials have recently received scientific and industrial interest due to their unique properties. A series of such nanoporous structures were characterised by SANS techniques. The resulting scattering curves were analysed to obtain basic structural information regarding the pore size distribution and autocorrelation function of each material. Furthermore, stochastic reconstruction models were employed to generate 3D images with the same basic structural characteristics obtained from SANS. Finally, simulation results of permeation on the reconstructed images provide very good agreement with experimental data. (orig.)
International Nuclear Information System (INIS)
Lee, J.H.; Atkins, J.E.; Andrews, R.W.
1995-01-01
A detailed stochastic waste package degradation simulation model was developed incorporating the humid-air and aqueous general and pitting corrosion models for the carbon steel corrosion-allowance outer barrier and the aqueous pitting corrosion model for the Alloy 825 corrosion-resistant inner barrier. The uncertainties in the individual corrosion models were also incorporated to capture the variability in corrosion degradation among waste packages and among pits in the same waste package. Within the scope of the assumptions employed in the simulations, the corrosion modes considered, and the near-field conditions from the drift-scale thermohydrologic model, the results of the waste package performance analyses show that the current waste package design appears to meet the 'controlled design assumption' requirement of waste package performance, currently defined as having less than 1% of waste packages breached at 1,000 years. It was shown that, except for the waste packages that fail early, pitting corrosion of the corrosion-resistant inner barrier has a greater control on the failure of waste packages and their subsequent degradation than the outer barrier. Further improvement and substantiation of the inner barrier pitting model (currently based on an elicitation) is necessary in future waste package performance simulation models.
Kareiva, Peter; Morse, Douglass H; Eccleston, Jill
1989-03-01
We compared the patch-choice performances of an ambush predator, the crab spider Misumena vatia (Thomisidae), hunting on common milkweed Asclepias syriaca (Asclepiadaceae) umbels, with two stochastic rule-of-thumb simulation models: one that employed a threshold giving-up time and one that assumed a fixed probability of moving. Adult female Misumena were placed on milkweed plants with three umbels, each with markedly different numbers of flower-seeking prey. Using a variety of visitation regimes derived from observed visitation patterns of insect prey, we found that decreases in among-umbel variance in visitation rates or increases in overall mean visitation rates reduced the "clarity of the optimum" (the difference in the yield obtained as foraging behavior changes), both locally and globally. Yield profiles from both models were extremely flat or jagged over a wide range of prey visitation regimes; thus, differences between optimal and "next-best" strategies differed only modestly over large parts of the "foraging landscape". Although optimal yields from fixed probability simulations were one-third to one-half those obtained from threshold simulations, spiders appear to depart umbels in accordance with the fixed probability rule.
Energy Technology Data Exchange (ETDEWEB)
Dai, S. [National Institute for Fusion Science, Toki (Japan); Key Laboratory of Materials Modification by Laser, Ion and Electron Beams (Ministry of Education), School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian (China); Kobayashi, M.; Morita, S.; Oishi, T.; Suzuki, Y. [National Institute for Fusion Science, Toki (Japan); Department of Fusion Science, School of Physical Sciences, SOKENDAI (The Graduate University for Advanced Studies), Toki (Japan); Kawamura, G. [National Institute for Fusion Science, Toki (Japan); Zhang, H.M.; Huang, X.L. [Department of Fusion Science, School of Physical Sciences, SOKENDAI (The Graduate University for Advanced Studies), Toki (Japan); Feng, Y. [Max-Planck-Institut fuer Plasmaphysik, Greifswald (Germany); Wang, D.Z. [Key Laboratory of Materials Modification by Laser, Ion and Electron Beams (Ministry of Education), School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian (China); Collaboration: The LHD experiment group
2016-08-15
The transport properties and line emissions of the intrinsic carbon in the stochastic layer of the Large Helical Device have been investigated with the three-dimensional edge transport code EMC3-EIRENE. The simulations of impurity transport and emissivity have been performed to study the dedicated experiment in which the carbon emission distributions are measured by a space-resolved EUV spectrometer system. A discrepancy of the CIV impurity emission between the measurement and simulation is obtained, which is studied with the variation of the ion thermal force, friction force and the perpendicular diffusivity in the impurity transport model. An enhanced ion thermal force or a reduced friction force in the modelling can increase the CIV impurity emission at the inboard X-point region. Furthermore, the impact of the perpendicular diffusivity Dimp is studied which shows that the CIV impurity emission pattern is very sensitive to Dimp. It is found that the simulation results with the increased Dimp tend to be closer to the experimental observation. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
International Nuclear Information System (INIS)
Nanty, Simon
2015-01-01
This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications have several common features. The first is that the computer code inputs are functional and scalar variables, the functional ones being dependent. The second is that the probability distribution of the functional variables is known only through a sample of their realizations. The third, relative to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, we have proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology makes it possible both to model the dependency between variables and to model their link to another variable, called a covariate, which could be, for instance, the output of the considered code. We have also developed an adaptation of a visualization tool for functional data, which makes it possible to simultaneously visualize the uncertainties and features of dependent functional variables. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model or meta-model, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build a learning basis for the meta-model. Finally, a new approximation approach for expensive codes with functional outputs has been
Ogawa, Shigeyoshi
2017-01-01
This book presents an elementary introduction to the theory of noncausal stochastic calculus that arises as a natural alternative to the standard theory of stochastic calculus founded in 1944 by Professor Kiyoshi Itô. As is generally known, Itô Calculus is essentially based on the "hypothesis of causality", asking random functions to be adapted to a natural filtration generated by Brownian motion or more generally by square integrable martingale. The intention in this book is to establish a stochastic calculus that is free from this "hypothesis of causality". To be more precise, a noncausal theory of stochastic calculus is developed in this book, based on the noncausal integral introduced by the author in 1979. After studying basic properties of the noncausal stochastic integral, various concrete problems of noncausal nature are considered, mostly concerning stochastic functional equations such as SDE, SIE, SPDE, and others, to show not only the necessity of such theory of noncausal stochastic calculus but ...
International Nuclear Information System (INIS)
Birdsell, K.H.; Campbell, K.; Eggert, K.; Travis, B.J.
1990-01-01
This paper presents preliminary transport calculations for radionuclide movement at Yucca Mountain. Several different realizations of spatially distributed sorption coefficients are used to study the sensitivity of radionuclide migration. These sorption coefficients are assumed to be functions of the mineralogic assemblages of the underlying rock. The simulations were run with TRACRN, a finite-difference porous flow and radionuclide transport code developed for the Yucca Mountain Project. Approximately 30,000 nodes are used to represent the unsaturated and saturated zones underlying the repository in three dimensions. Transport calculations for a representative radionuclide cation, 135Cs, and anion, 99Tc, are presented. Calculations such as these will be used to study the effectiveness of the site's geochemical barriers at a mechanistic level and to help guide the geochemical site characterization program. The preliminary calculations should be viewed as a demonstration of the modeling methodology rather than as a study of the effectiveness of the geochemical barriers. The model provides a method for examining the integration of flow scenarios with transport and retardation processes as currently understood for the site. The effects on transport of many of the processes thought to be active at Yucca Mountain may be examined using this approach. 11 refs., 14 figs., 1 tab.
Stochastic resonance in bistable systems driven by harmonic noise
International Nuclear Information System (INIS)
Neiman, A.; Schimansky-Geier, L.
1994-01-01
We study stochastic resonance in a bistable system which is excited simultaneously by white noise and by harmonic noise, which we interpret as the signal. In our case the spectral line of the signal has a finite width, as occurs in many real situations. Using techniques of cumulant analysis as well as computer simulations, we find that the effect of stochastic resonance is preserved in the case of harmonic noise excitation. Moreover, we show that the width of the spectral line of the signal at the output can be decreased via stochastic resonance. The latter could be of importance in practical applications of stochastic resonance.
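A bare-bones simulation of such a bistable system can be sketched with Euler-Maruyama integration. Note the simplification: the drive below is a deterministic sinusoid plus white noise, whereas the paper considers harmonic noise; all parameter values are illustrative:

```python
import math
import random

def double_well_hops(D, amp=0.3, omega=0.01, dt=0.01, n_steps=300_000, seed=3):
    """Euler-Maruyama integration of the overdamped bistable system
    dx = (x - x**3) dt + amp*sin(omega*t) dt + sqrt(2*D) dW,
    counting transitions between the wells at x = -1 and x = +1."""
    rng = random.Random(seed)
    noise_amp = math.sqrt(2.0 * D * dt)
    x, side, transitions = -1.0, -1, 0
    for k in range(n_steps):
        t = k * dt
        x += (x - x**3 + amp * math.sin(omega * t)) * dt \
             + noise_amp * rng.gauss(0.0, 1.0)
        if side < 0 and x > 0.8:      # crossed into the right well
            side, transitions = 1, transitions + 1
        elif side > 0 and x < -0.8:   # crossed back into the left well
            side, transitions = -1, transitions + 1
    return transitions

# moderate noise produces inter-well hopping; the sub-threshold signal
# alone (amp = 0.3 < the deterministic switching threshold) cannot
transitions = double_well_hops(D=0.1)
```

Sweeping `D` and measuring the output spectrum at `omega` would reproduce the characteristic resonance curve: an output signal-to-noise ratio that is maximal at an intermediate noise strength.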
Risk-based transfer responses to climate change, simulated through autocorrelated stochastic methods
Kirsch, B.; Characklis, G. W.
2009-12-01
Maintaining municipal water supply reliability despite growing demands can be achieved through a variety of mechanisms, including supply strategies such as temporary transfers. However, much of the attention on transfers has been focused on market-based transfers in the western United States, largely ignoring the potential for transfers in the eastern U.S. The different legal frameworks of the eastern and western U.S. lead to characteristic differences between their respective transfers. Western transfers tend to be agricultural-to-urban and involve raw, untreated water, with the transfer often involving a simple change in the location and/or timing of withdrawals. Eastern transfers tend to be contractually established urban-to-urban transfers of treated water, thereby requiring the infrastructure to transfer water between utilities. Utilities require tools to evaluate transfer decision rules and the resulting expected future transfer behavior. Given the long-term planning horizons of utilities, potential changes in hydrologic patterns due to climate change must be considered. In response, this research develops a method for generating a stochastic time series that reproduces the historic autocorrelation and can be adapted to accommodate future climate scenarios. While analogous in operation to an autoregressive model, this method reproduces the seasonal autocorrelation structure, as opposed to assuming the strict stationarity produced by an autoregressive model. Such urban-to-urban transfers are designed to be rare, transient events used primarily during times of severe drought, and incorporating Monte Carlo techniques allows for the development of probability distributions of likely outcomes. This research evaluates a risk-based, urban-to-urban transfer agreement between three utilities in the Triangle region of North Carolina. Two utilities maintain their own surface water supplies in adjoining watersheds and look to obtain transfers via
International Nuclear Information System (INIS)
Kuhn, W.L.; Westsik, J.H. Jr.
1989-01-01
Processing steps during the conversion of high-level nuclear waste into borosilicate glass in the Hanford Waste Vitrification Plant are being simulated on a computer by addressing transient mass balances. The results are being used to address the US Department of Energy's Waste Form Qualification requirements. The simulation addresses discontinuous (batch) operations and perturbations in the transient behavior of the process caused by errors in measurements and control actions. A collection of tests, based on process measurements, is continually checked and used to halt the simulated process when specified conditions are met. An associated set of control actions is then implemented in the simulation. The results for an example simulation are shown. 8 refs.
Gross, Markus
2018-03-01
A fluctuating interfacial profile in one dimension is studied via Langevin simulations of the Edwards–Wilkinson equation with non-conserved noise and the Mullins–Herring equation with conserved noise. The profile is subject to either periodic or Dirichlet (no-flux) boundary conditions. We determine the noise-driven time-evolution of the profile between an initially flat configuration and the instant at which the profile reaches a given height M for the first time. The shape of the averaged profile agrees well with the prediction of weak-noise theory (WNT), which describes the most-likely trajectory to a fixed first-passage time. Furthermore, in agreement with WNT, on average the profile approaches the height M algebraically in time, with an exponent that is essentially independent of the boundary conditions. However, the actual value of the dynamic exponent turns out to be significantly smaller than predicted by WNT. This ‘renormalization’ of the exponent is explained in terms of the entropic repulsion exerted by the impenetrable boundary on the fluctuations of the profile around its most-likely path. The entropic repulsion mechanism is analyzed in detail for a single (fractional) Brownian walker, which describes the anomalous diffusion of a tagged monomer of the interface as it approaches the absorbing boundary. The present study sheds light on the accuracy and the limitations of the weak-noise approximation for the description of the full first-passage dynamics.
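A minimal Langevin integration of the Edwards-Wilkinson equation with non-conserved noise, tracking the first time the profile reaches a given height M, might look as follows (an illustrative explicit scheme with periodic boundaries, not the paper's exact discretization; all parameter values are placeholders):

```python
import math
import random

def ew_first_passage(L=64, nu=1.0, noise=1.0, dt=0.05, M=1.0,
                     max_steps=100_000, seed=11):
    """Explicit Langevin scheme for the Edwards-Wilkinson equation
    dh/dt = nu * laplacian(h) + eta, with non-conserved white noise,
    on a periodic 1D lattice. Returns the first time any lattice site
    reaches the height M (or None if it never does)."""
    rng = random.Random(seed)
    h = [0.0] * L  # initially flat profile
    amp = math.sqrt(2.0 * noise * dt)
    for step in range(1, max_steps + 1):
        h = [h[i]
             + dt * nu * (h[(i - 1) % L] - 2.0 * h[i] + h[(i + 1) % L])
             + amp * rng.gauss(0.0, 1.0)
             for i in range(L)]
        if max(h) >= M:
            return step * dt
    return None

t_first = ew_first_passage()
```

Averaging the profile over many such first-passage trajectories, conditioned on the first-passage time, is the kind of statistic the paper compares against weak-noise theory.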
Eichhorn, Ralf; Aurell, Erik
2014-04-01
many leading experts in the field. During the program, the most recent developments, open questions and new ideas in stochastic thermodynamics were presented and discussed. From the talks and debates, the notion of information in stochastic thermodynamics, the fundamental properties of entropy production (rate) in non-equilibrium, the efficiency of small thermodynamic machines and the characteristics of optimal protocols for the applied (cyclic) forces were crystallizing as main themes. Surprisingly, the long-studied adiabatic piston, its peculiarities and its relation to stochastic thermodynamics were also the subject of intense discussions. The comment on the Nordita program Stochastic Thermodynamics published in this issue of Physica Scripta exploits the Jarzynski relation for determining free energy differences in the adiabatic piston. This scientific program and the contribution presented here were made possible by the financial and administrative support of The Nordic Institute for Theoretical Physics.
Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.
Sheppard, C W.
1969-03-01
A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
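A random-walk simulation of the kind outlined here remains easy to reproduce; the sketch below follows walkers with drift until they are absorbed at a boundary (the parameter values are arbitrary illustrations):

```python
import random

def random_walk_absorption(n_walkers=5_000, drift=0.1, step_sd=1.0,
                           barrier=10.0, max_steps=1_000, seed=5):
    """Model-sampling simulation of 1D random walks with drift and an
    absorbing boundary, e.g. a tracer carried toward a capture site.
    Returns the absorbed fraction and the mean absorption time."""
    rng = random.Random(seed)
    absorbed_times = []
    for _ in range(n_walkers):
        x = 0.0
        for step in range(1, max_steps + 1):
            x += drift + rng.gauss(0.0, step_sd)
            if x >= barrier:                  # walker absorbed
                absorbed_times.append(step)
                break
    frac_absorbed = len(absorbed_times) / n_walkers
    mean_time = sum(absorbed_times) / max(1, len(absorbed_times))
    return frac_absorbed, mean_time

frac_absorbed, mean_time = random_walk_absorption()
# theory (Wald's identity): mean first-passage time ~ barrier / drift
```

Comparing such simulated absorption-time distributions with the established first-passage theory is exactly the kind of validation the article describes, and reflecting rather than absorbing boundaries are a one-line change.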
International Nuclear Information System (INIS)
Franke, B.C.; Kensek, R.P.; Prinja, A.K.
2013-01-01
Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings where infinite-medium solutions become invalid. We have previously explored an alternative 'condensed transport' formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations. (authors)
Stochastic volatility and stochastic leverage
DEFF Research Database (Denmark)
Veraart, Almut; Veraart, Luitgard A. M.
This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic treatment of stochastic leverage and propose to model the stochastic leverage effect explicitly, e.g. by means of a linear transformation of a Jacobi process. Such models are both analytically tractable and allow for a direct economic interpretation. In particular, we propose two new stochastic volatility models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk neutral world by focusing on implied volatilities generated by option prices derived from our new …
A Langevin Canonical Approach to the Study of Quantum Stochastic Resonance in Chiral Molecules
Directory of Open Access Journals (Sweden)
Germán Rojas-Lorenzo
2016-09-01
Full Text Available A Langevin canonical framework for a chiral two-level system coupled to a bath of harmonic oscillators is used within a coupling scheme different from the well-known spin-boson model to study the quantum stochastic resonance for chiral molecules. This process refers to the amplification of the response to an external periodic signal at a certain value of the noise strength, and is a cooperative effect of friction, noise, and periodic driving occurring in a bistable system. Furthermore, within this stochastic dynamics in the Markovian regime with Ohmic friction, the competition between tunneling and the parity-violating energy difference present in this type of chiral system plays a fundamental role. This mechanism is finally proposed to observe the so-far elusive parity-violating energy difference in chiral molecules.
Directory of Open Access Journals (Sweden)
MANFREDI, P.
2014-11-01
Full Text Available This paper extends recent literature results concerning the statistical simulation of circuits affected by random electrical parameters by means of the polynomial chaos framework. With respect to previous implementations, based on the generation and simulation of augmented and deterministic circuit equivalents, the modeling is extended to generic and 'black-box' multi-terminal nonlinear subcircuits describing complex devices, like those found in integrated circuits. Moreover, based on recently-published works in this field, a more effective approach to generate the deterministic circuit equivalents is implemented, thus yielding more compact and efficient models for nonlinear components. The approach is fully compatible with commercial (e.g., SPICE-type) circuit simulators and is thoroughly validated through the statistical analysis of a realistic interconnect structure with a 16-bit memory chip. Its accuracy is carefully established, including comparison against previous approaches.
Reliability estimation of structures under stochastic loading—A case study on nuclear piping
International Nuclear Information System (INIS)
Hari Prasad, M.; Rami Reddy, G.; Dubey, P.N.; Srividya, A.; Verma, A.K.
2013-01-01
Highlights: ► Structures are generally subjected to different types of loadings. ► One such type of loading is a random sequence, treated here as stochastic fatigue loading. ► In this methodology both stress amplitude and number of cycles to failure have been considered as random variables. ► The methodology has been demonstrated with a case study on nuclear piping. ► The failure probability of piping has been estimated as a function of time. - Abstract: Generally, structures are subjected to different types of loadings throughout their life time. These loads can be either discrete or continuous in nature, and can be either stationary or non-stationary processes. This means that structural reliability analysis considers not only random variables but also random variables that are functions of time, referred to as stochastic processes. A stochastic process can be viewed as a family of random variables. When a structure is subjected to a random loading, the failure probability can be estimated from the stresses developed in the structure and the failure criteria. In practice, structures are designed with a higher factor of safety to take care of such random loads. In such cases the structure will fail only when the random loads are cyclic in nature. In traditional reliability analysis, the variation in the load is treated as a random variable, and the concept of extreme value theory is used to account for the number of occurrences of the loading. But this method neglects the damage accumulation that takes place from one loading to the next. Hence, in this paper, a new way of dealing with these types of problems is discussed using the concept of stochastic fatigue loading. The random loading has been considered as earthquake loading. The methodology has been demonstrated with a case study on nuclear power plant piping.
Directory of Open Access Journals (Sweden)
F. Hossain
2004-01-01
Full Text Available This study presents a simple and efficient scheme for Bayesian estimation of uncertainty in soil moisture simulation by a Land Surface Model (LSM). The scheme is assessed within a Monte Carlo (MC) simulation framework based on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. A primary limitation of using the GLUE method is the prohibitive computational burden imposed by uniform random sampling of the model's parameter distributions. Sampling is improved in the proposed scheme by stochastic modeling of the parameters' response surface that recognizes the non-linear deterministic behavior between soil moisture and land surface parameters. Uncertainty in soil moisture simulation (model output) is approximated through a Hermite polynomial chaos expansion of normal random variables that represent the model's parameter (model input) uncertainty. The unknown coefficients of the polynomial are calculated using a limited number of model simulation runs. The calibrated polynomial is then used as a fast-running proxy to the slower-running LSM to predict the degree of representativeness of a randomly sampled model parameter set. The scheme's sampling efficiency is evaluated through comparison with fully random MC sampling (the norm for GLUE) and the nearest-neighborhood sampling technique. The scheme was able to reduce the computational burden of random MC sampling for GLUE by 10%-70%. The scheme was also found to be about 10% more efficient than the nearest-neighborhood sampling method in predicting a sampled parameter set's degree of representativeness. The GLUE based on the proposed sampling scheme did not alter the essential features of the uncertainty structure in soil moisture simulation. The scheme can potentially make GLUE uncertainty estimation for any LSM more efficient, as it does not impose any additional structural or distributional assumptions.
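The core idea above — calibrate a Hermite polynomial chaos expansion on a limited number of expensive model runs, then use it as a fast proxy — can be sketched in a few lines. The `slow_model` function below is a hypothetical stand-in for the LSM, not the model used in the study, and the orders and sample sizes are arbitrary choices.

```python
import numpy as np

def slow_model(theta):
    """Hypothetical stand-in for the slow-running LSM (illustrative only)."""
    return np.exp(0.3 * theta) + 0.1 * theta**2

rng = np.random.default_rng(0)

# 1. A limited number of "expensive" runs at sampled standard-normal inputs
xi = rng.standard_normal(40)
y = slow_model(xi)

# 2. Calibrate a 4th-order probabilists'-Hermite chaos expansion (least squares)
coeffs = np.polynomial.hermite_e.hermefit(xi, y, deg=4)

# 3. The calibrated polynomial is now a fast proxy for fresh parameter samples
xi_new = rng.standard_normal(10_000)
proxy = np.polynomial.hermite_e.hermeval(xi_new, coeffs)
```

Evaluating the polynomial is orders of magnitude cheaper than re-running the model, which is what makes the GLUE-style dense sampling affordable.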
Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.
2017-12-01
Remotely sensed data have transformed the field of large scale hydraulic modelling. New digital elevation, hydrography and river width data have allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent have allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60 km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event 'footprint'. Despite this, many national scale risk analyses still use 'constant in space' return period hazard layers (e.g. the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge to gauge correlations. We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in
Stochastic Plume Simulations for the Fukushima Accident and the Deep Water Horizon Oil Spill
Coelho, E.; Peggion, G.; Rowley, C.; Hogan, P.
2012-04-01
The Fukushima Dai-ichi power plant suffered damage leading to radioactive contamination of coastal waters. Major issues in characterizing the extent of the affected waters were a poor knowledge of the radiation released to the coastal waters and the rather complex coastal dynamics of the region, not deterministically captured by the available prediction systems. Equivalently, during the Gulf of Mexico Deep Water Horizon oil platform accident in April 2010, significant amounts of oil and gas were released from the ocean floor. For this case, issues in mapping and predicting the extent of the affected waters in real-time were a poor knowledge of the actual amounts of oil reaching the surface and the fact that coastal dynamics over the region were not deterministically captured by the available prediction systems. To assess the ocean regions and times that were most likely affected by these accidents while capturing the above sources of uncertainty, ensembles of the Navy Coastal Ocean Model (NCOM) were configured over the two regions (NE Japan and Northern Gulf of Mexico). For the Fukushima case tracers were released on each ensemble member; their locations at each instant provided reference positions of water volumes where the signature of water released from the plant could be found. For the Deep Water Horizon oil spill case each ensemble member was coupled with a diffusion-advection solution to estimate possible scenarios of oil concentrations using perturbed estimates of the released amounts as the source terms at the surface. Stochastic plumes were then defined using a Risk Assessment Code (RAC) analysis that associates a number from 1 to 5 to each grid point, determined by the likelihood of having tracer particle within short ranges (for the Fukushima case), hence defining the high risk areas and those recommended for monitoring. For the Oil Spill case the RAC codes were determined by the likelihood of reaching oil concentrations as defined in the Bonn Agreement
Energy Technology Data Exchange (ETDEWEB)
Kitteroed, Nils-Otto
1997-12-31
The background for this thesis was the increasing risk of contamination of water resources and the requirement of groundwater protection. Specifically, the thesis implements procedures to estimate and simulate observed heterogeneities in the unsaturated zone and evaluates what impact the heterogeneities may have on the water flow. The broad goal was to establish a reference model with high spatial resolution within a small area and to condition the model using spatially frequent field observations, and the Moreppen site at Oslo's new major airport was used for this purpose. An approach is presented for the use of ground penetrating radar in which indicator kriging is used to estimate continuous stratigraphical architecture. Kriging is also used to obtain 3D images of soil moisture. A simulation algorithm based on the Karhunen-Loeve expansion is evaluated and a modification of the Karhunen-Loeve simulation is suggested that makes it possible to increase the size of the simulation lattice. This is obtained by kriging interpolation of the eigenfunctions. 250 refs., 40 figs., 7 tabs.
Stochastic simulation of large grids using free and public domain software
Bruin, de S.; Wit, de A.J.W.
2005-01-01
This paper proposes a tiled map procedure enabling sequential indicator simulation on grids consisting of several tens of millions of cells, without imposing excessive memory requirements. Spatial continuity across map tiles is handled by conditioning adjacent tiles on their shared boundaries. Tiles
International Nuclear Information System (INIS)
Kung Chen Shan; Wen Xian Huan; Cvetkovic, V.; Winberg, A.
1992-06-01
The non-parametric and parametric stochastic continuum approaches were applied to a realistic synthetic exhaustive hydraulic conductivity field to study the effects of hard and soft conditioning. From the reference domain, a number of data points were selected, either in a random or designed fashion, to form sample data sets. Based on established experimental variograms and the conditioning data, 100 realizations each of the studied domain were generated. The flow field was calculated for each realization, and particle arrival time and arrival position along the discharge boundary were evaluated. It was shown that conditioning on soft data reduces the uncertainty of solute arrival time, and that conditioning on soft data suggests an improvement in characterizing channeling effects. It was found that the improvement in the prediction of the breakthrough was moderate when conditioning on 25 hard and 100 soft data compared to 25 hard data only. (au)
Energy Technology Data Exchange (ETDEWEB)
Jung, Gerhard, E-mail: jungge@uni-mainz.de; Schmid, Friederike, E-mail: friederike.schmid@uni-mainz.de [Institut für Physik, Johannes Gutenberg-Universität Mainz, Staudingerweg 9, D-55099 Mainz (Germany)
2016-05-28
Exact values for bulk and shear viscosity are important to characterize a fluid, and they are a necessary input for a continuum description. Here we present two novel methods to compute bulk viscosities by non-equilibrium molecular dynamics simulations of steady-state systems with periodic boundary conditions — one based on frequent particle displacements and one based on the application of external bulk forces with an inhomogeneous force profile. In equilibrium simulations, viscosities can be determined from the stress tensor fluctuations via Green-Kubo relations; however, the correct incorporation of random and dissipative forces is not obvious. We discuss different expressions proposed in the literature and test them at the example of a dissipative particle dynamics fluid.
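The equilibrium route mentioned above — viscosity from stress-tensor fluctuations via a Green-Kubo relation — can be illustrated on synthetic data. The sketch below (not the authors' simulation) generates an Ornstein-Uhlenbeck "stress" signal whose autocorrelation is A·exp(-t/τ), so the Green-Kubo time integral is known exactly to be A·τ; the V/(k_B·T) prefactor is left out, and all parameter values are arbitrary.

```python
import numpy as np

def acf(x, nmax):
    """Autocorrelation <x(0) x(t)> for lags 0..nmax-1 (mean removed)."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(nmax)])

# Synthetic shear-stress trace: an Ornstein-Uhlenbeck process with
# autocorrelation A * exp(-t / tau), so the Green-Kubo integral is A * tau.
rng = np.random.default_rng(2)
dt, tau, A, n = 0.01, 1.0, 1.0, 400_000
f = np.exp(-dt / tau)
kick = np.sqrt(A * (1.0 - f * f))
stress = np.empty(n)
stress[0] = 0.0
for i in range(1, n):
    stress[i] = f * stress[i - 1] + kick * rng.standard_normal()

corr = acf(stress, nmax=1000)      # lags out to t = 10 * tau
gk_integral = corr.sum() * dt      # rectangle rule; expect ~ A * tau
```

In a real molecular dynamics run the same integral is taken over the measured off-diagonal stress-tensor autocorrelation, which is where the subtleties with random and dissipative forces discussed in the abstract enter.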
Tam, Vincent H; Kabbara, Samer
2006-10-01
Monte Carlo simulations (MCSs) are increasingly being used to predict the pharmacokinetic variability of antimicrobials in a population. However, various MCS approaches may differ in the accuracy of the predictions. We compared the performance of 3 different MCS approaches using a data set with known parameter values and dispersion. Ten concentration-time profiles were randomly generated and used to determine the best-fit parameter estimates. Three MCS methods were subsequently used to simulate the AUC(0-infinity) of the population, using the central tendency and dispersion of the following in the subject sample: 1) K and V; 2) clearance and V; 3) AUC(0-infinity). In each scenario, 10000 subject simulations were performed. Compared to the true AUC(0-infinity) of the population, the mean biases of the three methods were 1) 58.4, 2) 380.7, and 3) 12.5 mg h L(-1), respectively. Our results suggest that the most realistic MCS approach appeared to be the one based on the variability of AUC(0-infinity) in the subject sample.
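The bias reported above has a simple origin: for a one-compartment model, AUC(0-infinity) = Dose/(K·V) is a convex function of K and V, so sampling K and V and forming their ratio inflates the mean AUC relative to the value at the mean parameters (Jensen's inequality). A minimal sketch with hypothetical parameter values (not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)
dose = 1000.0                        # mg (hypothetical values throughout)
K_mean, K_sd = 0.2, 0.05             # elimination rate constant, 1/h
V_mean, V_sd = 30.0, 6.0             # volume of distribution, L

# Strategy based on K and V: sample both, then form AUC = Dose / (K * V)
n = 10_000
K = np.clip(rng.normal(K_mean, K_sd, n), 1e-3, None)   # keep K, V positive
V = np.clip(rng.normal(V_mean, V_sd, n), 1e-3, None)
auc_from_kv = dose / (K * V)

# Since 1/(K*V) is convex, E[AUC] exceeds Dose / (E[K] * E[V]) (Jensen)
auc_at_means = dose / (K_mean * V_mean)
bias = auc_from_kv.mean() - auc_at_means
```

A method that samples AUC(0-infinity) directly from its own sample mean and dispersion reproduces that mean by construction, consistent with the abstract's finding that this approach was least biased.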
Extending Stochastic Network Calculus to Loss Analysis
Directory of Open Access Journals (Sweden)
Chao Luo
2013-01-01
Full Text Available Loss is an important parameter of Quality of Service (QoS. Though stochastic network calculus is a very useful tool for performance evaluation of computer networks, existing studies on stochastic service guarantees mainly focused on the delay and backlog. Some efforts have been made to analyse loss by deterministic network calculus, but there are few results to extend stochastic network calculus for loss analysis. In this paper, we introduce a new parameter named loss factor into stochastic network calculus and then derive the loss bound through the existing arrival curve and service curve via this parameter. We then prove that our result is suitable for the networks with multiple input flows. Simulations show the impact of buffer size, arrival traffic, and service on the loss factor.
Dobramysl, U; Holcman, D
2018-02-15
Is it possible to recover the position of a source from the steady-state fluxes of Brownian particles to small absorbing windows located on the boundary of a domain? To address this question, we develop a numerical procedure to avoid tracking Brownian trajectories in the entire infinite space. Instead, we generate particles near the absorbing windows, computed from the analytical expression of the exit probability. When the Brownian particles are generated by a steady-state gradient at a single point, we compute asymptotically the fluxes to small absorbing holes distributed on the boundary of half-space and on a disk in two dimensions, which agree with stochastic simulations. We also derive an expression for the splitting probability between small windows using the matched asymptotic method. Finally, when there are more than two small absorbing windows, we show how to reconstruct the position of the source from the diffusion fluxes. The present approach provides a computational first principle for the mechanism of sensing a gradient of diffusing particles, a ubiquitous problem in cell biology.
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, which can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
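The two-step recipe described above — color a white Gaussian sample to the target spectrum, then apply a single inverse-transform mapping to the target marginal — can be sketched as follows. This is an illustration of the general idea, not the authors' code; the unit-mean exponential target and Gaussian low-pass filter are arbitrary choices.

```python
import math
import numpy as np

def colored_samples(n, corr_len, inv_cdf, rng):
    """White Gaussian noise -> colored Gaussian with the target spectrum
    -> one inverse-transform step to the target marginal distribution."""
    white = rng.standard_normal(n)
    freqs = np.fft.rfftfreq(n)
    filt = np.exp(-(np.pi * freqs * corr_len) ** 2)   # Gaussian low-pass target
    colored = np.fft.irfft(np.fft.rfft(white) * filt, n)
    colored = (colored - colored.mean()) / colored.std()  # ~ standard normal
    # Map Gaussian -> uniform (via the normal CDF) -> target (via inv_cdf)
    norm_cdf = np.vectorize(lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0))))
    return inv_cdf(norm_cdf(colored))

rng = np.random.default_rng(4)
# Target marginal: unit-mean exponential; corr_len sets the spectral content
exp_inv_cdf = lambda u: -np.log(np.clip(1.0 - u, 1e-12, None))
x = colored_samples(50_000, corr_len=20, inv_cdf=exp_inv_cdf, rng=rng)
```

The marginal distribution is reproduced exactly by the inverse transform; as the paper notes, the power spectrum is only approximately conserved, since the nonlinear mapping distorts it somewhat.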
O'Neill, J. J.; Cai, X.-M.; Kinnersley, R.
2016-10-01
The large-eddy simulation (LES) approach has recently exhibited its appealing capability of capturing turbulent processes inside street canyons and the urban boundary layer aloft, and its potential for deriving the bulk parameters adopted in low-cost operational urban dispersion models. However, the thin roof-level shear layer may be under-resolved in most LES set-ups and thus sophisticated subgrid-scale (SGS) parameterisations may be required. In this paper, we consider the important case of pollutant removal from an urban street canyon of unit aspect ratio (i.e. building height equal to street width) with the external flow perpendicular to the street. We show that by employing a stochastic SGS model that explicitly accounts for backscatter (energy transfer from unresolved to resolved scales), the pollutant removal process is better simulated compared with the use of a simpler (fully dissipative) but widely-used SGS model. The backscatter induces additional mixing within the shear layer which acts to increase the rate of pollutant removal from the street canyon, giving better agreement with a recent wind-tunnel experiment. The exchange velocity, an important parameter in many operational models that determines the mass transfer between the urban canopy and the external flow, is predicted to be around 15% larger with the backscatter SGS model; consequently, the steady-state mean pollutant concentration within the street canyon is around 15% lower. A database of exchange velocities for various other urban configurations could be generated and used as improved input for operational street canyon models.
Singular stochastic differential equations
Cherny, Alexander S
2005-01-01
The authors introduce, in this research monograph on stochastic differential equations, a class of points termed isolated singular points. Stochastic differential equations possessing such points (called singular stochastic differential equations here) arise often in theory and in applications. However, known conditions for the existence and uniqueness of a solution typically fail for such equations. The book concentrates on the study of the existence, the uniqueness, and, what is most important, on the qualitative behaviour of solutions of singular stochastic differential equations. This is done by providing a qualitative classification of isolated singular points, into 48 possible types.
Using stochastic activity networks to study the energy feasibility of automatic weather stations
Energy Technology Data Exchange (ETDEWEB)
Cassano, Luca [Dipartimento di Elettronica, Informatica e Bioingegneria, Politecnico di Milano (Italy); Cesarini, Daniel [Scuola Superiore Sant’Anna, Pisa (Italy); Avvenuti, Marco [Dipartimento di Ingegneria dell’Informazione, University of Pisa (Italy)
2015-03-10
Automatic Weather Stations (AWSs) are systems equipped with a number of environmental sensors and communication interfaces used to monitor harsh environments, such as glaciers and deserts. Designing such systems is challenging, since designers have to maximize the amount of sampled and transmitted data while considering the energy needs of the system, which in most cases is powered by rechargeable batteries and exploits energy harvesting, e.g., solar cells and wind turbines. To support designers of AWSs in defining the software tasks and hardware configuration, we designed and implemented an energy-aware simulator of such systems. The simulator relies on the Stochastic Activity Networks (SANs) formalism and has been developed using the Möbius tool. In this paper we first show how we used the SAN formalism to model the various components of an AWS; we then report results from an experiment carried out to validate the simulator against a real-world AWS; and we finally show some examples of usage of the proposed simulator.
Lanchier, Nicolas
2017-01-01
Three coherent parts form the material covered in this text, portions of which have not been widely covered in traditional textbooks. In this coverage the reader is quickly introduced to several different topics enriched with 175 exercises which focus on real-world problems. Exercises range from the classics of probability theory to more exotic research-oriented problems based on numerical simulations. Intended for graduate students in mathematics and applied sciences, the text provides the tools and training needed to write and use programs for research purposes. The first part of the text begins with a brief review of measure theory and revisits the main concepts of probability theory, from random variables to the standard limit theorems. The second part covers traditional material on stochastic processes, including martingales, discrete-time Markov chains, Poisson processes, and continuous-time Markov chains. The theory developed is illustrated by a variety of examples surrounding applications such as the ...
Ponomarev, Artem; Plante, Ianik; George, Kerry; Wu, Honglu
2014-01-01
The formation of double-strand breaks (DSBs) and chromosomal aberrations (CAs) is of great importance in radiation research and, specifically, in space applications. We present a new particle track and DNA damage model, in which the particle stochastic track structure is combined with the random walk (RW) structure of chromosomes in a cell nucleus. The motivation for this effort stems from the fact that the model with the RW chromosomes, NASARTI (NASA radiation track image), previously relied on amorphous track structure, while the stochastic track structure model RITRACKS (Relativistic Ion Tracks) was focused on more microscopic targets than the entire genome. We have combined chromosomes simulated by RWs with stochastic track structure, which uses nanoscopic dose calculations performed with the Monte-Carlo simulation by RITRACKS in a voxelized space. The new simulations produce the number of DSBs as a function of dose and particle fluence for high-energy particles, including iron, carbon and protons, using voxels of 20 nm dimension. The combined model also calculates yields of radiation-induced CAs and unrejoined chromosome breaks in normal and repair-deficient cells. The joined computational model is calibrated using the relative frequencies and distributions of chromosomal aberrations reported in the literature. The model considers fractionated deposition of energy to approximate dose rates of the space flight environment. The joined model also predicts the yields and sizes of translocations, dicentrics, rings, and more complex-type aberrations formed in the G0/G1 cell cycle phase during the first cell division after irradiation. We found that the main advantage of the joined model is the ability to simulate small doses: 0.05-0.5 Gy. At such low doses, the stochastic track structure proved to be indispensable, as the action of individual delta-rays becomes more important.
Applying Statistical Design to Control the Risk of Over-Design with Stochastic Simulation
Directory of Open Access Journals (Sweden)
Yi Wu
2010-02-01
Full Text Available By comparing a hard real-time system and a soft real-time system, this article highlights the risk of over-design in soft real-time system design. To deal with this risk, a novel concept of statistical design is proposed. Statistical design is the process of accurately accounting for and mitigating the effects of variation in part geometry and other environmental conditions, while at the same time optimizing a target performance factor. However, statistical design can be a very difficult and complex task when using classical mathematical methods. Thus, a simulation methodology to optimize the design is proposed in order to bridge the gap between real-time analysis and optimization for robust and reliable system design.
Nonlinear Stochastic stability analysis of Wind Turbine Wings by Monte Carlo Simulations
DEFF Research Database (Denmark)
Larsen, Jesper Winther; Iwankiewiczb, R.; Nielsen, Søren R.K.
2007-01-01
and inertial contributions. A reduced two-degrees-of-freedom modal expansion is used specifying the modal coordinate of the fundamental blade and edgewise fixed base eigenmodes of the beam. The rotating beam is subjected to harmonic and narrow-banded support point motion from the nacelle displacement … under narrow-banded excitation, and it is shown that the qualitative behaviour of the strange attractor is very similar for the periodic and almost periodic responses, whereas the strange attractor for the chaotic case loses structure as the excitation becomes narrow-banded. Furthermore, the characteristic behaviour of the strange attractor is shown to be identifiable by the so-called information dimension. Due to the complexity of the coupled nonlinear structural system all analyses are carried out via Monte Carlo simulations.
International Nuclear Information System (INIS)
Ye, Ming; Pan, Feng; Hu, Xiaolong; Zhu, Jianting
2007-01-01
Yucca Mountain has been proposed by the U.S. Department of Energy as the nation's long-term, permanent geologic repository for spent nuclear fuel or high-level radioactive waste. The potential repository would be located in Yucca Mountain's unsaturated zone (UZ), which acts as a critical natural barrier delaying arrival of radionuclides to the water table. Since radionuclide transport in groundwater can pose serious threats to human health and the environment, it is important to understand how much and how fast water and radionuclides travel through the UZ to groundwater. The UZ system consists of multiple hydrogeologic units whose hydraulic and geochemical properties exhibit systematic and random spatial variation, or heterogeneity, at multiple scales. Predictions of radionuclide transport under such complicated conditions are uncertain, and the uncertainty complicates decision making and risk analysis. This project aims at using geostatistical and stochastic methods to assess uncertainty of unsaturated flow and radionuclide transport in the UZ at Yucca Mountain. Focus of this study is parameter uncertainty of hydraulic and transport properties of the UZ. The parametric uncertainty arises since limited parameter measurements are unable to deterministically describe spatial variability of the parameters. In this project, matrix porosity, permeability and sorption coefficient of the reactive tracer (neptunium) of the UZ are treated as random variables. Corresponding propagation of parametric uncertainty is quantitatively measured using mean, variance, 5th and 95th percentiles of simulated state variables (e.g., saturation, capillary pressure, percolation flux, and travel time). These statistics are evaluated using a Monte Carlo method, in which a three-dimensional flow and transport model implemented using the TOUGH2 code is executed with multiple parameter realizations of the random model parameters. The project specifically studies uncertainty of unsaturated flow
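The Monte Carlo uncertainty-propagation step described above can be sketched in a few lines. The snippet below is a minimal illustration only: the algebraic travel-time surrogate, the parameter distributions, and all numerical values are hypothetical stand-ins, not the calibrated TOUGH2 flow and transport model or the Yucca Mountain parameter data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_real = 10_000  # number of Monte Carlo parameter realizations

# Hypothetical parameter distributions (stand-ins for calibrated UZ values):
# permeability k [m^2] lognormal, matrix porosity phi uniform.
k = rng.lognormal(mean=np.log(1e-13), sigma=1.0, size=n_real)
phi = rng.uniform(0.05, 0.25, size=n_real)

# Toy surrogate for the flow/transport model: advective travel time over a
# 300 m column under unit-gradient Darcy flow, q = rho*g*k/mu.
rho_g_over_mu = 9.81e3 / 1e-3    # rho*g/mu [1/(m*s)], water at ~20 C
L = 300.0                        # [m] unsaturated-zone thickness
q = rho_g_over_mu * k            # Darcy flux [m/s]
travel_time = phi * L / q        # advective travel time [s]

# Statistics of the simulated state variable, as in the project description:
# mean, variance, and 5th/95th percentiles.
years = travel_time / 3.15e7
print(f"mean {years.mean():.3g} a, variance {years.var():.3g}")
print(f"5th/95th percentiles [a]: {np.percentile(years, [5, 95])}")
```

In the actual study each realization requires a full three-dimensional model run; the surrogate here only makes the sampling-and-summarize pattern explicit.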
DEFF Research Database (Denmark)
Pang, Kar Mun; Jangi, Mehdi; Bai, X.-S.
generated similar results. The principal motivation for ESF compared to Lagrangian particle based PDF is the relative ease of implementation of the former into Eulerian computational fluid dynamics(CFD) codes [5]. Several works have attempted to implement the ESF model for the simulations of diesel spray......The use of transported Probability Density Function(PDF) methods allows a single model to compute the autoignition, premixed mode and diffusion flame of diesel combustion under engine-like conditions [1,2]. The Lagrangian particle based transported PDF models have been validated across a wide range...... combustion under engine-like conditions.The current work aims to further evaluate the performance of the ESF model in this application, with an emphasis on examining the convergence of the number of stochastic fields, nsf. Five test conditions, covering both the conventional diesel combustion and low...
Stochastic models for atmospheric dispersion
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager
2003-01-01
Simple stochastic differential equation models have been applied by several researchers to describe the dispersion of tracer particles in the planetary atmospheric boundary layer and to form the basis for computer simulations of particle paths. To obtain the drift coefficient, empirical vertical...... positions close to the boundaries. Different rules have been suggested in the literature with justifications based on simulation studies. Herein the relevant stochastic differential equation model is formulated in a particular way. The formulation is based on the marginal transformation of the position...... velocity distributions that depend on height above the ground both with respect to standard deviation and skewness are substituted into the stationary Fokker-Planck equation. The particle position distribution is taken to be uniform (the well-mixed condition) and also a given dispersion coefficient...
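A particle-path simulation of this kind can be sketched with a simple Euler scheme. The sketch below assumes a constant dispersion coefficient, zero drift, and reflecting boundaries, which is a deliberate simplification (the model above uses height-dependent, skewed velocity distributions and a drift derived from the Fokker-Planck equation); it only illustrates the well-mixed condition, i.e. that a uniform initial ensemble stays uniform.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-dimensional Langevin model for vertical tracer position z(t):
#   dz = a(z) dt + sqrt(2*K) dW,
# with a constant (hypothetical) dispersion coefficient K, zero drift, and
# reflecting boundaries at the ground z=0 and mixing height z=h.
K, h, dt = 5.0, 1000.0, 0.5      # [m^2/s], [m], [s]
n_particles, n_steps = 5000, 2000

z = rng.uniform(0.0, h, n_particles)     # well-mixed initial condition
for _ in range(n_steps):
    z += np.sqrt(2.0 * K * dt) * rng.standard_normal(n_particles)
    # reflect particle paths at both boundaries
    z = np.abs(z)
    z = h - np.abs(h - z)

# Under the well-mixed condition the ensemble should remain uniform,
# so the normalized mean position stays near 0.5.
print(z.mean() / h)
```

The boundary-reflection rule here is exactly the kind of ad hoc choice the abstract notes has been justified in the literature only through simulation studies.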
Modelling and Analysis of Smart Grid: A Stochastic Model Checking Case Study
DEFF Research Database (Denmark)
Yuksel, Ender; Zhu, Huibiao; Nielson, Hanne Riis
2012-01-01
that require novel methods and applications. In this context, an important issue is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese Smart Grid implementation as a case study and address the verification problem for performance and energy......Cyber-physical systems integrate information and communication technology functions into the physical elements of a system for monitoring and controlling purposes. The conversion of the traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues...... consumption. We employ a stochastic model checking approach and present our modelling and analysis study using the PRISM model checker....
Quantitative modelling and analysis of a Chinese smart grid: a stochastic model checking case study
DEFF Research Database (Denmark)
Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming
2014-01-01
Cyber-physical systems integrate information and communication technology with the physical elements of a system, mainly for monitoring and controlling purposes. The conversion of the traditional power grid into a smart grid, a fundamental example of a cyber-physical system, raises a number of issues...... that require novel methods and applications. One of the important issues in this context is the verification of certain quantitative properties of the system. In this paper, we consider a specific Chinese smart grid implementation as a case study and address the verification problem for performance and energy...... consumption. We employ a stochastic model checking approach and present our modelling and analysis study using the PRISM model checker....
Remaining useful life estimation based on stochastic deterioration models: A comparative study
International Nuclear Information System (INIS)
Le Son, Khanh; Fouladirad, Mitra; Barros, Anne; Levrat, Eric; Iung, Benoît
2013-01-01
Prognostics of system lifetime is a basic requirement for condition-based maintenance in many application domains where safety, reliability, and availability are of first importance. This paper presents a probabilistic method for prognostics applied to the 2008 PHM Conference Challenge data. A stochastic process (Wiener process) combined with a data analysis method (principal component analysis) is proposed to model the deterioration of the components and to estimate the remaining useful life (RUL) on a case study. The advantages of our probabilistic approach are pointed out, and a comparison with existing results on the same data is made.
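A Wiener-process deterioration model of the kind used above can be sketched with simulated sample paths and an empirical first-passage-time estimate of the RUL. The drift, diffusion, and failure-threshold values below are made up for illustration and are not fitted to the PHM Challenge data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Wiener degradation model: X(t) = x_now + mu*t + sigma*W(t).
mu, sigma = 0.8, 0.5            # drift and diffusion (illustrative units/cycle)
x_now, threshold = 10.0, 50.0   # current health index and failure level
dt, n_paths, n_steps = 1.0, 20_000, 200

x = np.full(n_paths, x_now)
hit = np.full(n_paths, np.nan)  # first-passage time of each simulated path
for k in range(1, n_steps + 1):
    x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    newly = np.isnan(hit) & (x >= threshold)
    hit[newly] = k * dt

rul = hit[~np.isnan(hit)]
# For a Wiener process with positive drift the first-passage time is
# inverse-Gaussian with mean (threshold - x_now)/mu; the empirical mean
# of the simulated RUL distribution should agree.
print(f"estimated mean RUL: {rul.mean():.1f} cycles")
```

In the paper the degradation signal itself is first extracted from the multivariate sensor data by principal component analysis; here the one-dimensional health index is simply assumed.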
Stochastic Wake Modelling Based on POD Analysis
Directory of Open Access Journals (Sweden)
David Bastine
2018-03-01
Full Text Available In this work, large eddy simulation data is analysed to investigate a new stochastic modeling approach for the wake of a wind turbine. The data is generated by the large eddy simulation (LES) model PALM combined with an actuator disk with rotation representing the turbine. After applying a proper orthogonal decomposition (POD), three different stochastic models for the weighting coefficients of the POD modes are deduced, resulting in three different wake models. Their performance is investigated mainly on the basis of aeroelastic simulations of a wind turbine in the wake. Three different load cases and their statistical characteristics are compared for the original LES, truncated PODs and the stochastic wake models including different numbers of POD modes. It is shown that approximately six POD modes are enough to capture the load dynamics on large temporal scales. Modeling the weighting coefficients as independent stochastic processes leads to similar load characteristics as in the case of the truncated POD. To complete this simplified wake description, we show evidence that the small-scale dynamics can be captured by adding to our model a homogeneous turbulent field. In this way, we present a procedure to derive stochastic wake models from costly computational fluid dynamics (CFD) calculations or elaborate experimental investigations. These numerically efficient models provide the added value of possible long-term studies. Depending on the aspects of interest, different minimal models may be obtained.
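The POD step and the simplest of the stochastic surrogates (independent coefficient processes) can be sketched as follows. The synthetic snapshots below stand in for the LES wake data, and the white-noise coefficient model is a crude placeholder for the fitted stochastic processes; everything numerical is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "flow field" snapshots standing in for LES wake data:
# n_t time samples of an n_x-point velocity profile (two travelling
# waves plus weak noise).
n_t, n_x = 400, 64
t = np.linspace(0, 40, n_t)[:, None]
x = np.linspace(0, 2 * np.pi, n_x)[None, :]
snapshots = (np.sin(x - 0.5 * t) + 0.3 * np.sin(2 * x + t)
             + 0.05 * rng.standard_normal((n_t, n_x)))

mean = snapshots.mean(axis=0)
fluct = snapshots - mean

# POD via the SVD: rows of vt are spatial modes, u*s the time-dependent
# weighting coefficients a_i(t).
u, s, vt = np.linalg.svd(fluct, full_matrices=False)
coeffs = u * s

# Truncated POD reconstruction with the first r modes.
r = 6
recon = mean + coeffs[:, :r] @ vt[:r]

# Stochastic surrogate: replace each a_i(t) by an independent Gaussian
# process with matched variance (a crude stand-in for the fitted models).
surrogate_coeffs = rng.standard_normal((n_t, r)) * coeffs[:, :r].std(axis=0)
surrogate = mean + surrogate_coeffs @ vt[:r]

energy = (s[:r] ** 2).sum() / (s ** 2).sum()
print(f"first {r} modes capture {100 * energy:.1f}% of fluctuation energy")
```

The choice r = 6 mirrors the finding above that roughly six modes suffice for the large-scale load dynamics; the homogeneous turbulent field added for small-scale dynamics is omitted here.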
Conducting Simulation Studies in Psychometrics
Feinberg, Richard A.; Rubright, Jonathan D.
2016-01-01
Simulation studies are fundamental to psychometric discourse and play a crucial role in operational and academic research. Yet, resources for psychometricians interested in conducting simulations are scarce. This Instructional Topics in Educational Measurement Series (ITEMS) module is meant to address this deficiency by providing a comprehensive…
Directory of Open Access Journals (Sweden)
Akke Kok
Full Text Available Shortening or omitting the dry period of dairy cows improves metabolic health in early lactation and reduces management transitions for dairy cows. The success of implementation of these strategies depends on their impact on milk yield and farm profitability. Insight into these impacts is valuable for informed decision-making by farmers. The aim of this study was to investigate how shortening or omitting the dry period of dairy cows affects production and cash flows at the herd level, and greenhouse gas emissions per unit of milk, using a dynamic stochastic simulation model. The effects of dry period length on milk yield and calving interval assumed in this model were derived from the actual performance of commercial dairy cows over multiple lactations. The model simulated lactations, and calving and culling events of individual cows for herds of 100 cows. Herds were simulated for 5 years with a dry period of 56 (conventional), 28, or 0 days (n = 50 herds each). Partial cash flows were computed from revenues from sold milk, calves, and culled cows, and costs from feed and rearing youngstock. Greenhouse gas emissions were computed using a life cycle approach. A dry period of 28 days reduced milk production of the herd by 3.0% in years 2 through 5, compared with a dry period of 56 days. A dry period of 0 days reduced milk production by 3.5% in years 3 through 5, after a dip in milk production of 6.9% in year 2. On average, dry periods of 28 and 0 days reduced partial cash flows by €1,249 and €1,632 per herd per year, and increased greenhouse gas emissions by 0.7% and 0.5%, respectively. Considering the potential for enhancing cow welfare, these negative impacts of shortening or omitting the dry period seem justifiable, and they might even be offset by improved health.
Directory of Open Access Journals (Sweden)
Allan Saul
Full Text Available BACKGROUND: Typhoid fever caused by Salmonella enterica serovar Typhi (S. Typhi) remains a serious burden of disease, especially in developing countries of Asia and Africa. It is estimated to cause 200,000 deaths per year, mainly in children. S. Typhi is an obligate pathogen of humans, and although it has a relatively complex life cycle with a long-lived carrier state, the absence of non-human hosts suggests that well-targeted control methods should have a major impact on disease. Newer control methods, including new generations of vaccines, offer hope, but their implementation would benefit from quantitative models to guide the most cost-effective strategies. This paper presents a quantitative model of typhoid disease, immunity and transmission as a first step in that process. METHODOLOGY/PRINCIPAL FINDINGS: A stochastic agent-based model has been developed that incorporates known features of the biology of typhoid, including the probability of infection, the consequences of infection, treatment options, acquisition and loss of immunity as a result of infection and vaccination, the development of the carrier state, and the impact of environmental or behavioral factors on transmission. The model has been parameterized with values derived where possible from the literature; where this was not possible, the feasible parameter space has been determined by sensitivity analyses, fitting the simulations to the age distribution of field data. The model is able to adequately predict the age distribution of typhoid in two settings. CONCLUSIONS/SIGNIFICANCE: The modeling highlights the importance of variations in the exposure/resistance of infants and young children to infection in different settings, especially as this impacts on the design of control programs; it predicts that naturally induced clinical and sterile immunity to typhoid is long-lived, and highlights the importance of the carrier state, especially in areas of low transmission.
Study of stochastic approaches of the n-bodies problem: application to the nuclear fragmentation
International Nuclear Information System (INIS)
Guarnera, A.
1996-01-01
In the last decade, nuclear physics research has found, with the observation of phenomena such as multifragmentation or vaporization, the possibility to get a deeper insight into the nuclear matter phase diagram. For example, a spinodal decomposition scenario has been proposed to explain multifragmentation: because of the initial compression, the system may enter a region, the spinodal zone, in which nuclear matter is no longer stable, and so any fluctuation leads to the formation of fragments. This thesis deals with spinodal decomposition within the theoretical framework of stochastic mean-field approaches, in which the one-body density function may experience a stochastic evolution. We have shown that these approaches are able to describe phenomena, such as first-order phase transitions, in which fluctuations and many-body correlations play an important role. In the framework of stochastic mean-field approaches we have shown that fragment production by spinodal decomposition is characterized by typical time scales of the order of 100 fm/c and by typical size scales around the neon mass. We have also shown that these features are robust and that they are not affected significantly by a possible expansion of the system or by the finite size of nuclei. We have proposed typical partitions of the largest fragments as a signature of spinodal decomposition. The study and the comparison with experimental data, performed for the reactions Xe + Cu at 45 MeV/A and Xe + Sn at 50 MeV/A, have shown a remarkable agreement. Moreover, we would like to stress that the theory does not contain any adjustable parameter. These results seem to give a strong indication of the possibility of observing spinodal decomposition of nuclei. (author)
Simulation in International Studies
Boyer, Mark A.
2011-01-01
Social scientists have long worked to replicate real-world phenomena in their research and teaching environments. Unlike our biophysical science colleagues, we are faced with an area of study that is not governed by the laws of physics and other more predictable relationships. As a result, social scientists, and international studies scholars more…
Directory of Open Access Journals (Sweden)
Wang Yajun
2008-12-01
Full Text Available In order to address the complex uncertainties caused by the interfacing between the fuzziness and randomness of the safety problem for embankment engineering projects, and to evaluate the safety of embankment engineering projects more scientifically and reasonably, this study presents fuzzy logic modeling of the stochastic finite element method (SFEM) based on the harmonious finite element (HFE) technique, using a first-order approximation theorem. Fuzzy mathematical models of safety repertories were introduced into the SFEM to analyze the stability of embankments and foundations in order to describe the fuzzy failure procedure for the random safety performance function. The fuzzy models were developed with membership functions with half depressed gamma distribution, half depressed normal distribution, and half depressed echelon distribution. The fuzzy stochastic mathematical algorithm was used to comprehensively study the local failure mechanism of the main embankment section near Jingnan in the Yangtze River in terms of numerical analysis for the probability integration of reliability on the random field affected by three fuzzy factors. The result shows that the middle region of the embankment is the principal zone of concentrated failure due to local fractures. There is also some local shear failure on the embankment crust. This study provides a referential method for solving complex multi-uncertainty problems in engineering safety analysis.
Mostert, P.F.; Bokkers, E.A.M.; Middelaar, van C.E.; Hogeveen, H.; Boer, de I.J.M.
2018-01-01
The objective of this study was to estimate the economic impact of subclinical ketosis (SCK) in dairy cows. This metabolic disorder occurs in the period around calving and is associated with an increased risk of other diseases. Therefore, SCK affects farm productivity and profitability.
Stochastic processes and quantum theory
International Nuclear Information System (INIS)
Klauder, J.R.
1975-01-01
The author analyses a variety of stochastic processes, namely real-time diffusion phenomena, which are analogues of imaginary-time quantum theory and covariant imaginary-time quantum field theory. He elaborates some standard properties involving probability measures and stochastic variables and considers a simple class of examples. Finally, he develops the fact that certain stochastic theories actually exhibit divergences that simulate those of covariant quantum field theory and presents examples of both renormalizable and unrenormalizable behavior. (V.J.C.)
Energy Technology Data Exchange (ETDEWEB)
Mathies, M; Eisfeld, K; Paretzke, H; Wirth, E [Gesellschaft fuer Strahlen- und Umweltforschung m.b.H. Muenchen, Neuherberg (Germany, F.R.). Inst. fuer Strahlenschutz
1981-05-01
The effects of introducing probability distributions of the parameters in radionuclide transport models are investigated. Results from a Monte-Carlo simulation are presented for the transport of ¹³⁷Cs via the pasture-cow-milk pathway, taking into account the uncertainties and naturally occurring fluctuations in the rate constants. The results of the stochastic model calculations characterize the activity concentrations at a given time t and provide a great deal more information for the analysis of the environmental transport of radionuclides than deterministic calculations, in which the variation of parameters is not taken into consideration. Moreover, the stochastic model permits an estimate of the variation of the physico-chemical behaviour of radionuclides in the environment in a more realistic way than using only the highest transfer coefficients in deterministic approaches, which can lead to unrealistic overestimates of the probability with which high activity levels will be encountered.
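A Monte-Carlo calculation of this kind can be sketched for a simplified pasture-cow-milk chain. All distributions and parameter values below are illustrative assumptions for the sketch, not the paper's calibrated rate constants, and the milk compartment is treated in quasi-equilibrium with the feed for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000  # Monte Carlo realizations of the rate constants

# Hypothetical parameter distributions for the pasture -> cow -> milk chain.
lam_w = rng.lognormal(np.log(0.05), 0.3, n)   # pasture weathering loss [1/d]
f_m = rng.lognormal(np.log(0.008), 0.4, n)    # feed-to-milk transfer [d/L]
intake = 60.0                                  # pasture intake [kg/d]
c0 = 1000.0                                    # initial deposit [Bq/kg]

t = np.linspace(0.0, 60.0, 241)               # days after deposition
# Pasture and milk activity concentrations, one row per realization.
c_pasture = c0 * np.exp(-lam_w[:, None] * t[None, :])
c_milk = f_m[:, None] * intake * c_pasture     # quasi-equilibrium [Bq/L]

# The stochastic result is a distribution of activity at each time,
# rather than a single deterministic curve.
mean = c_milk.mean(axis=0)
p5, p95 = np.percentile(c_milk, [5, 95], axis=0)
print(f"day 10 milk activity: mean {mean[40]:.0f} Bq/L, "
      f"90% band [{p5[40]:.0f}, {p95[40]:.0f}] Bq/L")
```

The width of the percentile band is exactly the extra information the abstract contrasts with a single deterministic run using the highest transfer coefficients.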
Crystal plasticity study of monocrystalline stochastic honeycombs under in-plane compression
International Nuclear Information System (INIS)
Ma, Duancheng; Eisenlohr, Philip; Epler, Eike; Volkert, Cynthia A.; Shanthraj, Pratheek; Diehl, Martin; Roters, Franz; Raabe, Dierk
2016-01-01
We present a study on the plastic deformation of single crystalline stochastic honeycombs under in-plane compression using a crystal plasticity constitutive description for face-centered cubic (fcc) materials, focusing on the very early stage of plastic deformation, and identifying the interplay between the crystallographic orientation and the cellular structure during plastic deformation. We observe that despite the stochastic structure, surprisingly, the slip system activations in the honeycombs are almost identical to their corresponding bulk single crystals at the early stage of the plastic deformation. On the other hand, however, the yield stresses of the honeycombs are nearly independent of their crystallographic orientations. Similar mechanical response is found in compression testing of nanoporous gold micro-pillars aligned with various crystallographic orientations. The macroscopic stress tensors of the honeycombs show the same anisotropy as their respective bulk single crystals. Locally, however, there is an appreciable fluctuation in the local stresses, which are even larger than for polycrystals. This explains why the Taylor/Schmid factor associated with the crystallographic orientation is less useful to estimate the yield stresses of the honeycombs than the bulk single crystals and polycrystals, and why the plastic deformation occurs at smaller strains in the honeycombs than their corresponding bulk single crystals. Besides these findings, the observations of the crystallographic reorientation suggest that conventional orientation analysis tools, such as inverse pole figure and related tools, would in general fail to study the plastic deformation mechanism of monocrystalline cellular materials.
Experimental study of proton stochastic cooling in the NAP-M
International Nuclear Information System (INIS)
Dement'ev, E.N.; Zinevich, N.I.; Medvedko, A.S.; Parkhomchuk, V.V.; Pestrikov, D.V.
1983-01-01
Experimental results on stochastic cooling of a proton beam in the NAP-M are presented. The aim of the experiments is to assess the feasibility of using this cooling method in antiproton accumulator rings and to study the peculiarities of the cooling process. Two stochastic cooling systems have been studied: a wide-bandwidth system and a system with a resonance filter at the input. The experiments, conducted at an energy of 62 MeV, have demonstrated the possibility of antiproton accumulation. Thermal noise in the feedback system limits the cooling time to approximately 150 s for a single-channel system; to attain a cooling time of approximately 1 s, about one hundred systems operating in parallel are required. Mutual particle effects and coherent instabilities limit the maximum intensity of a particle beam cooled in approximately 1 s to about 10⁷ particles at technically attainable frequency bandwidths.
Energy Technology Data Exchange (ETDEWEB)
Zhang, J.L.; Ponnambalam, K. [Waterloo Univ., ON (Canada). Dept. of Systems Design Engineering
2005-08-01
A study was conducted to address some of the multi-reservoir operational problems associated with hydropower generation. Inflow, release, spill and storage are some of the large-scale, nonlinear and stochastic problems that can be solved using the Fletcher-Ponnambalam (FP) model for risk management in hydropower systems under deregulated energy markets. The main objective is to maximize benefits and minimize total cost while satisfying the system constraints. The FP model was developed for the first and second order of storage state distributions in terms of the inflow distribution. The FP method is suitable for multi-reservoir problems because it offers statistical information on the nature of the random behaviour of the system state variables without discretization. It is a cost-effective method because it avoids scenario-based optimization. In this study, price uncertainty was introduced into the model along with inflow uncertainty. The FP model and the Benders decomposition method were applied to the Lake Nipigon reservoir system. The FP results were compared with stochastic dual dynamic programming. Results show that the FP method achieves optimum operations, including risk minimization. However, sensitivity analysis must always be carried out because the FP model is sensitive to initial values. 10 refs., 1 tab., 8 figs., 1 appendix.
Precharattana, Monamorn; Nokkeaw, Arthorn; Triampo, Wannapong; Triampo, Darapond; Lenbury, Yongwimon
2011-07-01
Acquired Immunodeficiency Syndrome (AIDS) is responsible for millions of deaths worldwide. To date, many drug treatment regimens have been applied to AIDS patients but none has resulted in a successful cure. This is mainly due to the fact that free HIV particles are frequently in mutation, and infected CD4+ T cells normally reside in the lymphoid tissue where they cannot (so far) be eradicated. We present a stochastic cellular automaton (CA) model to computationally study what could be an alternative treatment, namely Leukapheresis (LCAP), to remove HIV infected leukocytes in the lymphoid tissue. We base our investigations on Monte Carlo computer simulations. Our major objective is to investigate how the number of infected CD4+ T cells changes in response to LCAP during the short-time (weeks) and long-time (years) scales of HIV/AIDS progression in an infected individual. To achieve our goal, we analyze the time evolution of the CD4+ T cell population in the lymphoid tissue (i.e., the lymph node) for HIV dynamics in treatment situations with various starting times and frequencies and under a no treatment condition. Our findings suggest that the effectiveness of the treatment depends mainly on the treatment starting time and the frequency of the LCAP. Other factors (e.g., the removal proportion, the treatment duration, and the state of removed cells) that likely influence disease progression are subjects for further investigation. Copyright © 2011 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Buck, J.; Young, D.
2007-01-01
The achievement of energy efficiency in commercial buildings is a function of the activities undertaken, the technology in place, and the extent to which those technologies are used efficiently. We study the factors that affect efficient energy use in the Canadian commercial sector by applying a stochastic frontier approach to a cross-section of Canadian commercial buildings included in the Commercial and Institutional Building Energy Use Survey (CIBEUS). Structural and climate-control features of the buildings as well as climatic conditions are assumed to determine the location of the frontier, while management-related variables including such factors as ownership type and activities govern whether or not the maximally attainable efficiency along the frontier is achieved. Our results indicate that although, on average, buildings appear to be fairly efficient, certain types of operations are more likely than others to exhibit energy efficiencies that are significantly worse than average. These results, along with those related to the effects of physical characteristics on the stochastic efficiency frontier, suggest that there is scope for focused policy initiatives to increase energy efficiency in this sector
Stochastic Generalized Method of Moments
Yin, Guosheng; Ma, Yanyuan; Liang, Faming; Yuan, Ying
2011-01-01
The generalized method of moments (GMM) is a very popular estimation and inference procedure based on moment conditions. When likelihood-based methods are difficult to implement, one can often derive various moment conditions and construct the GMM objective function. However, minimization of the objective function in the GMM may be challenging, especially over a large parameter space. Due to the special structure of the GMM, we propose a new sampling-based algorithm, the stochastic GMM sampler, which replaces the multivariate minimization problem by a series of conditional sampling procedures. We develop the theoretical properties of the proposed iterative Monte Carlo method, and demonstrate its superior performance over other GMM estimation procedures in simulation studies. As an illustration, we apply the stochastic GMM sampler to a Medfly life longevity study. Supplemental materials for the article are available online. © 2011 American Statistical Association.
Stochastic ground motion simulation
Rezaeian, Sanaz; Xiaodan, Sun; Beer, Michael; Kougioumtzoglou, Ioannis A.; Patelli, Edoardo; Siu-Kui Au, Ivan
2014-01-01
Strong earthquake ground motion records are fundamental in engineering applications. Ground motion time series are used in response-history dynamic analysis of structural or geotechnical systems. In such analysis, the validity of predicted responses depends on the validity of the input excitations. Ground motion records are also used to develop ground motion prediction equations (GMPEs) for intensity measures such as spectral accelerations that are used in response-spectrum dynamic analysis. Despite the thousands of available strong ground motion records, there remains a shortage of records for large-magnitude earthquakes at short distances or in specific regions, as well as records that sample specific combinations of source, path, and site characteristics.
Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah
2014-11-01
A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
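A single-site Neyman-Scott rectangular pulse simulation with the proposed mixed-exponential cell intensity can be sketched as follows. All parameter values are illustrative placeholders, not the values fitted to the Damansara River basin stations, and the spatial component of the Spatial-Temporal model is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal single-site Neyman-Scott rectangular pulse sketch.
T = 24.0 * 365            # simulated period [h]
lam = 1.0 / 80.0          # storm-origin rate [1/h]
mu_c = 5                  # mean number of cells per storm (Poisson)
beta = 1.0 / 3.0          # cell displacement rate after storm origin [1/h]
eta = 1.0 / 2.0           # cell duration rate [1/h]

# Mixed-exponential cell intensity: with probability p draw a "light"
# cell, otherwise a "heavy" one (the distribution this study proposes).
p, xi1, xi2 = 0.8, 1.0, 6.0    # mixing weight and component means [mm/h]

storms = rng.uniform(0.0, T, rng.poisson(lam * T))
hourly = np.zeros(int(T))
for s in storms:
    for _ in range(rng.poisson(mu_c)):
        start = s + rng.exponential(1.0 / beta)
        dur = rng.exponential(1.0 / eta)
        mean_int = xi1 if rng.random() < p else xi2
        intensity = rng.exponential(mean_int)      # cell intensity [mm/h]
        lo, hi = int(start), min(int(start + dur) + 1, int(T))
        for h in range(lo, hi):                    # accumulate over hours
            overlap = min(start + dur, h + 1) - max(start, h)
            if overlap > 0:
                hourly[h] += intensity * overlap

print(f"mean hourly rainfall: {hourly.mean():.3f} mm")
```

Fitting would then proceed by matching statistics of the simulated hourly series (mean, variance, autocorrelation, proportion dry) to the observed series, as assessed in the study.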
Stochastic Analysis with Financial Applications
Kohatsu-Higa, Arturo; Sheu, Shuenn-Jyi
2011-01-01
Stochastic analysis has a variety of applications to biological systems as well as physical and engineering problems, and its applications to finance and insurance have bloomed exponentially in recent times. The goal of this book is to present a broad overview of the range of applications of stochastic analysis and some of its recent theoretical developments. This includes numerical simulation, error analysis, parameter estimation, as well as control and robustness properties for stochastic equations. This book also covers the areas of backward stochastic differential equations via the (non-li
The theory of hybrid stochastic algorithms
International Nuclear Information System (INIS)
Duane, S.; Kogut, J.B.
1986-01-01
The theory of hybrid stochastic algorithms is developed. A generalized Fokker-Planck equation is derived and is used to prove that the correct equilibrium distribution is generated by the algorithm. Systematic errors following from the discrete time-step used in the numerical implementation of the scheme are computed. Hybrid algorithms which simulate lattice gauge theory with dynamical fermions are presented. They are optimized in computer simulations and their systematic errors and efficiencies are studied. (orig.)
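The hybrid scheme combines deterministic molecular-dynamics-style trajectories with a stochastic accept/reject step that removes the discrete time-step bias. A toy sketch for a one-dimensional Gaussian action S(x) = x^2/2 (the lattice-gauge application is far richer; the step size and trajectory length here are arbitrary choices):

```python
import math
import random

def hybrid_mc_step(x, eps=0.2, n_steps=10):
    """One hybrid Monte Carlo update for the toy action S(x) = x^2/2:
    refresh the momentum, integrate Hamilton's equations with the
    leapfrog scheme, then accept or reject to correct the finite
    time-step error."""
    p = random.gauss(0.0, 1.0)
    x_new, p_new = x, p
    p_new -= 0.5 * eps * x_new          # half step in momentum (dS/dx = x)
    for _ in range(n_steps - 1):
        x_new += eps * p_new
        p_new -= eps * x_new
    x_new += eps * p_new
    p_new -= 0.5 * eps * x_new          # final half step
    dh = 0.5 * (x_new**2 + p_new**2) - 0.5 * (x**2 + p**2)
    if dh <= 0.0 or random.random() < math.exp(-dh):
        return x_new                    # accept the trajectory
    return x                            # reject: keep the old state

random.seed(2)
x, samples = 0.0, []
for _ in range(20000):
    x = hybrid_mc_step(x)
    samples.append(x)
sample_var = sum(s * s for s in samples) / len(samples)
# The exact equilibrium distribution is a standard Gaussian (variance 1).
```

The Metropolis test on the total "energy" H = S + p^2/2 is exactly the mechanism that makes the correct equilibrium distribution exact despite the discretised dynamics.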
Selroos, J. O.; Appleyard, P.; Bym, T.; Follin, S.; Hartley, L.; Joyce, S.; Munier, R.
2015-12-01
In 2011 the Swedish Nuclear Fuel and Waste Management Company (SKB) applied for a license to start construction of a final repository for spent nuclear fuel at Forsmark in Northern Uppland, Sweden. The repository is to be built at approximately 500 m depth in crystalline rock. A stochastic, discrete fracture network (DFN) concept was chosen for interpreting the surface-based (including borehole) data, and for assessing the safety of the repository in terms of groundwater flow and flow pathways to and from the repository. Once repository construction starts, underground data such as tunnel pilot borehole and tunnel trace data will also become available. It is deemed crucial that DFN models developed at this stage honor the mapped structures both in terms of location and geometry, and in terms of flow characteristics. The originally fully stochastic models will thus become increasingly deterministic towards the repository. Applying the adopted probabilistic framework, predictive modeling to support acceptance criteria for layout and disposal can be performed with the goal of minimizing risks associated with the repository. This presentation describes and illustrates various methodologies that have been developed to condition stochastic realizations of fracture networks around underground openings using borehole and tunnel trace data, as well as using hydraulic measurements of inflows or hydraulic interference tests. The methodologies, implemented in the numerical simulators ConnectFlow and FracMan/MAFIC, are described in some detail, and verification tests and realistic example cases are shown. Specifically, geometric and hydraulic data are obtained from numerical synthetic realities approximating Forsmark conditions, and are used to test the constraining power of the developed methodologies by conditioning unconditional DFN simulations following the same underlying fracture network statistics. Various metrics are developed to assess how well the conditional simulations compare to
Energy Technology Data Exchange (ETDEWEB)
Saldanha Filho, Paulo Carlos
1998-02-01
Stochastic simulation has been employed in petroleum reservoir characterization as a modeling tool able to reconcile information from several different sources. It has the ability to preserve the variability of the modeled phenomena and permits the transfer of geological knowledge to numerical flow models, whose predictions for the reservoir constitute the main basis for reservoir management decisions. Several stochastic models have been used and/or suggested, depending on the nature of the phenomena to be described. Markov Random Fields (MRFs) appear as an alternative for the modeling of discrete variables, mainly reservoirs with a mosaic architecture of facies. In this dissertation, the reader is introduced to stochastic modeling by MRFs in a generic sense. The main aspects of the technique are reviewed. The conceptual background of MRFs is described: their characterization through the Markovian property and the equivalence to Gibbs distributions. The framework for generic modeling of MRFs is described. The classical models of Ising and Potts-Strauss are special cases in this context and are related to models used in petroleum reservoir characterization. The problem of parameter estimation is discussed. The maximum pseudolikelihood estimators for some models are presented. Estimators for two models useful for reservoir characterization are developed, and represent a new contribution to the subject. Five algorithms for the conditional simulation of MRFs are described: the Metropolis algorithm, the algorithm of Geman and Geman (the Gibbs sampler), the algorithm of Swendsen-Wang, the algorithm of Wolff, and the algorithm of Flinn. Finally, examples of simulation for some of the models discussed are presented, along with their implications for the modelling of petroleum reservoirs. (author)
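Of the simulation algorithms listed, the Metropolis algorithm is the simplest to sketch. A minimal version for the classical Ising MRF on a small periodic lattice (the lattice size and inverse temperature are illustrative, not tied to any reservoir model):

```python
import math
import random

def metropolis_sweep(grid, beta):
    """One Metropolis sweep over a periodic 2-D Ising lattice, the
    simplest of the MRF samplers surveyed in the dissertation."""
    n = len(grid)
    for _ in range(n * n):
        i, j = random.randrange(n), random.randrange(n)
        s = grid[i][j]
        nb = (grid[(i + 1) % n][j] + grid[(i - 1) % n][j] +
              grid[i][(j + 1) % n] + grid[i][(j - 1) % n])
        d_energy = 2.0 * s * nb          # energy cost of flipping spin (i, j)
        if d_energy <= 0 or random.random() < math.exp(-beta * d_energy):
            grid[i][j] = -s

random.seed(3)
n = 16
grid = [[1] * n for _ in range(n)]       # start from an ordered "facies map"
for _ in range(100):
    metropolis_sweep(grid, beta=0.6)     # beta above the critical ~0.44
mag = abs(sum(sum(row) for row in grid)) / (n * n)
# Below the critical temperature the lattice stays strongly magnetised.
```

The Gibbs sampler differs only in resampling each site from its full conditional instead of proposing a flip; the cluster algorithms (Swendsen-Wang, Wolff) update whole connected regions at once to mix faster near criticality.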
Dynamic and stochastic multi-project planning
Melchiors, Philipp
2015-01-01
This book deals with dynamic and stochastic methods for multi-project planning. Based on the idea of using queueing networks for the analysis of dynamic-stochastic multi-project environments this book addresses two problems: detailed scheduling of project activities, and integrated order acceptance and capacity planning. In an extensive simulation study, the book thoroughly investigates existing scheduling policies. To obtain optimal and near optimal scheduling policies new models and algorithms are proposed based on the theory of Markov decision processes and Approximate Dynamic programming.
Minier, Jean-Pierre; Chibbaro, Sergio; Pope, Stephen B.
2014-11-01
In this paper, we establish a set of criteria which are applied to discuss various formulations under which Lagrangian stochastic models can be found. These models are used for the simulation of fluid particles in single-phase turbulence as well as for the fluid seen by discrete particles in dispersed turbulent two-phase flows. The purpose of the present work is to provide guidelines, useful for experts and non-experts alike, which are shown to be helpful to clarify issues related to the form of Lagrangian stochastic models. A central issue is to put forward reliable requirements which must be met by Lagrangian stochastic models and a new element brought by the present analysis is to address the single- and two-phase flow situations from a unified point of view. For that purpose, we consider first the single-phase flow case and check whether models are fully consistent with the structure of the Reynolds-stress models. In the two-phase flow situation, coming up with clear-cut criteria is more difficult and the present choice is to require that the single-phase situation be well-retrieved in the fluid-limit case, elementary predictive abilities be respected and that some simple statistical features of homogeneous fluid turbulence be correctly reproduced. This analysis does not address the question of the relative predictive capacities of different models but concentrates on their formulation since advantages and disadvantages of different formulations are not always clear. Indeed, hidden in the changes from one structure to another are some possible pitfalls which can lead to flaws in the construction of practical models and to physically unsound numerical calculations. A first interest of the present approach is illustrated by considering some models proposed in the literature and by showing that these criteria help to assess whether these Lagrangian stochastic models can be regarded as acceptable descriptions. A second interest is to indicate how future
de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás
2017-12-01
The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. This method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-grained approximation so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of
Stochastic diffusion models for substitutable technological innovations
Wang, L.; Hu, B.; Yu, X.
2004-01-01
Based on an analysis of firms' stochastic adoption behaviour, this paper first points out the need to build more practical stochastic models. Stochastic evolutionary models are then built for a substitutable innovation diffusion system. Finally, through computer simulation of the
Stochastic models to study the impact of mixing on a fed-batch culture of Saccharomyces cerevisiae.
Delvigne, F; Lejeune, A; Destain, J; Thonart, P
2006-01-01
The mechanisms of interaction between microorganisms and their environment in a stirred bioreactor can be modeled by a stochastic approach. The procedure comprises two submodels: a classical stochastic model for microbial cell circulation and a Markov chain model for calculating the concentration gradients. The advantage lies in the fact that the core of each submodel, i.e., the transition matrix (which contains the probabilities of shifting from one perfectly mixed compartment to another in the bioreactor representation), is identical in the two cases. This means that both the particle circulation and the fluid mixing process can be analyzed on the same modeling basis. This assumption has been validated by performing inert tracer (NaCl) and stained yeast cell dispersion experiments, which have shown good agreement with simulation results. The stochastic model has been used to define a characteristic concentration profile experienced by the microorganisms during a fermentation test performed in a scale-down reactor. The concentration profiles obtained in this way can explain the scale-down effect in the case of a Saccharomyces cerevisiae fed-batch process. The simulation results are analyzed in order to give some explanations of the effect of substrate fluctuation dynamics on S. cerevisiae.
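The shared core of both submodels is a transition matrix over perfectly mixed compartments. A minimal sketch of how such a matrix propagates a tracer pulse (the 3-compartment loop and its probabilities are hypothetical, not the paper's bioreactor geometry):

```python
def mix_step(conc, transition):
    """Advance compartment concentrations one time step; entry (i, j)
    of the row-stochastic matrix is the probability that material in
    compartment i moves to compartment j."""
    n = len(conc)
    return [sum(conc[i] * transition[i][j] for i in range(n))
            for j in range(n)]

# Hypothetical 3-compartment loop reactor: most material stays in
# place each step, the rest circulates to the next compartment.
T = [[0.8, 0.2, 0.0],
     [0.0, 0.8, 0.2],
     [0.2, 0.0, 0.8]]
conc = [1.0, 0.0, 0.0]   # tracer pulse injected into compartment 0
for _ in range(200):
    conc = mix_step(conc, T)
# The tracer relaxes towards the well-mixed state [1/3, 1/3, 1/3].
```

The same matrix, read as per-particle jump probabilities, drives a stochastic random walk for individual cells, which is exactly the "identical core" property the abstract exploits.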
Nonperturbative renormalization group study of the stochastic Navier-Stokes equation.
Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo
2012-07-01
We study the renormalization group flow of the average action of the stochastic Navier-Stokes equation with power-law forcing. Using Galilean invariance, we introduce a nonperturbative approximation adapted to the zero-frequency sector of the theory in the parametric range of the Hölder exponent 4-2ε of the forcing where real-space local interactions are relevant. In any spatial dimension d, we observe the convergence of the resulting renormalization group flow to a unique fixed point which yields a kinetic energy spectrum scaling in agreement with canonical dimension analysis. Kolmogorov's -5/3 law is, thus, recovered for ε = 2 as also predicted by perturbative renormalization. At variance with the perturbative prediction, the -5/3 law emerges in the presence of a saturation in the ε dependence of the scaling dimension of the eddy diffusivity at ε = 3/2 when, according to perturbative renormalization, the velocity field becomes infrared relevant.
Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.
Durdu, Omer Faruk
2010-10-01
In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict the boron content in the Büyük Menderes catchment. Initially, Box-Whisker plots and Kendall's tau test are used to identify the trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves the following three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of the boron data series, different ARIMA models are identified. The model that gives the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models, and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predicted results using the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data. The comparison of the mean and variance of 3 years (2002-2004) of observed data vs predicted data from the selected best models shows that the boron model from the ARIMA modeling approaches could be used in a safe manner since the predicted values from these models preserve the basic
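The identification step above selects the candidate with the minimum AIC. A stripped-down sketch of that criterion, comparing a mean-only model against an AR(1) fit on a synthetic series (the series and candidate set are illustrative; a real analysis would use a full ARIMA/SARIMA toolbox):

```python
import math
import random

def aic(rss, n, k):
    """Gaussian AIC used in the identification step:
    AIC = n * log(RSS / n) + 2k, where k counts fitted parameters."""
    return n * math.log(rss / n) + 2 * k

random.seed(4)
# Hypothetical boron-like series: an AR(1) process x_t = 0.8 x_{t-1} + noise.
x = [0.0]
for _ in range(500):
    x.append(0.8 * x[-1] + random.gauss(0.0, 1.0))

n = len(x) - 1
# Candidate 1 -- AR(0): mean-only model.
mean = sum(x) / len(x)
rss0 = sum((v - mean) ** 2 for v in x[1:])
# Candidate 2 -- AR(1): least-squares slope through the origin.
phi = sum(a * b for a, b in zip(x[:-1], x[1:])) / sum(a * a for a in x[:-1])
rss1 = sum((b - phi * a) ** 2 for a, b in zip(x[:-1], x[1:]))
best = min([("AR(0)", aic(rss0, n, 1)), ("AR(1)", aic(rss1, n, 2))],
           key=lambda m: m[1])
# best[0] names the candidate with the minimum AIC.
```

The 2k penalty is what keeps the criterion from always preferring the model with more parameters; for this autocorrelated series the AR(1) candidate wins despite its extra parameter.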
Parzen, Emanuel
1962-01-01
Well-written and accessible, this classic introduction to stochastic processes and related mathematics is appropriate for advanced undergraduate students of mathematics with a knowledge of calculus and continuous probability theory. The treatment offers examples of the wide variety of empirical phenomena for which stochastic processes provide mathematical models, and it develops the methods of probability model-building. Chapter 1 presents precise definitions of the notions of a random variable and a stochastic process and introduces the Wiener and Poisson processes. Subsequent chapters examine
Economic consequences of paratuberculosis control in dairy cattle: A stochastic modeling study.
Smith, R L; Al-Mamun, M A; Gröhn, Y T
2017-03-01
The cost of paratuberculosis to dairy herds, through decreased milk production, early culling, and poor reproductive performance, has been well-studied. The benefit of control programs, however, has been debated. A recent stochastic compartmental model for paratuberculosis transmission in US dairy herds was modified to predict herd net present value (NPV) over 25 years in herds of 100 and 1000 dairy cattle with endemic paratuberculosis at initial prevalence of 10% and 20%. Control programs were designed by combining 5 tests (none, fecal culture, ELISA, PCR, or calf testing), 3 test-related culling strategies (all test-positive, high-positive, or repeated positive), 2 test frequencies (annual and biannual), 3 hygiene levels (standard, moderate, or improved), and 2 cessation decisions (testing ceased after 5 negative whole-herd tests or testing continued). Stochastic dominance was determined for each herd scenario; no control program was fully dominant for maximizing herd NPV in any scenario. Use of the ELISA test was generally preferred in all scenarios, but no paratuberculosis control was highly preferred for the small herd with 10% initial prevalence and was frequently preferred in other herd scenarios. Based on their effect on paratuberculosis alone, hygiene improvements were not found to be as cost-effective as test-and-cull strategies in most circumstances. Global sensitivity analysis found that economic parameters, such as the price of milk, had more influence on NPV than control program-related parameters. We conclude that paratuberculosis control can be cost effective, and multiple control programs can be applied for equivalent economic results. Copyright © 2017 Elsevier B.V. All rights reserved.
Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.
2014-06-01
This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.
Modeling and Properties of Nonlinear Stochastic Dynamical System of Continuous Culture
Wang, Lei; Feng, Enmin; Ye, Jianxiong; Xiu, Zhilong
The stochastic counterpart to the deterministic description of continuous fermentation by ordinary differential equations is investigated for the process of glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae. We briefly discuss the continuous fermentation process driven by three-dimensional Brownian motion with Lipschitz coefficients, which is suitable for the actual fermentation process. Subsequently, we study the existence and uniqueness of solutions for the stochastic system, as well as the boundedness of the second-order moment and the Markov property of the solution. Finally, stochastic simulation is carried out using the stochastic Euler-Maruyama method.
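The Euler-Maruyama scheme named above discretises an SDE dX = a(X) dt + b(X) dW by adding a Gaussian increment of variance dt at each step. A one-dimensional sketch with an illustrative logistic drift and weak multiplicative noise (the actual fermentation model is three-dimensional with different coefficients):

```python
import math
import random

def euler_maruyama(x0, drift, diffusion, dt, n_steps, rng):
    """Simulate one path of dX = drift(X) dt + diffusion(X) dW with
    the Euler-Maruyama scheme."""
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        x += drift(x) * dt + diffusion(x) * dw
    return x

rng = random.Random(5)
# Illustrative logistic growth with multiplicative noise as a
# stand-in for the fermentation kinetics.
paths = [euler_maruyama(0.1, lambda x: x * (1.0 - x), lambda x: 0.05 * x,
                        dt=0.01, n_steps=1000, rng=rng)
         for _ in range(200)]
mean_end = sum(paths) / len(paths)
# Paths settle near the deterministic equilibrium x = 1.
```

Averaging many paths recovers the deterministic ODE behaviour in the weak-noise limit, which is the sense in which the stochastic system is a "counterpart" to the ODE description.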
International Nuclear Information System (INIS)
Klauder, J.R.
1983-01-01
The author provides an introductory survey to stochastic quantization in which he outlines this new approach for scalar fields, gauge fields, fermion fields, and condensed matter problems such as electrons in solids and the statistical mechanics of quantum spins. (Auth.)
Energy Technology Data Exchange (ETDEWEB)
Kim, Song Hyun; Kim, Do Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jea Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2013-10-15
Owing to its high computational efficiency and user convenience, the implicit method has received attention; however, the implicit method in previous studies has low accuracy at high packing fractions. In this study, a new implicit method, which can be used at any packing fraction with high accuracy, is proposed for modeling spherical-particle distributed media in MC simulations. A new concept for spherical particle sampling was developed to solve the problems of the previous implicit methods. The sampling method was verified by simulation in infinite and finite media. The results show that particle implicit modeling with the proposed method is accurate at all packing fractions. It is expected that the proposed method can be efficiently utilized for spherical-particle distributed media such as fusion reactor blankets, VHTR reactors, and shielding analyses.
Stabilization and destabilization effects of the electric field on stochastic precipitate pattern
Lagzi, István; Izsak, F.
2004-01-01
Stabilization and destabilization effects of an applied electric field on Liesegang pattern formation in a low concentration gradient were studied with numerical model simulations. In the absence of an electric field, pattern formation exhibits increasingly stochastic behaviour as the initial
Plante, I; Wu, H
2014-01-01
The code RITRACKS (Relativistic Ion Tracks) has been developed over the last few years at the NASA Johnson Space Center to simulate the effects of ionizing radiation at the microscopic scale, in order to understand the effects of space radiation at the biological level. The fundamental part of this code is the stochastic simulation of the radiation track structure of heavy ions, an important component of space radiation. The code can calculate many relevant quantities, such as the radial dose and voxel dose, and may also be used to calculate the dose in spherical and cylindrical targets of various sizes. Recently, we have incorporated DNA structure and damage simulations at the molecular scale in RITRACKS. The direct effect of radiation is simulated by introducing a slight modification of the existing particle transport algorithms, using the Binary-Encounter-Bethe model of ionization cross sections for each molecular orbital of DNA. The simulation of radiation chemistry is done by a step-by-step diffusion-reaction program based on the Green's functions of the diffusion equation. This approach is also used to simulate the indirect effect of ionizing radiation on DNA. The software can be installed independently on PCs and tablets using the Windows operating system and does not require any coding from the user. It includes a Graphic User Interface (GUI) and a 3D OpenGL visualization interface. The calculations are executed simultaneously (in parallel) on multiple CPUs. The main features of the software will be presented.
Studies to the stochastic theory of coupled reactorkinetic-thermohydraulic systems Pt. 2
International Nuclear Information System (INIS)
Mesko, L.
1983-06-01
The description is given of the noise phenomena taking place in a multivariable coupled system by a comprehensive model based on the theory of stochastic fluctuations. A comparison is made with models using transfer function formalism for systems characterized by deterministic open and closed loop signal transmission properties. The advantages of the stochastic model are illustrated by simple reactor dynamical examples having diagnostical importance. (author)
Stochastic resonance in models of neuronal ensembles
International Nuclear Information System (INIS)
Chialvo, D.R.; Longtin, A.; Mueller-Gerkin, J.
1997-01-01
Two recently suggested mechanisms for the neuronal encoding of sensory information involving the effect of stochastic resonance with aperiodic time-varying inputs are considered. It is shown, using theoretical arguments and numerical simulations, that the nonmonotonic behavior with increasing noise of the correlation measures used for the so-called aperiodic stochastic resonance (ASR) scenario does not rely on the cooperative effect typical of stochastic resonance in bistable and excitable systems. Rather, ASR with slowly varying signals is more properly interpreted as linearization by noise. Consequently, the broadening of the "resonance curve" in the multineuron stochastic resonance without tuning scenario can also be explained by this linearization. Computation of the input-output correlation as a function of both signal frequency and noise for the model system further reveals conditions where noise-induced firing with aperiodic inputs will benefit from stochastic resonance rather than linearization by noise. Thus, our study clarifies the tuning requirements for the optimal transduction of subthreshold aperiodic signals. It also shows that a single deterministic neuron can perform as well as a network when biased into a suprathreshold regime. Finally, we show that the inclusion of a refractory period in the spike-detection scheme produces a better correlation between instantaneous firing rate and input signal. copyright 1997 The American Physical Society
Memristors Empower Spiking Neurons With Stochasticity
Al-Shedivat, Maruan
2015-06-01
Recent theoretical studies have shown that probabilistic spiking can be interpreted as learning and inference in cortical microcircuits. This interpretation creates new opportunities for building neuromorphic systems driven by probabilistic learning algorithms. However, such systems must have two crucial features: 1) the neurons should follow a specific behavioral model, and 2) stochastic spiking should be implemented efficiently for it to be scalable. This paper proposes a memristor-based stochastically spiking neuron that fulfills these requirements. First, the analytical model of the memristor is enhanced so it can capture the behavioral stochasticity consistent with experimentally observed phenomena. The switching behavior of the memristor model is demonstrated to be akin to the firing of the stochastic spike response neuron model, the primary building block for probabilistic algorithms in spiking neural networks. Furthermore, the paper proposes a neural soma circuit that utilizes the intrinsic nondeterminism of memristive switching for efficient spike generation. The simulations and analysis of the behavior of a single stochastic neuron and a winner-take-all network built of such neurons and trained on handwritten digits confirm that the circuit can be used for building probabilistic sampling and pattern adaptation machinery in spiking networks. The findings constitute an important step towards scalable and efficient probabilistic neuromorphic platforms. © 2011 IEEE.
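Stochastic spiking of the kind described is often formalised as an escape-rate model: the probability of firing within a short time step grows with the membrane potential, so noise converts a subthreshold potential into a well-defined firing rate. A sketch with hypothetical constants (not the memristor model's actual parameters):

```python
import math
import random

def spike_prob(u, dt=0.001, u_thresh=0.0, du=0.5, rho0=100.0):
    """Escape-rate model of a stochastically spiking neuron: the
    instantaneous firing rate grows exponentially as the membrane
    potential u approaches the threshold (all constants hypothetical)."""
    rate = rho0 * math.exp((u - u_thresh) / du)   # spikes per second
    return 1.0 - math.exp(-rate * dt)             # spike prob. per step

random.seed(6)
u = -0.5                         # a fixed subthreshold potential
n_steps, spikes = 100000, 0
for _ in range(n_steps):
    if random.random() < spike_prob(u):
        spikes += 1
rate_hz = spikes / (n_steps * 0.001)
# The escape rate at u = -0.5 is 100 * exp(-1), about 37 spikes/s.
```

In the memristive implementation, the intrinsic nondeterminism of the switching event plays the role of the random draw here, so no separate noise generator is needed.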
Introduction to stochastic calculus
Karandikar, Rajeeva L
2018-01-01
This book sheds new light on stochastic calculus, the branch of mathematics that is most widely applied in financial engineering and mathematical finance. The first book to introduce pathwise formulae for the stochastic integral, it provides a simple but rigorous treatment of the subject, including a range of advanced topics. The book discusses in-depth topics such as quadratic variation, Ito formula, and Emery topology. The authors briefly address continuous semi-martingales to obtain growth estimates and study solution of a stochastic differential equation (SDE) by using the technique of random time change. Later, by using Metivier–Pellumail inequality, the solutions to SDEs driven by general semi-martingales are discussed. The connection of the theory with mathematical finance is briefly discussed and the book has extensive treatment on the representation of martingales as stochastic integrals and a second fundamental theorem of asset pricing. Intended for undergraduate- and beginning graduate-level stud...
STOCHASTIC ASSESSMENT OF NIGERIAN WOOD FOR BRIDGE DECKS
African Journals Online (AJOL)
STOCHASTIC ASSESSMENT OF NIGERIAN WOOD FOR BRIDGE DECKS ... abandoned bridges with defects only in their decks in both rural and urban locations can be effectively .... which can be seen as the detection of rare physical.
International Nuclear Information System (INIS)
Sadeghi, Mahmood; Kalantar, Mohsen
2014-01-01
Highlights: • Defining a DG dynamic planning problem. • Applying a new evolutionary algorithm called “CMAES” in the planning process. • Considering stochastic variation of electricity and fuel prices. • Scenario generation and reduction with MCS and backward reduction programs. • Considering approximately all of the costs of the distribution system. - Abstract: This paper presents a dynamic DG planning problem considering uncertainties related to the intermittent nature of DG technologies, such as wind turbines and solar units, in addition to stochastic economic conditions. The stochastic economic situation includes the uncertainties related to the fuel and electricity prices of each year. Monte Carlo simulation is used to generate the possible scenarios of uncertain situations, and the produced scenarios are reduced through a backward reduction program. The aim of this paper is to maximize the revenue of the distribution system through benefit-cost analysis alongside encouraging and punishment functions. In order to be closer to reality, different growth rates are selected for the planning period. In this paper the Covariance Matrix Adaptation Evolution Strategy (CMAES) is introduced and used to find the best planning scheme for the DG units. Different DG types are considered in the planning problem. The main assumption of this paper is that the DISCO is the owner of the distribution system and the DG units. The proposed method is tested on a 9-bus test distribution system, and the results are compared with the known genetic algorithm and PSO methods to show the applicability of the CMAES method to this problem.
Peng, Chi; Wang, Meie; Chen, Weiping
2016-09-01
A pollutant accumulation model (PAM) based on mass balance theory was developed to simulate long-term changes of heavy metal concentrations in soil. When combined with Monte Carlo simulation, the model can predict the probability distributions of heavy metals in a soil-water-plant system with fluctuating environmental parameters and inputs from multiple pathways. The model was used for evaluating different remediation measures to deal with Cd contamination of paddy soils in Youxian county (Hunan province), China, under five scenarios, namely the default scenario (A), not returning paddy straw to the soil (B), reducing the deposition of Cd (C), liming (D), and integrating several remediation measures (E). The model predicted that the Cd content of the soil can be lowered significantly by (B) and that of the plants by (D). However, in the long run, (D) will increase soil Cd. The concentrations of Cd in both soils and rice grains can be effectively reduced by (E), although it will take decades of effort. The history of Cd pollution and the major causes of Cd accumulation in soil were studied by means of sensitivity analysis and retrospective simulation. Copyright © 2016 Elsevier Ltd. All rights reserved.
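A mass-balance accumulation model of this kind advances the soil concentration year by year and, wrapped in Monte Carlo sampling, yields an outcome distribution per remediation scenario. A heavily simplified sketch (all parameter values are illustrative, not taken from the study):

```python
import random

def simulate_cd(years, c0, dep_mean, dep_sd, removal_frac, rng):
    """Minimal mass-balance sketch: each year the soil Cd
    concentration gains an uncertain deposition input and loses a
    fixed fraction to leaching and crop offtake (all values
    illustrative)."""
    c = c0
    for _ in range(years):
        dep = rng.gauss(dep_mean, dep_sd)
        c = max(0.0, c * (1.0 - removal_frac) + dep)
    return c

rng = random.Random(7)
# Monte Carlo over input uncertainty: default deposition (in the
# spirit of scenario A) versus deposition cut in half (scenario C).
final_a = [simulate_cd(25, 0.5, 0.02, 0.005, 0.01, rng) for _ in range(500)]
final_c = [simulate_cd(25, 0.5, 0.01, 0.005, 0.01, rng) for _ in range(500)]
mean_a = sum(final_a) / len(final_a)
mean_c = sum(final_c) / len(final_c)
# Halving deposition leaves markedly less soil Cd after 25 years.
```

Comparing the two simulated distributions, rather than single deterministic runs, is what lets the approach rank remediation measures under fluctuating environmental inputs.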
Directory of Open Access Journals (Sweden)
Slavko Rogan
2013-07-01
This study assessed the feasibility of stochastic resonance whole-body vibration (SR-WBV) training and its impact on isometric maximal voluntary contraction (IMVC), isometric rate of force development (IRFD) and a drop jump test (DJ) in healthy female students. Twelve participants were randomised to static squats during SR-WBV (6 Hz, noise level 4) over 4 weeks or to a control group (no training). Feasibility outcomes included the number of students agreeing to participate, the number of drop-outs, adherence to the SR-WBV training and the evaluation of the protocol. Secondary outcomes were IMVC, IRFD and DJ. Results: Among 35 eligible students, 12 agreed to participate and two dropped out. Adherence was 41 of 60 possible sessions. There were moderate to large, but statistically non-significant, gains in the secondary outcomes. Conclusion: These results suggest that such a study would be feasible, although with some modifications, such as better familiarisation with the DJ.
Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.
2017-10-01
Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a reliable attribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
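The global Sobol sensitivity measures discussed in this abstract can be estimated with a pick-and-freeze Monte Carlo scheme. The sketch below uses plain pseudo-random points and a made-up linear toy model rather than the Sobol sequences and the Eulerian model of the study; it illustrates only the structure of the estimator.

```python
import random

def first_order_sobol(f, dim, n=50000, seed=42):
    """Pick-and-freeze Monte Carlo estimate of first-order Sobol indices
    (Saltelli-style scheme) using two independent sample matrices A and B.
    S_i = Cov(f(B), f(AB_i)) / Var(f), with AB_i equal to A except that
    column i is taken from B."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    S = []
    for i in range(dim):
        v = 0.0
        for a, b, ya, yb in zip(A, B, fA, fB):
            ab = a[:]
            ab[i] = b[i]                 # freeze column i to the B sample
            v += yb * (f(ab) - ya)
        S.append(v / n / var)
    return S

model = lambda x: 2.0 * x[0] + x[1]      # toy model; exact indices (0.8, 0.2)
s = first_order_sobol(model, 2)
print([round(v, 2) for v in s])
```

For the linear toy model the indices are known in closed form, which makes the sketch easy to check; for a real model the same estimator is simply run with the expensive model in place of the lambda.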
Chang, Mou-Hsiung
2015-01-01
The classical probability theory initiated by Kolmogorov and its quantum counterpart, pioneered by von Neumann, were created at about the same time in the 1930s, but development of the quantum theory has trailed far behind. Although highly appealing, the quantum theory has a steep learning curve, requiring tools from both probability and analysis and a facility for combining the two viewpoints. This book is a systematic, self-contained account of the core of quantum probability and quantum stochastic processes for graduate students and researchers. The only assumed background is knowledge of the basic theory of Hilbert spaces, bounded linear operators, and classical Markov processes. From there, the book introduces additional tools from analysis, and then builds the quantum probability framework needed to support applications to quantum control and quantum information and communication. These include quantum noise, quantum stochastic calculus, stochastic quantum differential equations, and quantum Markov semigroups...
A study of the Boltzmann and Gibbs entropies in the context of a stochastic toy model
Malgieri, Massimiliano; Onorato, Pasquale; De Ambrosis, Anna
2018-05-01
In this article we reconsider a stochastic toy model of thermal contact, first introduced in Onorato et al (2017 Eur. J. Phys. 38 045102), showing its educational potential for clarifying some current issues in the foundations of thermodynamics. The toy model can be realized in practice using dice and coins, and can be seen as representing the thermal coupling of two subsystems with energy bounded from above. The system is used as a playground for studying the different behaviours of the Boltzmann and Gibbs temperatures and entropies in the approach to the steady state. The process that models thermal contact between the two subsystems can be proved to be an ergodic, reversible Markov chain; thus the dynamics produces an equilibrium distribution in which the weight of each state is proportional to its multiplicity in terms of microstates. Each of the two subsystems, taken separately, is formally equivalent to an Ising spin system in the non-interacting limit. The model is intended for educational purposes, and the level of readership of the article is aimed at advanced undergraduates.
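The equilibrium property quoted in this abstract, with each macrostate weighted by its microstate multiplicity, can be checked with a few lines of simulation. The update rule below (swap the states of two randomly chosen two-level units) is one simple ergodic, reversible choice for the sketch; the paper's dice-and-coins rule may differ in detail.

```python
import random
from math import comb

def thermal_contact(n1=6, n2=6, total=6, steps=200000, seed=7):
    """Two subsystems of two-level units share `total` energy quanta.
    Swapping the states of two randomly chosen units gives an ergodic,
    reversible Markov chain whose equilibrium makes every microstate
    equally likely, so P(E1) is proportional to
    C(n1, E1) * C(n2, total - E1)."""
    rng = random.Random(seed)
    units = [1] * total + [0] * (n1 + n2 - total)   # 1 = excited unit
    counts = [0] * (total + 1)
    for _ in range(steps):
        i, j = rng.randrange(n1 + n2), rng.randrange(n1 + n2)
        units[i], units[j] = units[j], units[i]
        counts[sum(units[:n1])] += 1                # E1, energy of subsystem 1
    return counts

counts = thermal_contact()
empirical = [c / sum(counts) for c in counts]
weights = [comb(6, e) * comb(6, 6 - e) for e in range(7)]
exact = [w / sum(weights) for w in weights]
print(max(abs(a - b) for a, b in zip(empirical, exact)) < 0.02)
```

The long-run histogram of the subsystem energy matches the multiplicity-weighted distribution, which is exactly the equidistribution-of-microstates statement in the abstract.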
Experimental study on the proton stochastic cooling in the NAP-M
International Nuclear Information System (INIS)
Dement'ev, E.N.; Zinevich, N.I.; Medvedko, A.S.; Parkhomchuk, V.V.; Pestrikov, D.V.
1982-01-01
Results of experiments on stochastic cooling of the proton beam energy spread at the NAP-M storage ring are given. The dependence of the damping decrements on beam phase density, number of working harmonics and amplification factor of the feedback circuit has been studied. A differential sensor, made in the form of two end-open strip lines, is used as the signal source for the feedback circuit. Two cooling systems were investigated: a wide-band system consisting of a sensor and a correcting element, and a system with a resonance filter at the input. The correcting element is made in the form of four matched 50 Ohm strip lines. Coaxial cable sections, forming with the sensor strip lines two resonance lines terminated by the low input resistances of the amplifiers, were used as a filter. The equilibrium spread in the beam was determined by electronics noise. Coherent beam instability related to beam displacement in the measuring pickup electrode was detected. Limitations of the method due to electronics noise and collective effects in intense beams are discussed. A cooling time of 150 s for a low-intensity particle beam, with the spread decreasing from 3×10⁻⁴ to 2×10⁻⁴, has been determined.
Directory of Open Access Journals (Sweden)
Krisztian Magori
2009-09-01
Dengue is the most important mosquito-borne viral disease affecting humans. The only prevention measure currently available is the control of its vectors, primarily Aedes aegypti. Recent advances in genetic engineering have opened the possibility of a new range of control strategies based on genetically modified mosquitoes. Assessing the potential efficacy of genetic (and conventional) strategies requires the availability of modeling tools that accurately describe the dynamics and genetics of Ae. aegypti populations. We describe in this paper a new modeling tool of Ae. aegypti population dynamics and genetics named Skeeter Buster. This model operates at the scale of individual water-filled containers for immature stages and individual properties (houses) for adults. The biology of cohorts of mosquitoes is modeled based on the algorithms used in the non-spatial Container-Inhabiting Mosquitoes Simulation Model (CIMSiM). Additional features incorporated into Skeeter Buster include stochasticity, spatial structure and detailed population genetics. We observe that the stochastic modeling of individual containers in Skeeter Buster is associated with a strongly reduced temporal variation in stage-specific population densities. We show that heterogeneity in the container composition of individual properties has a major impact on the spatial heterogeneity in population density between properties. We detail how adult dispersal reduces this spatial heterogeneity. Finally, we present the predicted genetic structure of the population by calculating F(ST) values and isolation-by-distance patterns, and examine the effects of adult dispersal and container movement between properties. We demonstrate that the incorporated stochasticity and level of spatial detail have major impacts on the simulated population dynamics, which could potentially impact predictions in terms of control measures. The capacity to describe population genetics confers the ability to model the outcome
Study on impurity screening in stochastic magnetic boundary of the Large Helical Device
International Nuclear Information System (INIS)
Kobayashi, M.; Morita, S.; Feng, Y.
2008-10-01
The impurity transport characteristics in the scrape-off layer associated with a stochastic magnetic boundary of LHD are analyzed. The remnant islands with very small internal field line pitch in the stochastic region play a key role in reducing the impurity influx. The thermal force driven impurity influx is significantly suppressed when the perpendicular energy flux exceeds the parallel one inside the islands due to the small pitch. Application of the 3D edge transport code, EMC3-EIRENE, confirmed the impurity retention (screening) effect in the edge region. It is also found that the edge surface layers are the most effective region to retain (screen) impurities because of the flow acceleration and plasma cooling via short flux tubes. The carbon emission obtained in experiments is in good agreement with the modelling results, showing the impurity retention (screening) potential of the stochastic magnetic boundary. (author)
Chernobyl reactor transient simulation study
International Nuclear Information System (INIS)
Gaber, F.A.; El Messiry, A.M.
1988-01-01
This paper deals with a transient simulation study of the Chernobyl nuclear power station. The Chernobyl (RBMK) reactor is a graphite-moderated, pressure-tube reactor. It is cooled by circulating light water that boils in the upper parts of vertical pressure tubes to produce steam. At equilibrium fuel irradiation, the RBMK reactor has a positive void reactivity coefficient. However, the fuel temperature coefficient is negative, and the net effect of a power change depends upon the power level. Under normal operating conditions the net effect (power coefficient) is negative at full power and becomes positive under certain transient conditions. A series of dynamic-performance transient analyses for the RBMK reactor, a pressurized water reactor (PWR) and a fast breeder reactor (FBR) have been performed using digital simulator codes. The purpose of this transient study is to show that an accident of Chernobyl's severity would not occur in PWR or FBR nuclear power reactors. This follows from the study of the inherent stability of the RBMK, PWR and FBR under certain transient conditions, which is related to the effect of the feedback reactivity. The power distribution stability in the graphite RBMK reactor is difficult to maintain throughout its entire life, so the reactor has an inherent instability. The PWR has a large negative temperature coefficient of reactivity and therefore a large amount of natural stability, so the PWR is inherently safe. The FBR has a positive sodium expansion coefficient and therefore insufficient stability. It has been concluded that the PWR offers safer operation than the FBR and RBMK reactors.
A Study of Stochastic Resonance in the Periodically Forced Rikitake Dynamo
Directory of Open Access Journals (Sweden)
Chien-Chih Chen; Chih-Yuan Tseng
2007-01-01
The geodynamo has widely been thought to be an intuitive and self-sustained model of the Earth's magnetic field. In this paper, we elucidate how a periodic signal could be embedded in the geomagnetic field via the mechanism of stochastic resonance in a forced Rikitake dynamo. Based on the stochastic resonance observed in the periodically forced Rikitake dynamo, we suggest a common trigger for geomagnetic reversals and glacial events. Both kinds of catastrophes may result from the cyclic variation of the Earth's orbital eccentricity.
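The stochastic resonance mechanism invoked here can be illustrated with the standard overdamped double-well system rather than the Rikitake equations themselves: a weak periodic forcing that cannot switch wells on its own becomes effective once noise is added. The parameter values below are generic choices for the sketch, not fitted to the dynamo.

```python
import random
from math import cos, pi

def well_hops(noise, amp=0.3, f=0.01, dt=0.05, steps=100000, seed=4):
    """Euler-Maruyama integration of the overdamped bistable system
    dx = (x - x**3 + amp*cos(2*pi*f*t)) dt + noise dW, counting
    well-to-well transitions (a toy stand-in for polarity reversals)."""
    rng = random.Random(seed)
    x, well, hops = -1.0, -1, 0
    for k in range(steps):
        t = k * dt
        drift = x - x ** 3 + amp * cos(2 * pi * f * t)
        x += drift * dt + noise * dt ** 0.5 * rng.gauss(0.0, 1.0)
        if well < 0 and x > 0.5:        # crossed into the right well
            well, hops = 1, hops + 1
        elif well > 0 and x < -0.5:     # crossed back into the left well
            well, hops = -1, hops + 1
    return hops

# amp = 0.3 is below the deterministic switching threshold (about 0.38),
# so the forcing alone never produces a reversal; noise enables them
print(well_hops(0.0), well_hops(0.5) > 0)
```

Sweeping the noise level and looking for the value that maximizes the periodicity of the hops is the usual way stochastic resonance is quantified.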
International Nuclear Information System (INIS)
Braumann, Andreas; Kraft, Markus; Wagner, Wolfgang
2010-01-01
This paper is concerned with computational aspects of a multidimensional population balance model of a wet granulation process. Wet granulation is a manufacturing method to form composite particles, granules, from small particles and binders. A detailed numerical study of a stochastic particle algorithm for the solution of a five-dimensional population balance model for wet granulation is presented. Each particle consists of two types of solids (containing pores) and of external and internal liquid (located in the pores). Several transformations of particles are considered, including coalescence, compaction and breakage. A convergence study is performed with respect to the parameter that determines the number of numerical particles. Averaged properties of the system are computed. In addition, the ensemble is subdivided into practically relevant size classes and analysed with respect to the amount of mass and the particle porosity in each class. These results illustrate the importance of the multidimensional approach. Finally, the kinetic equation corresponding to the stochastic model is discussed.
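The stochastic particle algorithm described in this abstract can be reduced to a one-dimensional sketch: Gillespie-style coalescence with a constant kernel, tracking only mass instead of the paper's five-dimensional granule state, and omitting compaction and breakage. The rate scaling and parameters below are illustrative assumptions.

```python
import random

def coalescence(n0=2000, t_end=1.0, seed=3):
    """Gillespie-style stochastic particle simulation of pure coalescence
    with a constant kernel.  Each event merges two particles chosen
    uniformly at random; waiting times between events are exponential
    with the total coalescence rate."""
    rng = rng = random.Random(seed)
    masses = [1.0] * n0                    # monodisperse initial population
    t = 0.0
    while len(masses) > 1:
        n = len(masses)
        rate = n * (n - 1) / (2.0 * n0)    # constant kernel, system volume ~ n0
        t += rng.expovariate(rate)
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)
        masses[i] += masses[j]             # coalescence conserves total mass
        masses.pop(j)
    return masses

g = coalescence()
print(round(sum(g)), len(g))
```

Total mass is conserved exactly while the particle count falls, close to the mean-field value n0/(1 + t/2) for this kernel; the convergence study in the paper corresponds to increasing n0.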
Directory of Open Access Journals (Sweden)
J. Rinne
2012-06-01
In the analyses of VOC fluxes measured above plant canopies, one usually assumes the flux above the canopy to equal the exchange at the surface, i.e. that chemical degradation is much slower than turbulent transport. We used a stochastic Lagrangian transport model, in which chemical degradation was described as first-order decay, to study the effect of chemical degradation on the above-canopy fluxes of chemically reactive species. With the model we explored the sensitivity of the ratio of the above-canopy flux to the surface emission on several parameters, such as the chemical lifetime of the compound, friction velocity, stability, and canopy density. Our results show that friction velocity and chemical lifetime affected the loss during transport the most. Canopy density had a significant effect if the chemically reactive compound was emitted from the forest floor. We used the results of the simulations, together with oxidant data measured during the HUMPPA-COPEC-2010 campaign at a Scots pine site, to estimate the effect of chemistry on the fluxes of three typical biogenic VOCs: isoprene, α-pinene, and β-caryophyllene. Of these, chemical degradation had a major effect on the fluxes of the most reactive species, β-caryophyllene, while the fluxes of α-pinene were affected during nighttime. For these two compounds, representing the monoterpene and sesquiterpene groups, the effect of chemical degradation also had a significant diurnal cycle, with the highest chemical loss at night. The different day- and night-time loss terms need to be accounted for when measured fluxes of reactive compounds are used to reveal relations between primary emission and environmental parameters.
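The competition between transport time and chemical lifetime described above can be sketched with a drastically simplified stand-in for the Lagrangian model: assume each emitted parcel needs a random (here exponential) time to reach the measurement height and decays with first-order lifetime tau_chem on the way. The 60 s transport timescale and the lifetimes are invented for illustration.

```python
import random
from math import exp

def escape_fraction(tau_chem, tau_transport=60.0, n=20000, seed=11):
    """Toy Lagrangian sketch: the above-canopy flux / surface emission
    ratio equals the mean chemical survival over the (random) transport
    time from the surface to the measurement height."""
    rng = random.Random(seed)
    survived = 0.0
    for _ in range(n):
        t = rng.expovariate(1.0 / tau_transport)   # transport time, seconds
        survived += exp(-t / tau_chem)             # first-order chemical decay
    return survived / n

# short-lived species (e.g. a sesquiterpene analogue) lose most of their
# flux; long-lived species (an isoprene analogue) lose almost none
for tau_chem in (30.0, 600.0, 36000.0):
    print(round(escape_fraction(tau_chem), 2))
```

For exponential transport times the exact ratio is tau_chem / (tau_chem + tau_transport), which the Monte Carlo estimate reproduces; the real model replaces the exponential draw with turbulent trajectories through the canopy.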
DEFF Research Database (Denmark)
Frier, Christian; Sørensen, John Dalsgaard
2005-01-01
For many reinforced concrete structures, corrosion of the reinforcement is an important problem, since it can result in expensive maintenance and repair actions. Further, a significant reduction of the load-bearing capacity can occur. One mode of corrosion initiation occurs when the chloride content … is modeled by a 2-dimensional diffusion process by FEM (Finite Element Method), and the diffusion coefficient, surface chloride concentration and reinforcement cover depth are modeled by multidimensional stochastic fields, which are discretized using the EOLE (Expansion Optimum Linear Estimation) approach. As an example, a bridge pier in a marine environment is considered, and the results are given in terms of the distribution of the time for initialization of corrosion…
Simulating transmission and control of Taenia solium infections using a reed-frost stochastic model
DEFF Research Database (Denmark)
Kyvsgaard, Niels Chr.; Johansen, Maria Vang; Carabin, Hélène
2007-01-01
… occur between hosts and that hosts can be either susceptible, infected or 'recovered and presumed immune'. Transmission between humans and pigs is modelled as susceptible roaming pigs scavenging on human faeces infected with T. solium eggs. Transmission from pigs to humans is modelled as susceptible humans eating under-cooked pork meat harbouring T. solium metacestodes. Deterministic models of each scenario were first run, followed by stochastic versions of the models to assess the likelihood of infection elimination in the small population modelled. The effects of three groups of interventions were investigated using the model: (i) interventions affecting the transmission parameters, such as use of latrines, meat inspection, and cooking habits; (ii) routine interventions, including rapid detection and treatment of human carriers or pig vaccination; and (iii) treatment interventions of either humans or pigs…
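The Reed-Frost chain-binomial mechanism underlying the model above is compact enough to state in code. The host numbers and transmission probability below are placeholders, not the T. solium parameterization of the paper, and the two-host structure is collapsed to a single population for the sketch.

```python
import random

def reed_frost(s0=100, c0=1, p=0.03, rng=None):
    """One run of a Reed-Frost chain-binomial epidemic: every susceptible
    independently escapes each of the C_t current cases with probability
    (1 - p), so new cases follow C_{t+1} ~ Binomial(S_t, 1 - (1-p)**C_t).
    Returns the final epidemic size."""
    rng = rng or random.Random()
    s, c, total = s0, c0, c0
    while c > 0:
        risk = 1.0 - (1.0 - p) ** c        # per-susceptible infection risk
        new = sum(1 for _ in range(s) if rng.random() < risk)
        s, c, total = s - new, new, total + new
    return total

rng = random.Random(5)
sizes = [reed_frost(rng=rng) for _ in range(500)]
fade_out = sum(1 for x in sizes if x <= 5) / len(sizes)
print(round(fade_out, 2), max(sizes) > 80)
```

Running many stochastic replicates and counting the runs that fade out early is exactly how the paper assesses the likelihood of elimination in a small population, something the deterministic version of the model cannot show.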
Stochastic Effects in Microstructure
Directory of Open Access Journals (Sweden)
Glicksman M.E.
2002-01-01
We are currently studying microstructural responses to diffusion-limited coarsening in two-phase materials. A mathematical solution to late-stage multiparticle diffusion in finite systems is formulated, with account taken of particle-particle interactions and their microstructural correlations, or "locales". The transition from finite-system behavior to that of an infinite microstructure is established analytically. Large-scale simulations of late-stage phase coarsening dynamics show, with increasing volume fraction Vv, increased fluctuations of the mean flux entering or leaving particles of a given size class. Fluctuations about the mean flux were found to depend on the scaled particle size R/⟨R⟩, where R is the radius of a particle and ⟨R⟩ is the radius of the dispersoid averaged over the population within the microstructure. Specifically, small (shrinking) particles tend to display weak fluctuations about their mean flux, whereas particles of average or above-average size exhibit strong fluctuations. Remarkably, even in cases of microstructures with a relatively small volume fraction (Vv ~ 10⁻⁴), the particle size distribution is broader than that of the well-known Lifshitz-Slyozov limit predicted at zero volume fraction. The simulation results reported here provide some additional surprising insights into the effects of diffusion interactions and stochastic effects during the evolution of a microstructure as it approaches its thermodynamic end-state.
Energy Technology Data Exchange (ETDEWEB)
Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in
2017-07-15
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first-order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of the calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single-degree-of-freedom (dof) nonlinear oscillators and a multi-degree-of-freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance-minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
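The variance-reduction idea behind the Girsanov-based importance sampling above has a simple static analogue: to estimate a small failure probability, sample from a shifted distribution (the "controlled" system) and correct each sample with the likelihood ratio. The sketch below does this for a Gaussian tail probability; it is an analogy for the paper's change of drift in a stochastic differential equation, not its actual algorithm.

```python
import random
from math import exp, sqrt, erfc

def importance_sampling(a=4.0, n=20000, seed=9):
    """Estimate the failure probability P(Z > a) for Z ~ N(0,1) by
    sampling from the shifted law N(a,1) and correcting with the
    likelihood ratio exp(-a*z + a*a/2), the static analogue of a
    Girsanov change of drift."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(a, 1.0)                   # "controlled" sample
        if z > a:
            total += exp(-a * z + a * a / 2.0)  # Radon-Nikodym weight
    return total / n

exact = 0.5 * erfc(4.0 / sqrt(2.0))             # about 3.2e-5
est = importance_sampling()
print(abs(est - exact) / exact < 0.1)
```

Direct Monte Carlo with the same 20000 samples would see roughly zero exceedances of a = 4; the shift makes every sample informative, which is the variance bound the FORM-like controls in the paper aim to minimize.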
Isothermal and non-isothermal cure of a tri-functional epoxy resin (TGAP): a stochastic TMDSC study
Hutchinson, John M.; Shiravand, Fatemeh; Calventus Solé, Yolanda; Fraga Rivas, Iria
2012-01-01
The isothermal cure of a highly reactive tri-functional epoxy resin, tri-glycidyl para-amino phenol (TGAP), with diamino diphenyl sulphone (DDS), at two different cure temperatures Tc has been studied by both conventional differential scanning calorimetry (DSC) and by a stochastic temperature modulated DSC technique, TOPEM. From a series of isothermal cure experiments for increasing cure times, the glass transition temperature Tg as a function of isothermal cure time is determined by co...
International Nuclear Information System (INIS)
Perl, J; Villagomez-Bernabe, B; Currell, F
2015-01-01
Purpose: The stochastic nature of the subatomic world presents a challenge for physics education. Even experienced physicists can be amazed at the varied behavior of the electrons, x-rays, protons, neutrons, ions and the many short-lived particles that make up the overall behavior of our accelerators, brachytherapy sources and medical imaging systems. The all-particle Monte Carlo particle transport tool TOPAS (Tool for Particle Simulation), originally developed for proton therapy research, has been repurposed into a physics teaching tool, TOPAS-edu. Methods: TOPAS-edu students set up simulated particle sources, collimators, scatterers, imagers and scoring setups by writing simple ASCII files (in the TOPAS Parameter Control System format). Students visualize geometry setups and particle trajectories in a variety of modes, from OpenGL graphics to VRML 3D viewers to gif and PostScript image files. Results written to simple comma-separated-values files are imported by the student into their preferred data analysis tool. Students can vary random seeds or adjust parameters of physics processes to better understand the stochastic nature of subatomic physics. Results: TOPAS-edu has been successfully deployed as the centerpiece of a physics course for master's students at Queen's University Belfast. Tutorials developed there take students through a step-by-step course on the basics of particle transport and interaction, scattering, Bremsstrahlung, etc. At each step in the course, students build simulated experimental setups and then analyze the simulated results. Lessons build one upon another, so that a student might end up with a full simulation of a medical accelerator, a water phantom or an imager. Conclusion: TOPAS-edu was well received by students. A second application of TOPAS-edu is currently in development at Zurich University of Applied Sciences, Switzerland. It is our eventual goal to make TOPAS-edu available free of charge to any non-profit organization, along with
The U.S. Environmental Protection Agency has conducted a probabilistic exposure and dose assessment on the arsenic (As) and chromium (Cr) components of Chromated Copper Arsenate (CCA) using the Stochastic Human Exposure and Dose Simulation model for wood preservatives (SHEDS-Wood...
Stochastic ferromagnetism analysis and numerics
Brzezniak, Zdzislaw; Neklyudov, Mikhail; Prohl, Andreas
2013-01-01
This monograph examines magnetization dynamics at elevated temperatures which can be described by the stochastic Landau-Lifshitz-Gilbert equation (SLLG). Comparative computational studies with the stochastic model are included. Constructive tools such as e.g. finite element methods are used to derive the theoretical results, which are then used for computational studies.
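For orientation, one commonly studied (Stratonovich) form of the stochastic Landau-Lifshitz-Gilbert equation referred to above is, schematically,

```latex
\mathrm{d}m \;=\; \bigl( m \times H_{\mathrm{eff}} \;-\; \alpha\, m \times (m \times H_{\mathrm{eff}}) \bigr)\,\mathrm{d}t
\;+\; \nu\, m \times \circ\,\mathrm{d}W(t), \qquad |m(t)| = 1,
```

where $\alpha$ is the Gilbert damping parameter, $\nu$ scales the thermal noise, and the Stratonovich product $\circ$ is what preserves the unit-length constraint on the magnetization; exact sign and scaling conventions vary between references, so this should be read as indicative rather than as the monograph's precise formulation.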
International Nuclear Information System (INIS)
Zwickl, Titus; Carleer, Bart; Kubli, Waldemar
2005-01-01
In the past decade, sheet metal forming simulation became a well-established tool to predict the formability of parts. In the automotive industry, this has enabled significant reductions in the cost and time of vehicle design and development, and has helped to improve the quality and performance of vehicle parts. However, production stoppages for troubleshooting and unplanned die maintenance, as well as production quality fluctuations, continue to plague manufacturing cost and time. The focus has therefore shifted in recent times beyond mere feasibility to the robustness of the product and process being engineered. Ensuring robustness is the next big challenge for virtual tryout / simulation technology. We introduce new methods, based on systematic stochastic simulations, to visualize the behavior of the part during the whole forming process, in simulation as well as in production. Sensitivity analysis explains the response of the part to changes in influencing parameters. Virtual tryout allows quick exploration of changed designs and conditions. Robust design and manufacturing guarantees quality and process capability for the production process. While conventional simulations helped to reduce development time and cost by ensuring feasible processes, robustness engineering tools have the potential for far greater cost and time savings. Through examples we illustrate how expected and unexpected behavior of deep drawing parts may be tracked down, identified and assigned to the influential parameters. With this knowledge, defects can be eliminated or springback can be compensated, for example; the response of the part to uncontrollable noise can be predicted and minimized. The newly introduced methods enable more reliable and predictable stamping processes in general.
Energy Technology Data Exchange (ETDEWEB)
Homem-de-Mello, Tito [University of Illinois at Chicago, Department of Mechanical and Industrial Engineering, Chicago, IL (United States); Matos, Vitor L. de; Finardi, Erlon C. [Universidade Federal de Santa Catarina, LabPlan - Laboratorio de Planejamento de Sistemas de Energia Eletrica, Florianopolis (Brazil)
2011-03-15
The long-term hydrothermal scheduling is one of the most important problems to be solved in the power systems area. This problem aims to obtain an optimal policy, under water (energy) resources uncertainty, for hydro and thermal plants over a multi-annual planning horizon. It is natural to model the problem as a multi-stage stochastic program, a class of models for which algorithms have been developed. The original stochastic process is represented by a finite scenario tree and, because of the large number of stages, a sampling-based method such as the Stochastic Dual Dynamic Programming (SDDP) algorithm is required. The purpose of this paper is two-fold. Firstly, we study the application of two alternative sampling strategies to the standard Monte Carlo - namely, Latin hypercube sampling and randomized quasi-Monte Carlo - for the generation of scenario trees, as well as for the sampling of scenarios that is part of the SDDP algorithm. Secondly, we discuss the formulation of stopping criteria for the optimization algorithm in terms of statistical hypothesis tests, which allows us to propose an alternative criterion that is more robust than that originally proposed for the SDDP. We test these ideas on a problem associated with the whole Brazilian power system, with a three-year planning horizon. (orig.)
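Of the alternative sampling strategies compared in this abstract, Latin hypercube sampling is simple enough to sketch directly; the stratification logic below is standard, though the dimensions and sample counts are illustrative rather than those of the Brazilian case study.

```python
import random

def latin_hypercube(n, dim, rng=None):
    """Latin hypercube sample of n points in [0,1)^dim: each coordinate
    axis is cut into n equal strata and every stratum receives exactly
    one point, giving more even coverage than plain Monte Carlo."""
    rng = rng or random.Random()
    columns = []
    for _ in range(dim):
        perm = list(range(n))
        rng.shuffle(perm)                       # assign strata at random
        columns.append([(cell + rng.random()) / n for cell in perm])
    return [list(point) for point in zip(*columns)]

points = latin_hypercube(10, 2, random.Random(2))
strata = sorted(int(x * 10) for x, _ in points)
print(strata)   # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Every one of the ten strata of each coordinate holds exactly one point, which is the property that reduces scenario-tree sampling variance relative to independent draws.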
Schertzer, D. J. M.; Tchiguirinskaia, I.
2016-12-01
Multifractal fields, whose definition is rather independent of their domain dimension, have opened a new approach to geophysics, enabling exploration of its spatial extension, which is of prime importance as underlined by the expression "spatial chaos". However, multifractals have until recently been restricted to be scalar valued, i.e. to one-dimensional codomains. This has prevented dealing with the key question of complex component interactions and their non-trivial symmetries. We first emphasize that the Lie algebra of stochastic generators of cascade processes enables us to generalize multifractals to arbitrarily large codomains, e.g. flows of vector fields on large-dimensional manifolds. In particular, we have recently investigated the neat example of stable Levy generators on Clifford algebra, which have a number of seductive properties, e.g. universal statistical and robust algebraic properties, both defining the basic symmetries of the corresponding fields (Schertzer and Tchiguirinskaia, 2015). These properties provide a convenient multifractal framework to study both the symmetries of the fields and how they stochastically break the symmetries of the underlying equations due to boundary conditions, large-scale rotations and forcings. These developments should help us to answer challenging questions such as the climatology of (exo-)planets based on first principles (Pierrehumbert, 2013), to fully address the question of the limitations of quasi-geostrophic turbulence (Schertzer et al., 2012), and to explore the peculiar phenomenology of turbulent dynamics of the atmosphere or oceans that is neither two- nor three-dimensional. Pierrehumbert, R.T., 2013. Strange news from other stars. Nature Geoscience, 6(2), pp. 81-83. Schertzer, D. et al., 2012. Quasi-geostrophic turbulence and generalized scale invariance, a theoretical reply. Atmos. Chem. Phys., 12, pp. 327-336. Schertzer, D. & Tchiguirinskaia, I., 2015. Multifractal vector fields and stochastic Clifford algebra
Stochastic dynamics and irreversibility
Tomé, Tânia
2015-01-01
This textbook presents an exposition of stochastic dynamics and irreversibility. It comprises the principles of probability theory and stochastic dynamics in continuous spaces, described by Langevin and Fokker-Planck equations, and in discrete spaces, described by Markov chains and master equations. Special concern is given to the study of irreversibility, both in systems that evolve to equilibrium and in nonequilibrium stationary states. Attention is also given to the study of models displaying phase transitions and critical phenomena, both in thermodynamic equilibrium and out of equilibrium. These models include the linear Glauber model, the Glauber-Ising model, lattice models with absorbing states such as the contact process and those used in population dynamics and the spreading of epidemics, probabilistic cellular automata, reaction-diffusion processes, random sequential adsorption and dynamic percolation. A stochastic approach to chemical reactions is also presented. The textbook is intended for students of ...
Separable quadratic stochastic operators
International Nuclear Information System (INIS)
Rozikov, U.A.; Nazir, S.
2009-04-01
We consider quadratic stochastic operators that are separable as a product of two linear operators. Depending on the properties of these linear operators, we classify the set of separable quadratic stochastic operators into a first class of constant operators, a second class of linear operators, and a third class of nonlinear (separable) quadratic stochastic operators. Since the properties of operators from the first and second classes are well known, we mainly study the properties of the operators of the third class. We describe some Lyapunov functions of the operators and apply them to study ω-limit sets of the trajectories generated by the operators. We also compare our results with known results of the theory of quadratic operators and give some open problems. (author)
Borodin, Andrei N
2017-01-01
This book provides a rigorous yet accessible introduction to the theory of stochastic processes. A significant part of the book is devoted to the classic theory of stochastic processes. In turn, it also presents proofs of well-known results, sometimes together with new approaches. Moreover, the book explores topics not previously covered elsewhere, such as distributions of functionals of diffusions stopped at different random times, the Brownian local time, diffusions with jumps, and an invariance principle for random walks and local times. Supported by carefully selected material, the book showcases a wealth of examples that demonstrate how to solve concrete problems by applying theoretical results. It addresses a broad range of applications, focusing on concrete computational techniques rather than on abstract theory. The content presented here is largely self-contained, making it suitable for researchers and graduate students alike.
Institute of Scientific and Technical Information of China (English)
徐岩; 胡斌
2012-01-01
The evolution of partners' strategies in multi-firm strategic alliances was considered from an evolutionary game theory perspective. A deterministic dynamical equation is developed; Gaussian white noise is then introduced to represent disturbances, yielding a stochastic dynamical equation. The catastrophe by which strategic alliances shift from cooperation to betrayal is analyzed by means of stochastic catastrophe theory. The catastrophe set of control variables is found, which explains and forecasts the catastrophe of strategic alliances. To validate the correctness of the model, numerical simulations are given for different scenarios; the illustrations show that the behavior of the strategic alliances undergoes catastrophe near the catastrophe set.
Model selection for integrated pest management with stochasticity.
Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel
2018-04-07
In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determined conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model.
Stochastic conditional intensity processes
DEFF Research Database (Denmark)
Bauwens, Luc; Hautsch, Nikolaus
2006-01-01
In this article, we introduce the so-called stochastic conditional intensity (SCI) model by extending Russell's (1999) autoregressive conditional intensity (ACI) model by a latent common dynamic factor that jointly drives the individual intensity components. We show by simulations that the proposed model allows for a wide range of (cross-)autocorrelation structures in multivariate point processes. The model is estimated by simulated maximum likelihood (SML) using the efficient importance sampling (EIS) technique. By modeling price intensities based on NYSE trading, we provide significant evidence for a joint latent factor and show that its inclusion allows for an improved and more parsimonious specification of the multivariate intensity process.
Stochastic approach to microphysics
Energy Technology Data Exchange (ETDEWEB)
Aron, J.C.
1987-01-01
The presently widespread idea of 'vacuum population', together with the quantum concept of vacuum fluctuations, leads one to assume a random level below that of matter. This stochastic approach starts with a reminder of the author's previous work, first on the relation of diffusion laws to the foundations of microphysics, and then on the hadron spectrum. Following the latter, a random quark model is advanced; it gives quark pairs properties similar to those of a harmonic oscillator or an elastic string, imagined as an explanation for their asymptotic freedom and their confinement. The stochastic study of such interactions as electron-nucleon scattering, jets in e⁺e⁻ collisions, or pp → π⁰ + X, gives form factors closely consistent with experiment. The conclusion is an epistemological comment (complementarity between the stochastic and quantum domains, the E.P.R. paradox, etc.).
Olea, Ricardo A.; Luppens, James A.
2015-01-01
Coal is a chemically complex commodity that often contains most of the natural elements in the periodic table. Coal constituents are conventionally grouped into four components (proximate analysis): fixed carbon, ash, inherent moisture, and volatile matter. These four parts, customarily measured as weight losses and expressed as percentages, share all properties and statistical challenges of compositional data. Consequently, adequate modeling should be done in terms of a logratio transformation, a requirement that is commonly overlooked by modelers. The transformation of choice is the isometric logratio transformation because of its geometrical and statistical advantages. The modeling is done through a series of realizations prepared by applying sequential simulation for the purpose of displaying the parts in maps incorporating uncertainty. The approach makes realistic assumptions and the results honor the data and basic considerations, such as percentages between 0 and 100, all four parts adding to 100% at any location in the study area, and a style of spatial fluctuation in the realizations equal to that of the data. The realizations are used to prepare different results, including probability distributions across a deposit, E-type maps displaying average properties, and probability maps summarizing joint fluctuations of several parts. Application of these maps to a lignite bed clearly delineates the deposit boundary, reveals a channel cutting across, and shows that the most favorable coal quality is to the north and deteriorates toward the southeast.
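The logratio step the abstract calls for can be sketched in a few lines. This is a minimal implementation of the isometric logratio (ilr) transform in its pivot-coordinate form, together with its inverse; the four-part proximate analysis values are illustrative, not data from the study.

```python
import numpy as np

def ilr(parts):
    """Isometric logratio transform of a composition (parts > 0, any total).
    Pivot coordinates: z_i = sqrt(i/(i+1)) * ln(g(x_1..x_i) / x_{i+1})."""
    x = np.asarray(parts, dtype=float)
    x = x / x.sum()                          # closure to proportions
    D = x.size
    z = np.empty(D - 1)
    for i in range(1, D):
        g = np.exp(np.mean(np.log(x[:i])))   # geometric mean of first i parts
        z[i - 1] = np.sqrt(i / (i + 1)) * np.log(g / x[i])
    return z

def ilr_inv(z):
    """Inverse transform: back to a composition summing to 1."""
    D = z.size + 1
    y = np.zeros(D)                          # reconstruct clr coordinates
    for i in range(1, D):
        c = np.sqrt(i / (i + 1)) * z[i - 1]
        y[:i] += c / i
        y[i] -= c
    x = np.exp(y)
    return x / x.sum()

# Proximate analysis: fixed carbon, ash, inherent moisture, volatile matter (wt %).
coal = [48.0, 12.0, 30.0, 10.0]
z = ilr(coal)                # 3 unconstrained coordinates for a 4-part composition
back = ilr_inv(z)
print(z)
print(back * 100)            # recovers the original percentages
```

Modeling (e.g. sequential simulation) is then done on the unconstrained ilr coordinates, and realizations are back-transformed so every simulated location automatically satisfies the 0-100% and sum-to-100% constraints.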
Stochastic analysis in production process and ecology under uncertainty
Bieda, Bogusław
2014-01-01
The monograph addresses stochastic analysis based on uncertainty assessment by simulation, and the application of this method to ecology and the steel industry under uncertainty. The first chapter defines the Monte Carlo (MC) method and random variables in stochastic models. Chapter two deals with contaminant transport in porous media; a stochastic approach to modeling the transit time of municipal solid waste contaminants using MC simulation is worked out. The third chapter describes the risk analysis of the waste-to-energy facility proposal for the city of Konin, including the financial aspects. An environmental impact assessment of the ArcelorMittal Steel Power Plant in Kraków is given in chapter four, where four scenarios of the energy mix production processes were studied. Chapter five contains examples of using ecological Life Cycle Assessment (LCA) - a relatively new method of environmental impact assessment - which helps in preparing a pro-ecological strategy, and which can lead to reducing t...
Stochastic reaction-diffusion algorithms for macromolecular crowding
Sturrock, Marc
2016-06-01
Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.
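A minimal compartment-based sketch of the crowding effect follows, on a square lattice rather than the cubic or hexagonal close packed lattices of the paper. Immobile obstacles occupy a fraction phi of sites and hops into them are rejected, which lowers the mean squared displacement. All parameters are illustrative, and molecules exclude only obstacles, not each other, for simplicity.

```python
import random

def simulate_crowded_diffusion(size=20, n_mol=100, phi=0.3, d=1.0,
                               t_end=10.0, seed=1):
    """Gillespie-style diffusion on a square lattice where a fraction phi of
    sites is blocked by immobile obstacles; hops into blocked sites are
    rejected. Returns the mean squared displacement (lattice units) at t_end."""
    rng = random.Random(seed)
    sites = [(i, j) for i in range(size) for j in range(size)]
    blocked = {s for s in sites if rng.random() < phi}
    free = [s for s in sites if s not in blocked]
    pos = [rng.choice(free) for _ in range(n_mol)]
    disp = [(0, 0)] * n_mol              # unwrapped displacement per molecule
    t = 0.0
    total_rate = 4 * d * n_mol           # each molecule attempts hops at rate 4d
    while True:
        t += rng.expovariate(total_rate)       # exponential waiting time
        if t > t_end:
            break
        m = rng.randrange(n_mol)               # pick a molecule uniformly
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        i, j = pos[m]
        target = ((i + dx) % size, (j + dy) % size)   # periodic boundaries
        if target not in blocked:              # crowding: blocked hops rejected
            pos[m] = target
            disp[m] = (disp[m][0] + dx, disp[m][1] + dy)
    return sum(u * u + v * v for u, v in disp) / n_mol

# Obstacles substantially reduce the mean squared displacement.
print(simulate_crowded_diffusion(phi=0.0), simulate_crowded_diffusion(phi=0.45))
```

Even this toy model reproduces the qualitative finding above: as the obstacle fraction grows, the mean squared displacement drops sharply, and near the percolation threshold of the free sites diffusion is strongly suppressed.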
Model-free stochastic processes studied with q-wavelet-based informational tools
International Nuclear Information System (INIS)
Perez, D.G.; Zunino, L.; Martin, M.T.; Garavaglia, M.; Plastino, A.; Rosso, O.A.
2007-01-01
We undertake a model-free investigation of stochastic processes employing q-wavelet-based quantifiers, which constitute a generalization of their Shannon counterparts. It is shown that (i) interesting physical information becomes accessible in this way, (ii) for special q values the quantifiers are more sensitive than the Shannon ones, and (iii) there exists an implicit relationship between the Hurst parameter H and q within this wavelet framework
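The flavour of such quantifiers can be illustrated with a toy version: relative Haar wavelet energies per dyadic level serve as a probability distribution, and the Tsallis q-entropy generalizes the Shannon entropy, which is recovered at q = 1. This is a simplified stand-in for the paper's q-wavelet quantifiers, applied to illustrative noise data.

```python
import math, random

def haar_level_energies(signal):
    """Relative energy of the Haar detail coefficients at each dyadic level
    (signal length must be a power of two)."""
    s = list(signal)
    energies = []
    while len(s) > 1:
        detail = [(s[2*i] - s[2*i+1]) / math.sqrt(2) for i in range(len(s)//2)]
        s = [(s[2*i] + s[2*i+1]) / math.sqrt(2) for i in range(len(s)//2)]
        energies.append(sum(d * d for d in detail))
    total = sum(energies)
    return [e / total for e in energies]

def tsallis_entropy(p, q):
    """Tsallis q-entropy of a distribution p; recovers Shannon entropy as q -> 1."""
    if q == 1:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

rng = random.Random(0)
noise = [rng.gauss(0, 1) for _ in range(256)]    # toy stochastic process
p = haar_level_energies(noise)                   # 8 levels for length 256
print([round(tsallis_entropy(p, q), 3) for q in (0.5, 1, 2)])
```

Sweeping q changes how strongly rare versus dominant scales contribute, which is the mechanism behind point (ii) above.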
Pang, Kar Mun; Jangi, Mehdi; Bai, X.-S.; Schramm, Jesper; Walther, Jens Honore
2016-01-01
The use of transported Probability Density Function (PDF) methods allows a single model to compute the autoignition, premixed mode and diffusion flame of diesel combustion under engine-like conditions [1,2]. The Lagrangian particle based transported PDF models have been validated across a wide range of conditions [2,3]. Alternatively, the transported PDF model can also be formulated in the Eulerian framework [4]. The Eulerian PDF is commonly known as the Eulerian Stochastic Fields (ESF) model. ...
International Nuclear Information System (INIS)
Zhu, Zhiwen; Zhang, Qingxin; Xu, Jia
2014-01-01
Stochastic bifurcation and fractal and chaos control of a giant magnetostrictive film–shape memory alloy (GMF–SMA) composite cantilever plate subjected to in-plane harmonic and stochastic excitation were studied. Van der Pol terms were improved to interpret the hysteretic phenomena of both GMF and SMA, and the nonlinear dynamic model of a GMF–SMA composite cantilever plate subjected to in-plane harmonic and stochastic excitation was developed. The probability density function of the dynamic response of the system was obtained, and the conditions for stochastic Hopf bifurcation were analyzed. The conditions for noise-induced chaotic response were obtained by the stochastic Melnikov integral method, and the fractal boundary of the safe basin of the system was provided. Finally, a chaos control strategy was proposed based on stochastic dynamic programming. Numerical simulation shows that stochastic Hopf bifurcation and chaos appear as the parameters vary. The boundary of the safe basin of the system has fractal characteristics, and its area decreases as the noise intensifies. The system reliability was improved through stochastic optimal control, and the safe basin area of the system increased
Using metrics in stability of stochastic programming problems
Czech Academy of Sciences Publication Activity Database
Houda, Michal
2005-01-01
Roč. 13, č. 1 (2005), s. 128-134 ISSN 0572-3043 R&D Projects: GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : stochastic programming * quantitative stability * Wasserstein metrics * Kolmogorov metrics * simulation study Subject RIV: BB - Applied Statistics, Operational Research
DEFF Research Database (Denmark)
Simonsen, Maria
This thesis treats stochastic systems with switching dynamics. Models with these characteristics are studied from several perspectives: initially in a simple framework given in the form of stochastic differential equations and, later, in an extended form which fits into the framework of sliding mode control. It is investigated how to understand and interpret solutions to models of switched systems, which are exposed to discontinuous dynamics and uncertainties (primarily) in the form of white noise. The goal is to gain knowledge about the performance of the system by interpreting the solution...
International Nuclear Information System (INIS)
Nakayasu, Hidetoshi; Maekawa, Zen'ichiro
1997-01-01
One of the major objectives of this paper is to offer a practical tool for the materials design of unidirectional composite laminates under in-plane multiaxial load. Design-oriented failure criteria of composite materials are applied to construct an evaluation model of probabilistic safety based on extended structural reliability theory. Typical failure criteria such as maximum stress, maximum strain and quadratic polynomial failure criteria are compared from the viewpoint of reliability-oriented materials design of composite materials. A new design diagram is also proposed, which shows the feasible region in in-plane strain space and corresponds to a safety index or failure probability. These stochastic failure envelope diagrams, drawn in in-plane strain space, enable one to evaluate the stochastic behavior of a composite laminate with any lamination angle under multiaxial stress or strain conditions. Numerical analysis for a graphite/epoxy laminate of T300/5208 is shown for the comparative verification of failure criteria under various combinations of multiaxial load conditions and lamination angles. The stochastic failure envelopes of T300/5208 are also described in in-plane strain space
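The idea of a stochastic failure envelope can be sketched by Monte Carlo: sample scattered strain allowables, apply a maximum-strain criterion at a given in-plane strain state, and estimate the failure probability. The strength statistics below are illustrative assumptions, not T300/5208 data, and only the tensile branch of the criterion is shown.

```python
import random

def failure_probability(eps1, eps2, n=20000, seed=0):
    """Monte Carlo estimate of the failure probability of a lamina at the
    in-plane strain state (eps1, eps2), under a maximum-strain criterion
    with normally scattered allowables (illustrative values)."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        e1t = rng.gauss(0.010, 0.0008)   # fibre-direction tensile allowable
        e2t = rng.gauss(0.006, 0.0009)   # transverse tensile allowable
        if eps1 > e1t or eps2 > e2t:     # maximum-strain failure criterion
            fails += 1
    return fails / n

# Sweeping one strain axis traces a contour of the stochastic failure envelope:
# the failure probability rises as the strain state moves outward.
for eps2 in (0.003, 0.005, 0.007):
    print(eps2, failure_probability(eps1=0.004, eps2=eps2))
```

Repeating this over a grid of (eps1, eps2) and contouring the probabilities gives exactly the kind of failure-probability diagram in strain space that the abstract describes.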
O'Neill, J. J.; Cai, X.; Kinnersley, R.
2015-12-01
Large-eddy simulation (LES) provides a powerful tool for developing our understanding of atmospheric boundary layer (ABL) dynamics, which in turn can be used to improve the parameterisations of simpler operational models. However, LES modelling is not without its own limitations - most notably, the need to parameterise the effects of all subgrid-scale (SGS) turbulence. Here, we employ a stochastic backscatter SGS model, which explicitly handles the effects of both forward and reverse energy transfer to/from the subgrid scales, to simulate the neutrally stratified ABL as well as flow within an idealised urban street canyon. In both cases, a clear improvement in LES output statistics is observed when compared with the performance of a SGS model that handles forward energy transfer only. In the neutral ABL case, the near-surface velocity profile is brought significantly closer towards its expected logarithmic form. In the street canyon case, the strength of the primary vortex that forms within the canyon is more accurately reproduced when compared to wind tunnel measurements. Our results indicate that grid-scale backscatter plays an important role in both these modelled situations.
Assari, Amin; Mohammadi, Zargham
2017-09-01
Karst systems show high spatial variability of hydraulic parameters over small distances, and this makes their modeling a difficult task with several uncertainties. Interconnections of fractures have a major role in the transport of groundwater, but many of the stochastic methods in use do not have the capability to reproduce these complex structures. A methodology is presented for the quantification of tortuosity using the single normal equation simulation (SNESIM) algorithm and a groundwater flow model. A training image was produced based on the statistical parameters of fractures and then used in the simulation process. The SNESIM algorithm was used to generate 75 realizations of the four classes of fractures in a karst aquifer in Iran. The results from six dye tracing tests were used to assign hydraulic conductivity values to each class of fractures. In the next step, the MODFLOW-CFP and MODPATH codes were consecutively implemented to compute the groundwater flow paths. The 9,000 flow paths obtained from the MODPATH code were further analyzed to calculate the tortuosity factor. Finally, the hydraulic conductivity values calculated from the dye tracing experiments were refined using the actual flow paths of groundwater. The key outcomes of this research are: (1) a methodology for the quantification of tortuosity; (2) hydraulic conductivities that are incorrectly estimated (biased low) by empirical equations that assume Darcian (laminar) flow with parallel rather than tortuous streamlines; and (3) an understanding of the scale-dependence and non-normal distributions of tortuosity.
Stochastic modeling and analysis of telecoms networks
Decreusefond, Laurent
2012-01-01
This book addresses the stochastic modeling of telecommunication networks, introducing the main mathematical tools for that purpose, such as Markov processes, real and spatial point processes and stochastic recursions, and presenting a wide range of results on stability, performance and comparison of systems. The authors propose a comprehensive mathematical construction of the foundations of stochastic network theory: Markov chains and continuous-time Markov chains are extensively studied using an original martingale-based approach. A complete presentation of stochastic recursions from an
Dynamical and hamiltonian dilations of stochastic processes
International Nuclear Information System (INIS)
Baumgartner, B.; Gruemm, H.-R.
1982-01-01
This is a study of the problem of which stochastic processes can arise from dynamical systems through loss of information. The notions of ''dilation'' and ''approximate dilation'' of a stochastic process are introduced to give exact definitions of this particular relationship. It is shown that every generalized stochastic process is approximately dilatable by a sequence of dynamical systems, but for stochastic processes in full generality one needs nets. (Author)
Stochastic dynamic modeling of regular and slow earthquakes
Aso, N.; Ando, R.; Ide, S.
2017-12-01
Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even if we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at these smaller scales, we need to consider stochastic interactions between slip and stress in dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be considered a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal
Brownian motion and stochastic calculus
Karatzas, Ioannis
1998-01-01
This book is designed as a text for graduate courses in stochastic processes. It is written for readers familiar with measure-theoretic probability and discrete-time processes who wish to explore stochastic processes in continuous time. The vehicle chosen for this exposition is Brownian motion, which is presented as the canonical example of both a martingale and a Markov process with continuous paths. In this context, the theory of stochastic integration and stochastic calculus is developed. The power of this calculus is illustrated by results concerning representations of martingales and change of measure on Wiener space, and these in turn permit a presentation of recent advances in financial economics (option pricing and consumption/investment optimization). This book contains a detailed discussion of weak and strong solutions of stochastic differential equations and a study of local time for semimartingales, with special emphasis on the theory of Brownian local time. The text is complemented by a large num...
Study of swelling by simulation
International Nuclear Information System (INIS)
Gilbon, D.; Le Naour, L.; Didout, G.
1983-06-01
Fuel cans and the hexagonal tubes containing the pins must withstand high irradiation doses (220 or even 275 dpa) with low swelling. Qualification of a new cladding alloy requires several years of irradiation in a reactor. For a fast first selection, simulation by 1 MeV electrons or heavy ions is used to enhance radiation damage. The principles of these techniques are recalled and some examples, mainly with 316 steel, are given. The results are compared with results obtained in reactor to determine the limits of the simulation [fr]
Polyakov, Evgeny A.; Rubtsov, Alexey N.
2018-02-01
When conducting numerical simulations of quantum transport, the main obstacle is the rapid growth of the dimension of the entangled Hilbert subspace. Quantum Monte Carlo simulation techniques, while capable of treating problems of high dimension, are hindered by the so-called "sign problem". In quantum transport, there is a fundamental asymmetry between the processes of emission and absorption of environment excitations: the emitted excitations are rapidly and irreversibly scattered away, whereas only a small part of these excitations is absorbed back by the open subsystem, thus exercising the non-Markovian self-action of the subsystem onto itself. We have devised a method for the exact simulation of the dominant quantum emission processes, while taking into account the small backaction effects in an approximate self-consistent way. Such an approach allows us to efficiently conduct simulations of the real-time dynamics of small quantum subsystems immersed in a non-Markovian bath for large times, reaching the quasistationary regime. As an example we calculate the spatial quench dynamics of the Kondo cloud for a bosonized Kondo impurity model.
Stochastic differential equations and a biological system
DEFF Research Database (Denmark)
Wang, Chunyan
1994-01-01
The purpose of this Ph.D. study is to explore the properties of a growth process. The study includes solving and simulating the growth process, which is described in terms of stochastic differential equations. The identification of the growth and variability parameters of the process based on experimental data is considered. As an example, the growth of the bacterium Pseudomonas fluorescens is taken. Due to the specific features of stochastic differential equations, namely that their solutions do not exist in the general sense, two new integrals - the Ito integral and the Stratonovich integral - have ... description. In order to identify the parameters, a maximum likelihood estimation method is used together with a simplified truncated second order filter. Because of the continuity feature of the predictor equation, two numerical integration methods, called the Odeint and the Discretization method...
International Nuclear Information System (INIS)
Colombino, A.; Mosiello, R.; Norelli, F.; Jorio, V.M.; Pacilio, N.
1975-01-01
A nuclear system kinetics is formulated according to a stochastic approach. The detailed probability balance equations are written for the probability of finding the mixed population of neutrons and detected neutrons, i.e. detectrons, at a given level at a given instant of time. The equations are integrated in search of a probability profile: a series of cases is analyzed through a progressive criterion, taking into account an increasing number of physical processes within the chosen model. The most important contribution is that the solutions analytically interpret experimental conditions of equilibrium (noise analysis) and non-equilibrium (pulsed neutron measurements, the source drop technique, start-up procedures)
Directory of Open Access Journals (Sweden)
Romanu Ekaterini
2006-01-01
Full Text Available This article shows the similarities between Claude Debussy's and Iannis Xenakis' philosophies of music and work, in particular the former's Jeux and the latter's Metastasis and the stochastic works succeeding it, which seem to proceed in parallel (with no personal contact) with what is perceived as the evolution of 20th-century Western music. These two composers observed the dominant (German) tradition as outsiders, and negated some of its elements considered constant or natural by "traditional" innovators (i.e. serialists): the linearity of musical texture, its form and rhythm.
International Nuclear Information System (INIS)
Singh, Kamal P.; Ropars, Guy; Brunel, Marc; Le Floch, Albert
2006-01-01
We investigate the two-dimensional optical rotor of a weakly modulated vectorial bistable laser submitted to a single or multiple stochastic perturbations. In the Langevin-type equation of the rotor the role of an even or odd input forcing function on the system dynamics is isolated. Through these two inputs of optical and magnetic natures we verify that the stochastic resonance exists only when the periodic modulation acts on the even parity optical input. When two mutually correlated noises are simultaneously submitted to the input functions of opposite parities, we find a critical regime of the noise interplay whereby one stable state becomes noise-free. In this case, the residence time of the light vector in the noise-free state diverges which leads to a collapse of the output signal-to-noise ratio. But, in this critical regime also obtained when one noise drives both the even and odd functions, if the system symmetry is broken through an independent lever control, we can recover the switching cycle due to a new response mechanism, namely, the dual stochastic response, with a specific output signal-to-noise ratio expression. Both the theoretical analysis and the experiment show that the signal-to-noise ratio now displays a robust behavior for a large range of the input noise amplitude, and a plateau with respect to the input signal amplitude. Furthermore, we isolate an original signature of this synchronization mechanism in the residence-time distribution leading to a broadband forcing frequency range. These noise interplay effects in a double well potential are of generic nature and could be found in other nonlinear systems
Stochastic structure of annual discharges of large European rivers
Directory of Open Access Journals (Sweden)
Stojković Milan
2015-03-01
Full Text Available Water resources have become a guarantee for sustainable development on both local and global scales. Exploiting water resources involves the development of hydrological models for water management planning. In this paper we present a new stochastic model for the generation of mean annual flows. The model is based on the historical characteristics of the time series of annual flows and consists of a trend component, a long-term periodic component and a stochastic component. The residuals of the deterministic components are model errors, which are represented as a random time series. The random time series is generated by the single bootstrap model (SBM): a stochastic ensemble of error terms at a single hydrological station is formed using the SBM method. The resulting stochastic model generates annual flows and presents a useful tool for integrated river basin planning and water management studies. The model is applied to ten large European rivers with long observation periods. Validation of the model results suggests that the stochastic flows simulated by the model can be used for hydrological simulations in river basins.
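The structure of such a model (trend plus long-term harmonic plus bootstrapped residuals) can be sketched as follows. The fitting details and the toy flow record are illustrative simplifications of the SBM approach, not the paper's exact procedure.

```python
import math, random

def generate_annual_flows(obs, n_years=100, period=20, seed=42):
    """Generate synthetic mean annual flows: fit a linear trend plus one
    long-term harmonic, then bootstrap (resample with replacement) the
    residuals as the stochastic component."""
    rng = random.Random(seed)
    n = len(obs)
    t = list(range(n))
    # Linear trend by least squares.
    tm, om = sum(t) / n, sum(obs) / n
    b = sum((ti - tm) * (o - om) for ti, o in zip(t, obs)) / \
        sum((ti - tm) ** 2 for ti in t)
    a = om - b * tm
    # One harmonic of the chosen long-term period, fitted by regression.
    c = sum((o - a - b * ti) * math.cos(2 * math.pi * ti / period)
            for ti, o in zip(t, obs)) * 2 / n
    s = sum((o - a - b * ti) * math.sin(2 * math.pi * ti / period)
            for ti, o in zip(t, obs)) * 2 / n
    def model(ti):
        return (a + b * ti
                + c * math.cos(2 * math.pi * ti / period)
                + s * math.sin(2 * math.pi * ti / period))
    residuals = [o - model(ti) for ti, o in zip(t, obs)]
    # Stochastic component: bootstrap the model errors.
    return [model(n + k) + rng.choice(residuals) for k in range(n_years)]

# Toy 60-year record standing in for an observed annual-flow series (m^3/s).
obs = [1500 + 2 * i + 100 * math.sin(2 * math.pi * i / 20)
       + random.Random(i).gauss(0, 50) for i in range(60)]
synthetic = generate_annual_flows(obs)
print(len(synthetic), sum(synthetic) / len(synthetic))
```

Repeating the bootstrap with different seeds yields an ensemble of equally plausible flow series for water management studies.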
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita
2014-06-01
Traditional portfolio optimization methods such as Markowitz' mean-variance and semi-variance models use static expected return and volatility risk estimates from historical data to generate an optimal portfolio. The resulting portfolio may not truly be optimal in reality, because extreme values in the data may strongly influence the expected return and volatility risk estimates. This paper considers the distributions of assets' returns and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial index data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
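The contrast between a static optimum and a distribution-aware one can be sketched with a bootstrap: static mean-variance weights come from a single sample mean and covariance, while the "stochastic" weights average the optimum over resampled return histories, so no single extreme observation dominates. This is a simplified stand-in for the paper's method, using synthetic returns.

```python
import numpy as np

def mv_weights(returns, ridge=1e-6):
    """Static mean-variance weights w proportional to inv(Cov) @ mu, normalised to sum to 1."""
    mu = returns.mean(axis=0)
    cov = np.cov(returns, rowvar=False) + ridge * np.eye(returns.shape[1])
    w = np.linalg.solve(cov, mu)
    return w / w.sum()

def resampled_weights(returns, n_boot=200, seed=0):
    """Distribution-aware alternative: average the optimal weights over
    bootstrap resamples of the return history."""
    rng = np.random.default_rng(seed)
    n = returns.shape[0]
    ws = [mv_weights(returns[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.mean(ws, axis=0)

# Synthetic monthly returns standing in for three sectorial indices.
rng = np.random.default_rng(1)
true_mu = np.array([0.008, 0.006, 0.010])
R = rng.normal(true_mu, 0.04, size=(120, 3))

w_static = mv_weights(R)
w_stoch = resampled_weights(R)
print(np.round(w_static, 3), np.round(w_stoch, 3))
```

Comparing the dispersion of `mv_weights` across the bootstrap resamples with the averaged weights illustrates why a distribution-based optimum tends to yield a more stable information ratio than a single static estimate.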
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
Energy Technology Data Exchange (ETDEWEB)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita [School of Mathematical Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia)
2014-06-19
Traditional portfolio optimization methods, such as Markowitz's mean-variance and semi-variance models, use static expected returns and volatility risks computed from historical data to generate an optimal portfolio. The resulting portfolio may not be truly optimal in practice, because extreme maximum and minimum values in the data can strongly influence the expected return and volatility estimates. This paper instead considers the distributions of the assets' returns and volatility risks to determine a more realistic optimized portfolio. For illustration, sectorial index data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
International Nuclear Information System (INIS)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita
2014-01-01
Traditional portfolio optimization methods, such as Markowitz's mean-variance and semi-variance models, use static expected returns and volatility risks computed from historical data to generate an optimal portfolio. The resulting portfolio may not be truly optimal in practice, because extreme maximum and minimum values in the data can strongly influence the expected return and volatility estimates. This paper instead considers the distributions of the assets' returns and volatility risks to determine a more realistic optimized portfolio. For illustration, sectorial index data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
Geometric integrators for stochastic rigid body dynamics
Tretyakov, Mikhail
2016-01-05
Geometric integrators play an important role in simulating dynamical systems over long time intervals with high accuracy. We illustrate geometric integration ideas in the stochastic context, mostly through examples of stochastic thermostats for rigid body dynamics. The talk is based mainly on recent joint work with Ruslan Davidchak and Tom Ouldridge.
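A standard example of a stochastic thermostat is Langevin dynamics integrated with the BAOAB splitting. The sketch below is not the rigid-body scheme of the talk; it is a scalar stand-in, assuming a unit-mass particle in the harmonic potential U(x) = x²/2, that shows how the deterministic kicks/drifts (B, A) are interleaved with an exactly solvable Ornstein-Uhlenbeck step (O). All parameter values are invented.

```python
import math
import random

def baoab(x, v, n_steps, dt, gamma=1.0, kT=1.0, seed=0):
    """BAOAB splitting for Langevin dynamics of a unit-mass particle in
    the harmonic potential U(x) = x^2/2. Returns the sampled mean of v^2,
    which should approach kT for a well-behaved thermostat."""
    rng = random.Random(seed)
    c1 = math.exp(-gamma * dt)                  # OU damping factor
    c2 = math.sqrt(kT * (1.0 - c1 * c1))        # OU noise amplitude
    v2_sum = 0.0
    for _ in range(n_steps):
        v -= 0.5 * dt * x                       # B: half kick (force = -x)
        x += 0.5 * dt * v                       # A: half drift
        v = c1 * v + c2 * rng.gauss(0.0, 1.0)   # O: exact OU update
        x += 0.5 * dt * v                       # A: half drift
        v -= 0.5 * dt * x                       # B: half kick
        v2_sum += v * v
    return v2_sum / n_steps

mean_v2 = baoab(1.0, 0.0, n_steps=200000, dt=0.05)
print(mean_v2)  # close to kT = 1 up to sampling noise
```

The exact treatment of the O-step is what makes such schemes attractive for long-time sampling; extending the idea to rotational degrees of freedom is the subject of the work described above.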
Geometric integrators for stochastic rigid body dynamics
Tretyakov, Mikhail
2016-01-01
Geometric integrators play an important role in simulating dynamical systems over long time intervals with high accuracy. We illustrate geometric integration ideas in the stochastic context, mostly through examples of stochastic thermostats for rigid body dynamics. The talk is based mainly on recent joint work with Ruslan Davidchak and Tom Ouldridge.
OpenSimulator Interoperability with DRDC Simulation Tools: Compatibility Study
2014-09-01
…user account information, user assets, avatar configuration). In standalone mode, SQLite is the default database. In grid mode, MySQL is… stochastic model. A "Model Bridge" application was developed in Java to allow IPME to poll the outputs from FSSIM. Communication was not performed in…
An improved stochastic algorithm for temperature-dependent homogeneous gas phase reactions
Kraft, M
2003-01-01
We propose an improved stochastic algorithm for temperature-dependent homogeneous gas phase reactions. By combining forward and reverse reaction rates, a significant gain in computational efficiency is achieved. Two modifications of modelling the temperature dependence (with and without conservation of enthalpy) are introduced and studied quantitatively. The algorithm is tested for the combustion of n-heptane, which is a reference fuel component for internal combustion engines. The convergence of the algorithm is studied by a series of numerical experiments, and the computational cost of the stochastic algorithm is compared with that of the DAE code DASSL. If less accuracy is needed, the stochastic algorithm is faster on short simulation time intervals. The new stochastic algorithm is significantly faster than the original direct simulation algorithm in all cases considered.
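The baseline against which such algorithms are measured is Gillespie's direct stochastic simulation algorithm. The sketch below is a generic SSA for the reversible isomerization A ⇌ B, not the paper's improved temperature-dependent scheme; it shows the forward/reverse rate bookkeeping that the combined-rate optimization targets. Rates and counts are invented.

```python
import math
import random

def ssa_reversible(n_a, n_b, kf, kr, t_end, seed=0):
    """Gillespie direct method for A <-> B with mass-action propensities.
    kf, kr: forward and reverse rate constants per molecule."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        a_f, a_r = kf * n_a, kr * n_b       # channel propensities
        a_tot = a_f + a_r
        if a_tot == 0.0:
            break                           # no reaction can fire
        t += -math.log(rng.random()) / a_tot  # exponential waiting time
        if t > t_end:
            break
        if rng.random() * a_tot < a_f:
            n_a, n_b = n_a - 1, n_b + 1     # forward: A -> B
        else:
            n_a, n_b = n_a + 1, n_b - 1     # reverse: B -> A
    return n_a, n_b

n_a, n_b = ssa_reversible(1000, 0, kf=1.0, kr=1.0, t_end=50.0)
print(n_a + n_b)  # 1000 (total molecule count is conserved)
```

With kf = kr the system relaxes to roughly equal populations; combining the two channels into one effective rate, as the paper does, reduces the per-step work without changing this stationary behaviour.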
Lu, M.; Lall, U.
2013-12-01
In order to mitigate the impacts of climate change, proactive management strategies to operate reservoirs and dams are needed. A multi-time-scale, climate-informed stochastic model is developed to optimize the operations of a multi-purpose single reservoir by simulating decadal, interannual, seasonal and sub-seasonal variability. We apply the model to a setting motivated by the largest multi-purpose dam in N. India, the Bhakhra reservoir on the Sutlej River, a tributary of the Indus. This leads to a focus on the timing and amplitude of the flows for the monsoon and snowmelt periods. The flow simulations are constrained by multiple sources of historical data and GCM future projections, which are being developed through an NSF-funded project titled 'Decadal Prediction and Stochastic Simulation of Hydroclimate Over Monsoon Asia'. The model presented is a multilevel, nonlinear programming model that aims to optimize the reservoir operating policy on a decadal horizon and the operation strategy on an annually updated basis. The model is hierarchical: two optimization models designated for different time scales are nested like matryoshka dolls. The two models have similar mathematical formulations, with some modifications to meet the constraints of each time frame. The first level provides an optimization solution for policy-makers to determine contracted annual releases to different uses with a prescribed reliability; the second level is a within-period (e.g., yearly) operation optimization scheme that allocates the contracted annual release on a sub-period (e.g., monthly) basis, with additional benefit for extra releases and a penalty for failures. The model maximizes the net benefit of irrigation, hydropower generation and flood control in each of the periods. The model design thus facilitates the consistent application of weather and climate forecasts to improve operations of reservoir systems.
Castle, L; Hart, A; Holmes, M J; Oldring, P K T
2006-05-01
A two-dimensional probabilistic model was constructed to estimate the short-term dietary exposure of UK consumers to any generalized migrant from coated light metal food packaging. Using three UK National Dietary and Nutrition Surveys (NDNS) comprising 4-7-day dietary surveys for different age and gender groups, actual body weights and survey years, a sample representative of the dietary consumption of the UK population was obtained, comprising around 4,200 food items. Interrogation of the raw data showed that the per capita consumption of food and beverage for an adult was 2.9 kg per person day(-1), which is comparable with the US FDA value of 3.0 kg. The packaging type of each food item was assigned from the survey descriptions or by sampling from distributions based upon market share information and expert judgement. Each food item was assigned to the relevant food simulant: A (aqueous), B (acidic) or D (fatty), so that simulant migration data could be used. The exposure model was used to evaluate exposure for a given level of migration and, conversely, the level of migration that could be tolerated whilst keeping within a target threshold exposure level. As examples, migration at 10 microg dm(-2) into fatty foods alone resulted in an exposure ranging from 0.06 to 0.22 microg kg(-1) body (actual) weight day(-1), depending on the scenario. The model revealed that if migration from metal coatings was only into fatty foods, migration in the range 1.83-4.95 microg dm(-2) (97.5th percentile, depending on the scenario) would give an exposure of less than 1.5 microg per person day(-1). This is a toxicological threshold limit used in the USA. If migration into simulants A and B is also considered to be at the same level as that for simulant D, then the level of migration at which the threshold is reached is, not surprisingly, lower (0.64-0.87 microg dm(-2)) than if migration were only into fatty foods. In this case, clearly the main contributors to the exposure were
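The core of such a probabilistic exposure model is a Monte Carlo loop over simulated consumers. The sketch below is a crude stand-in, not the NDNS-based model: all distributions (body weight, items per day, packaging probability, contact area) are invented placeholders, and it reports one upper percentile of exposure per kg body weight for a given migration level.

```python
import random

def simulate_exposure(n_people, migration_ug_dm2, seed=0):
    """Toy two-dimensional exposure model: per simulated person, sample a
    day's food items, decide which are in coated metal packaging, assign
    a contact area, and accumulate migrant intake per kg body weight.
    Returns the 97.5th percentile exposure (ug/kg bw/day).
    All distributions are invented placeholders."""
    rng = random.Random(seed)
    exposures = []
    for _ in range(n_people):
        body_weight = max(rng.gauss(70.0, 12.0), 30.0)  # kg, placeholder
        n_items = rng.randint(5, 15)                    # items per day
        intake = 0.0
        for _ in range(n_items):
            if rng.random() < 0.3:                      # coated-metal pack
                area_dm2 = rng.uniform(0.5, 4.0)        # contact area
                intake += migration_ug_dm2 * area_dm2   # migrant, ug
        exposures.append(intake / body_weight)
    exposures.sort()
    return exposures[int(0.975 * n_people)]

p975 = simulate_exposure(10000, migration_ug_dm2=10.0)
print(p975 > 0)  # True
```

Running the model in reverse, as the paper does, amounts to searching for the migration level whose simulated percentile exposure just meets a target threshold.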
Stochastic theory of interfacial enzyme kinetics: A kinetic Monte Carlo study
International Nuclear Information System (INIS)
Das, Biswajit; Gangopadhyay, Gautam
2012-01-01
Graphical abstract: A stochastic theory of interfacial enzyme kinetics is formulated. Numerical results for the macroscopic phenomenon of lag-burst kinetics are obtained using a kinetic Monte Carlo approach to single-enzyme activity. Highlights: ► An enzyme attaches to the fluid-state phospholipid molecules on the Langmuir monolayer. ► Through diffusion, the enzyme molecule reaches the gel-fluid interface. ► After hydrolysing a phospholipid molecule it predominantly leaves the surface in the lag phase. ► In the burst phase the enzyme is strictly attached to the surface with a scooting mode of motion. - Abstract: In the spirit of Gillespie's stochastic approach we have formulated a theory to explore the progress of interfacial enzyme kinetics at the single-enzyme level, which is ultimately used to obtain the ensemble-averaged macroscopic feature, lag-burst kinetics. We provide a theory of the transition from lag-phase to burst-phase kinetics by considering the gradual development of electrostatic interaction between the positively charged enzyme and the negatively charged product molecules deposited on the phospholipid surface. It is shown that the different diffusion time scales of the enzyme over the fluid and product regions are responsible for the memory effect in the correlation of successive turnover events of the hopping mode in the single-trajectory analysis, which in turn is reflected in the non-Gaussian distribution of turnover times in the macroscopic kinetics of the lag phase, unlike the burst phase kinetics.
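A minimal single-enzyme simulation conveys how a lag-to-burst transition emerges from product-dependent kinetics. The sketch below is not the paper's model: it simply lets the turnover rate grow linearly with accumulated product (a crude stand-in for the electrostatic enzyme-product attraction), so exponentially distributed waiting times shorten over the trajectory. The rate constants are invented.

```python
import math
import random

def turnover_times(n_turnovers, k0=0.05, alpha=0.02, seed=0):
    """Single-enzyme kinetic Monte Carlo caricature: the turnover rate
    k = k0 + alpha * (products so far) rises as product accumulates, so
    early waiting times are long (lag) and late ones short (burst)."""
    rng = random.Random(seed)
    times, t, products = [], 0.0, 0
    for _ in range(n_turnovers):
        k = k0 + alpha * products          # product-enhanced rate
        t += -math.log(rng.random()) / k   # exponential waiting time
        times.append(t)
        products += 1
    return times

times = turnover_times(200)
early = times[9] / 10                      # mean pace, first 10 turnovers
late = (times[-1] - times[-11]) / 10       # mean pace, last 10 turnovers
print(late < early)  # later turnovers come faster: burst follows lag
```

Averaging many such trajectories would give the macroscopic lag-burst progress curve, which is the route the paper takes from single-enzyme statistics to ensemble kinetics.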
Analytical study on the criticality of the stochastic optimal velocity model
International Nuclear Information System (INIS)
Kanai, Masahiro; Nishinari, Katsuhiro; Tokihiro, Tetsuji
2006-01-01
In recent works, we have proposed a stochastic cellular automaton model of traffic flow that connects two exactly solvable stochastic processes, i.e., the asymmetric simple exclusion process and the zero-range process, with an additional parameter. It can also be regarded as an extended version of the optimal velocity model, and it shows particularly notable properties. In this paper, we report that when the optimal velocity function is taken to be a step function, the entire flux-density graph (i.e., the fundamental diagram) can be estimated. We first find that the fundamental diagram consists of two line segments resembling an inverted-λ form, and then identify their end-points from the microscopic behaviour of vehicles. Notably, by using a microscopic parameter which indicates a driver's sensitivity to the traffic situation, we give an explicit formula for the critical point at which a traffic jam phase arises. We also compare these analytical results with those of the optimal velocity model, and point out the crucial differences between them.
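One of the two solvable limits mentioned above, the asymmetric simple exclusion process (ASEP), is easy to simulate directly. The sketch below is a generic random-sequential ASEP on a ring, not the interpolating model of the paper; it measures flux at two densities to show the characteristic rise of the fundamental diagram toward half filling. Parameters are invented.

```python
import random

def asep_flux(n_sites, density, p_hop, n_steps, seed=0):
    """Random-sequential ASEP on a ring: pick a random site; if occupied
    and the next site is empty, hop forward with probability p_hop.
    Returns the measured flux in hops per attempted update."""
    rng = random.Random(seed)
    n_cars = int(density * n_sites)
    occ = [True] * n_cars + [False] * (n_sites - n_cars)
    rng.shuffle(occ)
    hops = 0
    for _ in range(n_steps):
        i = rng.randrange(n_sites)
        j = (i + 1) % n_sites
        if occ[i] and not occ[j] and rng.random() < p_hop:
            occ[i], occ[j] = False, True
            hops += 1
    return hops / n_steps

f_low = asep_flux(200, 0.2, 1.0, 200000)
f_mid = asep_flux(200, 0.5, 1.0, 200000)
print(f_mid > f_low)  # flux peaks near density 1/2 for the ASEP
```

For this update rule the stationary flux is p·ρ(1−ρ), a smooth parabola; the inverted-λ fundamental diagram of the paper's model arises only once the optimal-velocity step function and the interpolation parameter are introduced.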