WorldWideScience

Sample records for efficient stochastic simulations

  1. HRSSA – Efficient hybrid stochastic simulation for spatially homogeneous biochemical reaction networks

    International Nuclear Information System (INIS)

    Marchetti, Luca; Priami, Corrado; Thanh, Vo Hong

    2016-01-01

    This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating performance and accuracy of HRSSA against other state-of-the-art algorithms.
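The propensity-bound idea behind RSSA and HRSSA can be illustrated with a minimal sketch, assuming a hypothetical two-channel birth-death system and hand-picked bounds (this is not the authors' implementation): a candidate reaction is drawn from upper propensity bounds that remain valid while the state stays inside a fluctuation interval, and a rejection test against the exact propensity corrects the selection.

```python
import random

# Hypothetical two-channel system (illustration only):
# R1: A -> 0 with propensity c1*A; R2: 0 -> A with constant propensity c2.
c1, c2 = 0.1, 1.0

def propensities(a_count):
    return [c1 * a_count, c2]

def rssa_select(a_count, lo, hi, rng):
    """Rejection-based selection: draw a candidate channel from upper
    propensity bounds valid for any state in [lo, hi], then accept it with
    probability (exact propensity / bound); on rejection, draw again."""
    ub = [c1 * hi, c2]               # propensity upper bounds on [lo, hi]
    total_ub = sum(ub)
    while True:
        r = rng.random() * total_ub
        j = 0 if r < ub[0] else 1    # candidate channel
        if rng.random() * ub[j] <= propensities(a_count)[j]:
            return j                 # accepted: channel j fires next

fired = rssa_select(10, 5, 15, random.Random(1))
print(fired in (0, 1))               # True
```

The point of the construction is that the bounds, and hence the selection data structures, need to be recomputed only when a species count leaves its fluctuation interval, which is what reduces the number of propensity updates.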

  2. HRSSA – Efficient hybrid stochastic simulation for spatially homogeneous biochemical reaction networks

    Energy Technology Data Exchange (ETDEWEB)

    Marchetti, Luca, E-mail: marchetti@cosbi.eu [The Microsoft Research – University of Trento Centre for Computational and Systems Biology (COSBI), Piazza Manifattura, 1, 38068 Rovereto (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research – University of Trento Centre for Computational and Systems Biology (COSBI), Piazza Manifattura, 1, 38068 Rovereto (Italy); University of Trento, Department of Mathematics (Italy); Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research – University of Trento Centre for Computational and Systems Biology (COSBI), Piazza Manifattura, 1, 38068 Rovereto (Italy)

    2016-07-15

    This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating performance and accuracy of HRSSA against other state-of-the-art algorithms.

  3. An efficient parallel stochastic simulation method for analysis of nonviral gene delivery systems

    KAUST Repository

    Kuwahara, Hiroyuki

    2011-01-01

    Gene therapy has great potential to become an effective treatment for a wide variety of diseases. One of the main challenges to making gene therapy practical in clinical settings is the development of efficient and safe mechanisms to deliver foreign DNA molecules into the nucleus of target cells. Several computational and experimental studies have shown that the design process of synthetic gene transfer vectors can be greatly enhanced by computational modeling and simulation. This paper proposes a novel, effective parallelization of the stochastic simulation algorithm (SSA) for pharmacokinetic models that characterize the rate-limiting, multi-step processes of intracellular gene delivery. While efficient parallelizations of the SSA are still an open problem in a general setting, the proposed parallel simulation method is able to substantially accelerate the next reaction selection scheme and the reaction update scheme in the SSA by exploiting and decomposing the structures of stochastic gene delivery models. This makes computationally intensive analyses such as parameter optimization and gene dosage control for specific cell types, gene vectors, and transgene expression stability substantially more practical than they would otherwise be with the standard SSA. Here, we translated the nonviral gene delivery model based on mass-action kinetics by Varga et al. [Molecular Therapy, 4(5), 2001] into a more realistic model that captures intracellular fluctuations based on stochastic chemical kinetics, and as a case study we applied our parallel simulation to this stochastic model. Our results show that our simulation method is able to increase the efficiency of statistical analysis by at least 50% in various settings. © 2011 ACM.

  4. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies

    Directory of Open Access Journals (Sweden)

    Hepburn Iain

    2012-05-01

    Background Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion STEPS simulates
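The way voxel-based simulators like STEPS fold diffusion into the SSA can be sketched generically: a hop of one molecule between adjacent voxels is treated as a first-order "reaction" whose propensity is the diffusion rate constant times the source-voxel count. A toy two-voxel version (not the STEPS engine, and with an arbitrary rate constant) is:

```python
import math
import random

def two_voxel_diffusion(n_molecules, d, t_end, seed=0):
    """SSA in which the only events are hops between two adjacent voxels;
    each hop direction has propensity d * (count in the source voxel)."""
    rng = random.Random(seed)
    n = [n_molecules, 0]             # all molecules start in the left voxel
    t = 0.0
    while True:
        a = [d * n[0], d * n[1]]     # hop propensities: left->right, right->left
        a0 = a[0] + a[1]
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        if t > t_end:
            break
        src = 0 if rng.random() * a0 < a[0] else 1
        n[src] -= 1                  # move one molecule src -> other voxel
        n[1 - src] += 1
    return n

n = two_voxel_diffusion(200, d=1.0, t_end=20.0, seed=9)
print(sum(n) == 200)                 # True: molecules are conserved
```

At long times the counts fluctuate around an even split, which is the discrete analogue of the diffusion equation's equilibrium; a real mesh simply has one such channel per tetrahedron face.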

  5. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    OpenAIRE

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating the technical efficiency of decision-making units are of great importance in applied production economics. This paper estimates technical efficiency from the stochastic frontier model using the Jondrow and the Battese and Coelli approaches. In order to compare alternative methods, simulated data with sample sizes of 60 and 200 are generated from a stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...

  6. FERN - a Java framework for stochastic simulation and evaluation of reaction networks.

    Science.gov (United States)

    Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf

    2008-08-29

    Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either a) do not provide the most efficient simulation algorithms and are difficult to extend, b) cannot be easily integrated into other applications or c) do not allow the user to monitor and intervene in the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and to observe the simulation progress in a reaction network in real-time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava. Second, it can be used in a straightforward way both as a stand-alone program and within new

  7. Conditional Stochastic Models in Reduced Space: Towards Efficient Simulation of Tropical Cyclone Precipitation Patterns

    Science.gov (United States)

    Dodov, B.

    2017-12-01

    Stochastic simulation of realistic and statistically robust patterns of Tropical Cyclone (TC) induced precipitation is a challenging task. It is even more challenging in a catastrophe modeling context, where tens of thousands of typhoon seasons need to be simulated in order to provide a complete view of flood risk. Ultimately, one could run a coupled global climate model and regional Numerical Weather Prediction (NWP) model, but this approach is not feasible in the catastrophe modeling context and, most importantly, may not provide TC track patterns consistent with observations. Rather, we propose to leverage NWP output for the observed TC precipitation patterns (in terms of downscaled reanalysis 1979-2015) collected on a Lagrangian frame along the historical TC tracks and reduced to the leading spatial principal components of the data. The reduced data from all TCs is then grouped according to timing, storm evolution stage (developing, mature, dissipating, ETC transitioning) and central pressure and used to build a dictionary of stationary (within a group) and non-stationary (for transitions between groups) covariance models. Provided that the stochastic storm tracks with all the parameters describing the TC evolution are already simulated, a sequence of conditional samples from the covariance models chosen according to the TC characteristics at a given moment in time are concatenated, producing a continuous non-stationary precipitation pattern in a Lagrangian framework. The simulated precipitation for each event is finally distributed along the stochastic TC track and blended with a non-TC background precipitation using a data assimilation technique. The proposed framework provides a means of efficient simulation (10000 seasons simulated in a couple of days) and robust typhoon precipitation patterns consistent with observed regional climate and visually indistinguishable from high resolution NWP output. The framework is used to simulate a catalog of 10000 typhoon

  8. STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB

    KAUST Repository

    Klingbeil, G.

    2011-02-25

    Motivation: The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new software tool STOCHSIMGPU that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/chemical reaction systems and show that significant gains in efficiency can be made. It is integrated into MATLAB and works with the Systems Biology Toolbox 2 (SBTOOLBOX2) for MATLAB. Results: The GPU-based parallel implementation of the Gillespie stochastic simulation algorithm (SSA), the logarithmic direct method (LDM) and the next reaction method (NRM) is approximately 85 times faster than the sequential implementation of the NRM on a central processing unit (CPU). Using our software does not require any changes to the user's models, since it acts as a direct replacement of the stochastic simulation software of the SBTOOLBOX2. © The Author 2011. Published by Oxford University Press. All rights reserved.

  9. AESS: Accelerated Exact Stochastic Simulation

    Science.gov (United States)

    Jenkins, David D.; Peterson, Gregory D.

    2011-12-01

    The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results. Program summary: Program title: AESS. Catalogue identifier: AEJW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: University of Tennessee copyright agreement. No. of lines in distributed program, including test data, etc.: 10 861. No. of bytes in distributed program, including test data, etc.: 394 631. Distribution format: tar.gz. Programming language: C for processors, CUDA for NVIDIA GPUs. Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators. Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS. Classification: 3, 16.12. Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME. Solution
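For readers unfamiliar with the baseline that AESS accelerates, Gillespie's direct method fits in a few lines. The following is a generic textbook sketch on a hypothetical birth-death model, not the AESS code:

```python
import math
import random

def gillespie_direct(x0, stoich, propensity, t_end, seed=0):
    """Textbook Gillespie direct method: draw an exponential waiting time
    with rate a0 = sum of propensities, then fire channel j with
    probability a_j / a0 and apply its stoichiometry vector."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while t < t_end:
        a = propensity(x)
        a0 = sum(a)
        if a0 == 0.0:
            break                                 # no channel can fire
        t += -math.log(1.0 - rng.random()) / a0   # time to next reaction
        if t > t_end:
            break
        r, acc = rng.random() * a0, 0.0
        for j, aj in enumerate(a):                # select channel j
            acc += aj
            if r <= acc:
                break
        for i, v in enumerate(stoich[j]):         # apply state change
            x[i] += v
    return x

# Hypothetical birth-death process: 0 -> A at rate 10, A -> 0 at rate 0.1*A;
# the stationary distribution is Poisson with mean 100.
final = gillespie_direct([0], [[+1], [-1]], lambda x: [10.0, 0.1 * x[0]], 200.0)
print(final[0] >= 0)                              # True
```

Every exact accelerated variant (logarithmic direct method, next reaction method, composition-rejection) changes only how the waiting time and channel are selected, not the distribution of trajectories.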

  10. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks

    Directory of Open Access Journals (Sweden)

    Elston Timothy C

    2004-03-01

    Background Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
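The chemical Langevin treatment used for continuous species can be integrated with a plain Euler-Maruyama step. The sketch below is a generic illustration on a hypothetical one-species birth-death model, not the BioNetS solver:

```python
import math
import random

def cle_step(x, props, stoich, dt, rng):
    """One Euler-Maruyama step of the chemical Langevin equation:
    dx = sum_j v_j*a_j(x)*dt + sum_j v_j*sqrt(a_j(x))*dW_j."""
    a = props(x)
    dx = 0.0
    for v, aj in zip(stoich, a):
        dW = rng.gauss(0.0, math.sqrt(dt))        # Wiener increment
        dx += v * aj * dt + v * math.sqrt(max(aj, 0.0)) * dW
    return max(x + dx, 0.0)          # crude clamp: populations stay nonnegative

# Hypothetical birth-death species: 0 -> A at rate 10, A -> 0 at rate 0.1*A,
# so the stationary mean is 100 molecules.
stoich = [+1, -1]
props = lambda x: [10.0, 0.1 * x]
rng = random.Random(3)
x = 100.0
for _ in range(1000):                # integrate to t = 10 with dt = 0.01
    x = cle_step(x, props, stoich, 0.01, rng)
print(0.0 <= x <= 1000.0)            # True
```

This is the continuous counterpart of the Gillespie update: the drift reproduces the mean-field kinetics, while the sqrt(propensity) noise terms reproduce the intrinsic fluctuations that the abstract emphasizes.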

  11. Simulation of Stochastic Loads for Fatigue Experiments

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Brincker, Rune

    1989-01-01

    A simple direct simulation method for stochastic fatigue-load generation is described in this paper. The simulation method is based on the assumption that only the peaks of the load process significantly affect the fatigue life. The method requires the conditional distribution functions of load ranges given the last peak values. Analytical estimates of these distribution functions are presented in the paper and compared with estimates based on a more accurate simulation method. In the more accurate simulation method, samples at equidistant times are generated by approximating the stochastic load process by a Markov process. Two different spectra from two tubular joints in an offshore structure (one narrow banded and one wide banded) are considered in an example. The results show that the simple direct method is quite efficient and results in a simulation speed of about 3000 load cycles per second.

  12. Simulation of Stochastic Loads for Fatigue Experiments

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Brincker, Rune

    A simple direct simulation method for stochastic fatigue load generation is described in this paper. The simulation method is based on the assumption that only the peaks of the load process significantly affect the fatigue life. The method requires the conditional distribution functions of load ranges given the last peak values. Analytical estimates of these distribution functions are presented in the paper and compared with estimates based on a more accurate simulation method. In the more accurate simulation method, samples at equidistant times are generated by approximating the stochastic load process by a Markov process. Two different spectra from two tubular joints in an offshore structure (one narrow banded and one wide banded) are considered in an example. The results show that the simple direct method is quite efficient and results in a simulation speed of about 3000 load cycles per second.

  13. STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB

    KAUST Repository

    Klingbeil, G.; Erban, R.; Giles, M.; Maini, P. K.

    2011-01-01

    Motivation: The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new

  14. Multiscale Hy3S: Hybrid stochastic simulation for supercomputers

    Directory of Open Access Journals (Sweden)

    Kaznessis Yiannis N

    2006-02-01

    create biological systems and analyze data. We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Conclusion Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.

  15. STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB.

    Science.gov (United States)

    Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K

    2011-04-15

    The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new software tool STOCHSIMGPU that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/chemical reaction systems and show that significant gains in efficiency can be made. It is integrated into MATLAB and works with the Systems Biology Toolbox 2 (SBTOOLBOX2) for MATLAB. The GPU-based parallel implementation of the Gillespie stochastic simulation algorithm (SSA), the logarithmic direct method (LDM) and the next reaction method (NRM) is approximately 85 times faster than the sequential implementation of the NRM on a central processing unit (CPU). Using our software does not require any changes to the user's models, since it acts as a direct replacement of the stochastic simulation software of the SBTOOLBOX2. The software is open source under the GPL v3 and available at http://www.maths.ox.ac.uk/cmb/STOCHSIMGPU. The web site also contains supplementary information. klingbeil@maths.ox.ac.uk Supplementary data are available at Bioinformatics online.

  16. GillesPy: A Python Package for Stochastic Model Building and Simulation

    OpenAIRE

    Abel, John H.; Drawert, Brian; Hellander, Andreas; Petzold, Linda R.

    2016-01-01

    GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithms (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy to understand, action-oriented programming interface. Here, we descr...

  17. Stochastic efficiency: five case studies

    International Nuclear Information System (INIS)

    Proesmans, Karel; Broeck, Christian Van den

    2015-01-01

    Stochastic efficiency is evaluated in five case studies: driven Brownian motion, effusion with a thermo-chemical and thermo-velocity gradient, a quantum dot and a model for information to work conversion. The salient features of stochastic efficiency, including the maximum of the large deviation function at the reversible efficiency, are reproduced. The approach to and extrapolation into the asymptotic time regime are documented. (paper)

  18. A low-bias simulation scheme for the SABR stochastic volatility model

    NARCIS (Netherlands)

    B. Chen (Bin); C.W. Oosterlee (Cornelis); J.A.M. van der Weide

    2012-01-01

    The Stochastic Alpha Beta Rho Stochastic Volatility (SABR-SV) model is widely used in the financial industry for the pricing of fixed income instruments. In this paper we develop a low-bias simulation scheme for the SABR-SV model, which deals efficiently with (undesired)

  19. Stochastic congestion management in power markets using efficient scenario approaches

    International Nuclear Information System (INIS)

    Esmaili, Masoud; Amjady, Nima; Shayanfar, Heidar Ali

    2010-01-01

    Congestion management in electricity markets is traditionally performed using deterministic values of system parameters assuming a fixed network configuration. In this paper, a stochastic programming framework is proposed for congestion management considering the power system uncertainties comprising outage of generating units and transmission branches. The Forced Outage Rate of equipment is employed in the stochastic programming. Using the Monte Carlo simulation, possible scenarios of power system operating states are generated and a probability is assigned to each scenario. The performance of the ordinary as well as Lattice rank-1 and rank-2 Monte Carlo simulations is evaluated in the proposed congestion management framework. As a tradeoff between computation time and accuracy, scenario reduction based on the standard deviation of accepted scenarios is adopted. The stochastic congestion management solution is obtained by aggregating individual solutions of accepted scenarios. Congestion management using the proposed stochastic framework provides a more realistic solution compared with traditional deterministic solutions. Results of testing the proposed stochastic congestion management on the 24-bus reliability test system indicate the efficiency of the proposed framework.
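Scenario generation from Forced Outage Rates reduces to independent Bernoulli draws per component, with an exact probability attached to each sampled operating state. A minimal illustration with hypothetical outage rates (not the paper's 24-bus test system):

```python
import random

def sample_scenario(outage_rates, rng):
    """Draw one operating state: component i is out with probability FOR_i."""
    return tuple(rng.random() < p for p in outage_rates)

def scenario_probability(state, outage_rates):
    """Exact probability of a given on/off pattern of components."""
    prob = 1.0
    for out, p in zip(state, outage_rates):
        prob *= p if out else (1.0 - p)
    return prob

# Three generating units with hypothetical forced outage rates.
FOR = [0.02, 0.05, 0.10]
rng = random.Random(7)
N = 20000
counts = {}
for _ in range(N):
    s = sample_scenario(FOR, rng)
    counts[s] = counts.get(s, 0) + 1

all_up = (False, False, False)
empirical = counts[all_up] / N
exact = scenario_probability(all_up, FOR)    # 0.98 * 0.95 * 0.90 = 0.8379
print(abs(empirical - exact) < 0.02)
```

In the stochastic congestion-management setting, each sampled state defines one optimization scenario, and the exact probabilities are what allow low-probability scenarios to be pruned and the per-scenario solutions to be aggregated into a weighted overall solution.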

  20. HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks

    Directory of Open Access Journals (Sweden)

    Luca Marchetti

    2017-01-01

    HSimulator is a multithreaded simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementations of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies, including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA). HRSSA, the fastest hybrid algorithm to date, allows for an efficient simulation of the models while ensuring the exact simulation of a subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA).

  1. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
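One of the simple sample-generation algorithms the report alludes to is the spectral representation of a stationary Gaussian process: superpose cosines whose amplitudes follow the power spectral density and whose phases are independent and uniform. A generic sketch with an illustrative low-pass spectrum (the spectrum and parameters are assumptions, not taken from the report):

```python
import math
import random

def spectral_sample(psd, w_max, n_terms, times, rng):
    """Sample a zero-mean stationary Gaussian process via the spectral
    representation: X(t) = sum_k A_k cos(w_k t + phi_k), with amplitudes
    A_k = sqrt(2 * S(w_k) * dw) and independent uniform phases phi_k."""
    dw = w_max / n_terms
    terms = []
    for k in range(n_terms):
        wk = (k + 0.5) * dw                       # midpoint frequencies
        terms.append((math.sqrt(2.0 * psd(wk) * dw), wk,
                      rng.uniform(0.0, 2.0 * math.pi)))
    return [sum(a * math.cos(w * t + p) for a, w, p in terms) for t in times]

# Illustrative one-sided low-pass spectrum: S(w) = 1 for w < 1, so the
# process variance is the integral of S over [0, w_max], i.e. 1.
psd = lambda w: 1.0 if w < 1.0 else 0.0
xs = spectral_sample(psd, 1.0, 200, [0.1 * i for i in range(3000)],
                     random.Random(11))
var = sum(v * v for v in xs) / len(xs)
print(0.3 < var < 2.0)               # sample variance is close to 1
```

Each call with a fresh random state yields an independent sample path; such paths are exactly the kind of input the report describes feeding into deterministic simulation codes as random loads or boundary conditions.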

  2. Mathematical analysis and algorithms for efficiently and accurately implementing stochastic simulations of short-term synaptic depression and facilitation

    Directory of Open Access Journals (Sweden)

    Mark D McDonnell

    2013-05-01

    The release of neurotransmitter vesicles after arrival of a pre-synaptic action potential at cortical synapses is known to be a stochastic process, as is the availability of vesicles for release. These processes are known to also depend on the recent history of action-potential arrivals, and this can be described in terms of time-varying probabilities of vesicle release. Mathematical models of such synaptic dynamics frequently are based only on the mean number of vesicles released by each pre-synaptic action potential, since if it is assumed there are sufficiently many vesicle sites, then variance is small. However, it has been shown recently that variance across sites can be significant for neuron and network dynamics, and this suggests the potential importance of studying short-term plasticity using simulations that do generate trial-to-trial variability. Therefore, in this paper we study several well-known conceptual models for stochastic availability and release. We state explicitly the random variables that these models describe and propose efficient algorithms for accurately implementing stochastic simulations of these random variables in software or hardware. Our results are complemented by mathematical analysis and statement of pseudo-code algorithms.
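The trial-to-trial variability the authors argue for can be generated directly by treating each release site as a Bernoulli trial. A minimal sketch of a generic stochastic availability-and-release (depression) model, with hypothetical parameter values and not the paper's optimized algorithms:

```python
import random

def spike_response(available, n_sites, p_release, p_recover_per_isi, rng):
    """On a spike, each occupied site releases its vesicle independently with
    probability p_release; before the next spike, each empty site recovers
    with probability p_recover_per_isi. Returns (released, occupied sites)."""
    released = sum(1 for _ in range(available) if rng.random() < p_release)
    available -= released
    recovered = sum(1 for _ in range(n_sites - available)
                    if rng.random() < p_recover_per_isi)
    return released, available + recovered

rng = random.Random(5)
n_sites, occ = 10, 10
train = []
for _ in range(8):                   # 8-spike train: depression builds up
    rel, occ = spike_response(occ, n_sites, 0.5, 0.2, rng)
    train.append(rel)
print(all(0 <= r <= n_sites for r in train))   # True
```

The per-spike release counts are Binomial(occupied, p_release) draws, so repeated runs produce exactly the across-site variance that mean-field models discard.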

  3. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    Energy Technology Data Exchange (ETDEWEB)

    Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve, in high-dimensional spaces, stochastic problems with limited smoothness, even in the presence of discontinuities.

  4. GillesPy: A Python Package for Stochastic Model Building and Simulation.

    Science.gov (United States)

    Abel, John H; Drawert, Brian; Hellander, Andreas; Petzold, Linda R

    2016-09-01

    GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithms (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy to understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community.

  5. SELANSI: a toolbox for simulation of stochastic gene regulatory networks.

    Science.gov (United States)

    Pájaro, Manuel; Otero-Muras, Irene; Vázquez, Carlos; Alonso, Antonio A

    2018-03-01

    Gene regulation is inherently stochastic. In many applications concerning Systems and Synthetic Biology, such as the reverse engineering and the de novo design of genetic circuits, stochastic effects (yet potentially crucial) are often neglected due to the high computational cost of stochastic simulations. With advances in these fields there is an increasing need of tools providing accurate approximations of the stochastic dynamics of gene regulatory networks (GRNs) with reduced computational effort. This work presents SELANSI (SEmi-LAgrangian SImulation of GRNs), a software toolbox for the simulation of stochastic multidimensional gene regulatory networks. SELANSI exploits intrinsic structural properties of gene regulatory networks to accurately approximate the corresponding Chemical Master Equation with a partial integro-differential equation that is solved by a semi-Lagrangian method with high efficiency. Networks under consideration might involve multiple genes with self and cross regulations, in which genes can be regulated by different transcription factors. Moreover, the validity of the method is not restricted to a particular type of kinetics. The tool offers total flexibility regarding network topology, kinetics and parameterization, as well as simulation options. SELANSI runs under the MATLAB environment, and is available under GPLv3 license at https://sites.google.com/view/selansi. antonio@iim.csic.es. © The Author(s) 2017. Published by Oxford University Press.

  6. A retrodictive stochastic simulation algorithm

    International Nuclear Information System (INIS)

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
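The continuous-time master-equation machinery of the paper is not reproduced here; a discrete-time analogue of the retrodictive idea — Bayes' rule applied to a Markov transition matrix to recover a posterior over initial states from an observed final state — can be sketched as follows (the two-state mutation matrix and all names are hypothetical):

```python
def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mat_pow(P, n):
    # Identity matrix, then repeated multiplication
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        R = mat_mul(R, P)
    return R

def retrodict(prior, P, n_steps, observed):
    """Posterior over initial states i given the state observed after n_steps:
    p(i | observed) is proportional to prior[i] * (P^n)[i][observed]."""
    Pn = mat_pow(P, n_steps)
    unnorm = [prior[i] * Pn[i][observed] for i in range(len(prior))]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical two-state mutation model (state 0 = 'A', state 1 = 'G').
P = [[0.9, 0.1],
     [0.2, 0.8]]
post = retrodict([0.5, 0.5], P, n_steps=5, observed=1)
```

Observing state 1 after five steps shifts the posterior toward having started in state 1, since that state is the stickier of the two in this hypothetical matrix.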

  7. Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Xiao [ORNL; Dong, Jin [ORNL; Djouadi, Seddik M [ORNL; Nutaro, James J [ORNL; Kuruganti, Teja [ORNL

    2015-01-01

    The key goal in energy efficient buildings is to reduce energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ a constrained Stochastic Linear Quadratic Control (cSLQC) by minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.

  8. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
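The reformulation referred to above is the random-time-change representation, in which each reaction channel is driven by its own independent unit-rate ("standardized") Poisson process. Anderson's modified next-reaction method simulates exactly this representation; a sketch for a birth-death model (function and parameter names hypothetical):

```python
import math
import random

def mnrm_birth_death(x0, k, gamma, t_end, seed=1):
    """Birth-death process via the modified next-reaction method: channel 0
    (birth, rate k) and channel 1 (death, rate gamma*x) are each driven by an
    independent unit-rate Poisson process with its own internal clock."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    T = [0.0, 0.0]                                     # internal time consumed
    P = [rng.expovariate(1.0), rng.expovariate(1.0)]   # next internal firing times
    fires = [0, 0]
    while True:
        a = [k, gamma * x]                             # propensities
        dts = [(P[j] - T[j]) / a[j] if a[j] > 0 else math.inf for j in range(2)]
        mu = 0 if dts[0] <= dts[1] else 1              # channel firing first
        if t + dts[mu] > t_end:
            break
        t += dts[mu]
        for j in range(2):
            if a[j] > 0:
                T[j] += a[j] * dts[mu]                 # advance internal clocks
        x += 1 if mu == 0 else -1
        fires[mu] += 1
        P[mu] += rng.expovariate(1.0)                  # next jump of stream mu
    return x, fires

x_final, fires = mnrm_birth_death(10, 2.0, 0.1, 50.0)
```

Because each channel owns a separate stream, individual realizations per channel are identifiable, which is the property the variance decomposition above exploits. Copy number is conserved event by event: the final state equals the initial state plus births minus deaths.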

  9. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  10. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  11. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  12. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient, with low, constant-in-time variance, and consequently are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  13. New "Tau-Leap" Strategy for Accelerated Stochastic Simulation.

    Science.gov (United States)

    Ramkrishna, Doraiswami; Shu, Che-Chi; Tran, Vu

    2014-12-10

    The "Tau-Leap" strategy for stochastic simulations of chemical reaction systems due to Gillespie and co-workers has had considerable impact on various applications. This strategy is reexamined with Chebyshev's inequality for random variables, which provides a rigorous probabilistic basis for a measured τ-leap, thus adding significantly to simulation efficiency. It is also shown that existing strategies for choosing simulation times have no probabilistic assurance that they satisfy the τ-leap criterion, whereas the use of Chebyshev's inequality leads to a specified degree of certainty with which the τ-leap criterion is satisfied. This reduces the loss of sample paths that do not comply with the τ-leap criterion. The performance of the present algorithm is assessed with respect to one discussed by Cao et al. (J. Chem. Phys. 2006, 124, 044109), a second pertaining to the binomial leap (Tian and Burrage, J. Chem. Phys. 2004, 121, 10356; Chatterjee et al., J. Chem. Phys. 2005, 122, 024112; Peng et al., J. Chem. Phys. 2007, 126, 224109), and a third regarding the midpoint Poisson leap (Peng et al., 2007; Gillespie, J. Chem. Phys. 2001, 115, 1716). The performance assessment is made by estimating the error in the histogram measured against that obtained with the so-called stochastic simulation algorithm. It is shown that the current algorithm displays notably less histogram error than its predecessor for a fixed computation time and, conversely, less computation time for a fixed accuracy. This computational advantage is an asset in the repetitive calculations essential for modeling stochastic systems. The importance of stochastic simulations derives from diverse areas of application in the physical and biological sciences, process systems, and economics; computational improvements such as those reported herein are therefore of considerable significance.
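The Chebyshev-based τ selection of this paper is not reproduced here; the basic strategy being accelerated — firing a Poisson-distributed batch of reactions per fixed leap instead of one event at a time — can be sketched for a unimolecular decay (the `poisson` and `tau_leap_decay` helpers are hypothetical, standard library only):

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler; adequate for the small per-leap means used here."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap_decay(x0, c, tau, t_end, seed=2):
    """Fixed-step tau-leaping for the decay reaction A -> 0 with propensity c*x:
    each leap applies a Poisson(c*x*tau) number of firings at once."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end and x > 0:
        x = max(0, x - poisson(rng, c * x * tau))   # clamp to avoid negative counts
        t += tau
    return x

remaining = tau_leap_decay(1000, 0.1, 0.01, 10.0)
```

For x0 = 1000, c = 0.1 and t_end = 10, the mean remaining count should sit near 1000·e^(-1) ≈ 368, which a small ensemble of runs confirms.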

  14. Stochastic simulation of enzyme-catalyzed reactions with disparate timescales.

    Science.gov (United States)

    Barik, Debashis; Paul, Mark R; Baumann, William T; Cao, Yang; Tyson, John J

    2008-10-01

    Many physiological characteristics of living cells are regulated by protein interaction networks. Because the total numbers of these protein species can be small, molecular noise can have significant effects on the dynamical properties of a regulatory network. Computing these stochastic effects is made difficult by the large timescale separations typical of protein interactions (e.g., complex formation may occur in fractions of a second, whereas catalytic conversions may take minutes). Exact stochastic simulation may be very inefficient under these circumstances, and methods for speeding up the simulation without sacrificing accuracy have been widely studied. We show that the "total quasi-steady-state approximation" for enzyme-catalyzed reactions provides a useful framework for efficient and accurate stochastic simulations. The method is applied to three examples: a simple enzyme-catalyzed reaction where enzyme and substrate have comparable abundances; a Goldbeter-Koshland switch, where a kinase and phosphatase regulate the phosphorylation state of a common substrate; and coupled Goldbeter-Koshland switches that exhibit bistability. Simulations based on the total quasi-steady-state approximation accurately capture the steady-state probability distributions of all components of these reaction networks. In many respects, the approximation also faithfully reproduces time-dependent aspects of the fluctuations. The method is accurate even under conditions of poor timescale separation.

  15. Efficient stochastic thermostatting of path integral molecular dynamics.

    Science.gov (United States)

    Ceriotti, Michele; Parrinello, Michele; Markland, Thomas E; Manolopoulos, David E

    2010-09-28

    The path integral molecular dynamics (PIMD) method provides a convenient way to compute the quantum mechanical structural and thermodynamic properties of condensed phase systems at the expense of introducing an additional set of high frequency normal modes on top of the physical vibrations of the system. Efficiently sampling such a wide range of frequencies provides a considerable thermostatting challenge. Here we introduce a simple stochastic path integral Langevin equation (PILE) thermostat which exploits an analytic knowledge of the free path integral normal mode frequencies. We also apply a recently developed colored noise thermostat based on a generalized Langevin equation (GLE), which automatically achieves a similar, frequency-optimized sampling. The sampling efficiencies of these thermostats are compared with that of the more conventional Nosé-Hoover chain (NHC) thermostat for a number of physically relevant properties of the liquid water and hydrogen-in-palladium systems. In nearly every case, the new PILE thermostat is found to perform just as well as the NHC thermostat while allowing for a computationally more efficient implementation. The GLE thermostat also proves to be very robust delivering a near-optimum sampling efficiency in all of the cases considered. We suspect that these simple stochastic thermostats will therefore find useful application in many future PIMD simulations.

  16. DEA-Risk Efficiency and Stochastic Dominance Efficiency of Stock Indices

    Czech Academy of Sciences Publication Activity Database

    Branda, M.; Kopa, Miloš

    2012-01-01

    Roč. 62, č. 2 (2012), s. 106-124 ISSN 0015-1920 R&D Projects: GA ČR GAP402/10/1610 Grant - others:GA ČR(CZ) GAP402/12/0558 Program:GA Institutional research plan: CEZ:AV0Z10750506 Keywords : Data Envelopment Analysis * Risk measures * Index efficiency * Stochastic dominance Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.340, year: 2012 http://library.utia.cas.cz/separaty/2012/E/branda-dea-risk efficiency and stochastic dominance efficiency of stock indices.pdf

  17. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    Energy Technology Data Exchange (ETDEWEB)

    Thimmisetty, Charanraj A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Zhao, Wenju [Florida State Univ., Tallahassee, FL (United States). Dept. of Scientific Computing; Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Tong, Charles H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; White, Joshua A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Atmospheric, Earth and Energy Division

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.

  18. Using Equation-Free Computation to Accelerate Network-Free Stochastic Simulation of Chemical Kinetics.

    Science.gov (United States)

    Lin, Yen Ting; Chylek, Lily A; Lemons, Nathan W; Hlavacek, William S

    2018-06-21

    The chemical kinetics of many complex systems can be concisely represented by reaction rules, which can be used to generate reaction events via a kinetic Monte Carlo method that has been termed network-free simulation. Here, we demonstrate accelerated network-free simulation through a novel approach to equation-free computation. In this process, variables are introduced that approximately capture system state. Derivatives of these variables are estimated using short bursts of exact stochastic simulation and finite differencing. The variables are then projected forward in time via a numerical integration scheme, after which a new exact stochastic simulation is initialized and the whole process repeats. The projection step increases efficiency by bypassing the firing of numerous individual reaction events. As we show, the projected variables may be defined as populations of building blocks of chemical species. The maximal number of connected molecules included in these building blocks determines the degree of approximation. Equation-free acceleration of network-free simulation is found to be both accurate and efficient.
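The rule-based, network-free machinery of the paper is not reproduced here; the equation-free projection step itself — a short fine-scale burst, a finite-difference slope estimate, then a coarse leap — can be sketched generically (a deterministic toy integrator stands in for the stochastic simulator, and all names are hypothetical):

```python
def projective_step(y, fine_step, n_burst, dt_fine, dt_project):
    """One outer step of equation-free projective integration: run a short
    burst of the fine-scale simulator, estimate dy/dt by finite differences,
    then leap forward without firing individual fine-scale events."""
    for _ in range(n_burst):
        y_prev, y = y, fine_step(y, dt_fine)
    slope = (y - y_prev) / dt_fine      # finite-difference derivative estimate
    return y + dt_project * slope       # projection bypasses many fine events

# Deterministic toy fine-scale model (explicit Euler for dy/dt = -0.5*y)
# standing in for a burst of exact stochastic simulation.
def euler_decay(y, h):
    return y + h * (-0.5 * y)

y, t = 1.0, 0.0
for _ in range(10):
    y = projective_step(y, euler_decay, n_burst=5, dt_fine=0.01, dt_project=0.2)
    t += 5 * 0.01 + 0.2
```

After ten outer steps the coarse trajectory reaches t = 2.5 while remaining close to the exact decay e^(-0.5·t) ≈ 0.29, despite the fine-scale integrator covering only a fifth of that time.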

  19. Simulation of anaerobic digestion processes using stochastic algorithm.

    Science.gov (United States)

    Palanichamy, Jegathambal; Palani, Sundarambal

    2014-01-01

    The Anaerobic Digestion (AD) process involves numerous complex biological and chemical reactions occurring simultaneously, and appropriate, efficient models are needed for simulating anaerobic digestion systems. Although several models have been developed, most suffer from poorly known constants, complexity and weak generalization. The basis of the deterministic approach for modelling the physicochemical and biochemical reactions occurring in the AD system is the law of mass action, which gives a simple relationship between reaction rates and species concentrations. The assumptions made in deterministic models do not hold true for reactions involving chemical species at low concentration. The stochastic behaviour of the physicochemical processes can be modeled at the mesoscopic level by applying stochastic algorithms. In this paper a stochastic algorithm (the Gillespie tau-leap method) implemented in MATLAB was applied to predict the concentrations of glucose, acids and methane at different time intervals; by this means the performance of the digester system can be controlled. The processes given by ADM1 (Anaerobic Digestion Model 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for the conversion of glucose into methane through degraders. At higher values of τ (the time step), the computational time required to reach steady state is greater, since fewer reactions are chosen; when the simulation time step is reduced, the results are similar to those of an ODE solver. It was concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes, with the accuracy of the results depending on an optimal selection of the tau value.

  20. An adaptive algorithm for simulation of stochastic reaction-diffusion processes

    International Nuclear Information System (INIS)

    Ferm, Lars; Hellander, Andreas; Loetstedt, Per

    2010-01-01

    We propose an adaptive hybrid method suitable for stochastic simulation of diffusion dominated reaction-diffusion processes. For such systems, simulation of the diffusion requires the predominant part of the computing time. In order to reduce the computational work, the diffusion in parts of the domain is treated macroscopically, in other parts with the tau-leap method and in the remaining parts with Gillespie's stochastic simulation algorithm (SSA) as implemented in the next subvolume method (NSM). The chemical reactions are handled by SSA everywhere in the computational domain. A trajectory of the process is advanced in time by an operator splitting technique and the timesteps are chosen adaptively. The spatial adaptation is based on estimates of the errors in the tau-leap method and the macroscopic diffusion. The accuracy and efficiency of the method are demonstrated in examples from molecular biology where the domain is discretized by unstructured meshes.

  1. MOSES: A Matlab-based open-source stochastic epidemic simulator.

    Science.gov (United States)

    Varol, Huseyin Atakan

    2016-08-01

    This paper presents an open-source stochastic epidemic simulator. The discrete-time Markov chain based simulator is implemented in Matlab. The simulator, capable of simulating the SEQIJR (susceptible, exposed, quarantined, infected, isolated and recovered) model, can be reduced to simpler models by setting some of the parameters (transition probabilities) to zero, and can likewise be extended to more complicated models by editing the source code. It is designed for testing different control algorithms to contain epidemics. The simulator is also designed to be compatible with a network-based epidemic simulator, in which it can be used to simulate a single node. Simulations show that it reproduces the behaviors of different epidemic models successfully and in a computationally efficient manner.
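MOSES itself is a Matlab SEQIJR simulator; as a hedged illustration of the kind of reduced discrete-time chain it can collapse to, here is a chain-binomial SIR sketch in Python (transition probabilities and all names are hypothetical):

```python
import random

def binomial(rng, n, p):
    """Binomial sample as a sum of Bernoulli trials (fine for modest n)."""
    return sum(rng.random() < p for _ in range(n))

def sir_step(S, I, R, beta, gamma, rng):
    """One discrete-time (chain-binomial) step: each susceptible escapes
    infection with probability (1 - beta) per infective; each infective
    recovers with probability gamma."""
    p_inf = 1.0 - (1.0 - beta) ** I
    new_inf = binomial(rng, S, p_inf)
    new_rec = binomial(rng, I, gamma)
    return S - new_inf, I + new_inf - new_rec, R + new_rec

def simulate(S0, I0, steps, beta, gamma, seed=3):
    rng = random.Random(seed)
    S, I, R = S0, I0, 0
    for _ in range(steps):
        S, I, R = sir_step(S, I, R, beta, gamma, rng)
    return S, I, R

S, I, R = simulate(990, 10, steps=50, beta=0.001, gamma=0.2)
```

Setting further transition probabilities to zero or adding compartments (E, Q, J) extends this skeleton toward the full SEQIJR chain; in every case the population total is conserved step by step.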

  2. Stochastic Boolean networks: An efficient approach to modeling gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Liang Jinghang

    2012-08-01

    Background: Various computational models have been of interest due to their use in the modelling of gene regulatory networks (GRNs). As a logical model, probabilistic Boolean networks (PBNs) consider molecular and genetic noise, so the study of PBNs provides significant insights into the understanding of the dynamics of GRNs. This will ultimately lead to advances in developing therapeutic methods that intervene in the process of disease development and progression. The applications of PBNs, however, are hindered by the complexities involved in the computation of the state transition matrix and the steady-state distribution of a PBN. For a PBN with n genes and N Boolean networks, the complexity of computing the state transition matrix is O(nN·2^(2n)), or O(nN·2^n) for a sparse matrix. Results: This paper presents a novel implementation of PBNs based on the notions of stochastic logic and stochastic computation. This stochastic implementation of a PBN is referred to as a stochastic Boolean network (SBN). An SBN provides an accurate and efficient simulation of a PBN without and with random gene perturbation. The state transition matrix is computed in an SBN with a complexity of O(nL·2^n), where L is a factor related to the stochastic sequence length. Since the minimum sequence length required for a given evaluation accuracy increases only approximately polynomially with the number of genes, n, while the number of Boolean networks, N, usually increases exponentially with n, L is typically smaller than N, especially in a network with a large number of genes. Hence, the computational efficiency of an SBN is primarily limited by the number of genes, rather than directly by the total possible number of Boolean networks. Furthermore, a time-frame expanded SBN enables an efficient analysis of the steady-state distribution of a PBN. These findings are supported by the simulation results of a simplified p53 network, several randomly generated networks and a

  3. The time dependent propensity function for acceleration of spatial stochastic simulation of reaction–diffusion systems

    International Nuclear Information System (INIS)

    Fu, Jin; Wu, Sheng; Li, Hong; Petzold, Linda R.

    2014-01-01

    The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy.

  4. Efficient rejection-based simulation of biochemical reactions with stochastic noise and delays

    Energy Technology Data Exchange (ETDEWEB)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research - University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research - University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento (Italy)

    2014-10-07

    We propose a new exact stochastic rejection-based simulation algorithm for biochemical reactions and extend it to systems with delays. Our algorithm accelerates the simulation by pre-computing reaction propensity bounds to select the next reaction to perform. Exploiting such bounds, we are able to avoid recomputing propensities every time a (delayed) reaction is initiated or finished, as is typically necessary in standard approaches. Propensity updates in our approach are still performed, but only infrequently and limited to a small number of reactions, saving computation time without sacrificing exactness. We evaluate the performance improvement of our algorithm by experimenting with concrete biological models.
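The core rejection step of such algorithms is Poisson thinning against precomputed propensity upper bounds. A minimal sketch of one firing is given below; the full method additionally maintains fluctuation intervals on the state and refreshes the bounds when the state leaves them, which is omitted here, and all names are hypothetical:

```python
import random

def rssa_step(x, reactions, bounds, rng):
    """One firing of a rejection-based SSA step via thinning.
    bounds[j] must upper-bound reaction j's propensity for the current state;
    candidates are drawn from the bound process and accepted with
    probability a_j(x) / bounds[j], which preserves exactness."""
    b0 = sum(bounds)
    t = 0.0
    while True:
        t += rng.expovariate(b0)                  # candidate event time
        r = rng.random() * b0
        acc = 0.0
        for (a_fn, change), b in zip(reactions, bounds):
            acc += b
            if r < acc:
                if rng.random() * b <= a_fn(x):   # accept with prob a_j(x)/b_j
                    return t, change
                break                             # rejected: draw a new candidate

# Two channels with constant (state-independent) propensities 1.0 and 3.0,
# deliberately loose bounds 1.5 and 3.5; the state argument is unused here.
rng = random.Random(0)
reactions = [(lambda s: 1.0, "R1"), (lambda s: 3.0, "R2")]
bounds = [1.5, 3.5]
```

Because rejection restores exactness, the accepted channels are still chosen in proportion to the true propensities (here 1:3) even though the bounds are loose.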

  5. Numerical Simulation of the Heston Model under Stochastic Correlation

    Directory of Open Access Journals (Sweden)

    Long Teng

    2017-12-01

    Stochastic correlation models have become increasingly important in financial markets. In order to price vanilla options in stochastic volatility and correlation models, in this work we study the extension of the Heston model obtained by imposing stochastic correlations driven by a stochastic differential equation. We discuss efficient algorithms for the extended Heston model incorporating stochastic correlations. Our numerical experiments show that the proposed algorithms can efficiently provide highly accurate results for the extended Heston model with stochastic correlations. By investigating the effect of stochastic correlations on the implied volatility, we find that the performance of the Heston model can be improved by including stochastic correlations.
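The paper's specific correlation SDE and pricing algorithms are not reproduced here; as a hedged sketch, an Euler-Maruyama path of a Heston model whose price-variance correlation follows a hypothetical mean-reverting process (clipped to a valid correlation range) looks like this, with all parameter names made up:

```python
import math
import random

def heston_stoch_corr_path(s0, v0, rho0, mu, kappa, theta, xi,
                           k_rho, rho_bar, sigma_rho, T, n, seed=4):
    """Euler-Maruyama sketch of a Heston model with a stochastic correlation.
    The correlation dynamics (mean reversion to rho_bar, volatility sigma_rho)
    are hypothetical, not those of the cited paper."""
    rng = random.Random(seed)
    dt = T / n
    sqdt = math.sqrt(dt)
    s, v, rho = s0, v0, rho0
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        w_s = z1                                                    # price driver
        w_v = rho * z1 + math.sqrt(max(0.0, 1.0 - rho * rho)) * z2  # correlated driver
        sq = math.sqrt(max(v, 0.0))                                 # full truncation
        s += mu * s * dt + sq * s * w_s * sqdt
        v += kappa * (theta - v) * dt + xi * sq * w_v * sqdt
        v = max(v, 0.0)
        rho += k_rho * (rho_bar - rho) * dt + sigma_rho * rng.gauss(0.0, 1.0) * sqdt
        rho = max(-0.999, min(0.999, rho))          # keep a valid correlation
    return s, v, rho

s, v, rho = heston_stoch_corr_path(100.0, 0.04, -0.5, 0.02, 1.5, 0.04,
                                   0.3, 1.0, -0.5, 0.2, T=1.0, n=252)
```

Full truncation keeps the variance nonnegative and the clipping keeps the simulated correlation inside (-1, 1) throughout the path.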

  6. Stochastic analysis for finance with simulations

    CERN Document Server

    Choe, Geon Ho

    2016-01-01

    This book is an introduction to stochastic analysis and quantitative finance; it includes both theoretical and computational methods. Topics covered are stochastic calculus, option pricing, optimal portfolio investment, and interest rate models. Also included are simulations of stochastic phenomena, numerical solutions of the Black–Scholes–Merton equation, Monte Carlo methods, and time series. Basic measure theory is used as a tool to describe probabilistic phenomena. The level of familiarity with computer programming is kept to a minimum. To make the book accessible to a wider audience, some background mathematical facts are included in the first part of the book and also in the appendices. This work attempts to bridge the gap between mathematics and finance by using diagrams, graphs and simulations in addition to rigorous theoretical exposition. Simulations are not only used as the computational method in quantitative finance, but they can also facilitate an intuitive and deeper understanding of theoret...

  7. Dimension reduction of Karhunen-Loeve expansion for simulation of stochastic processes

    Science.gov (United States)

    Liu, Zhangjun; Liu, Zixin; Peng, Yongbo

    2017-11-01

    Conventional Karhunen-Loeve expansions for the simulation of stochastic processes often encounter the challenge of dealing with hundreds of random variables. To break through this barrier, a random-function-embedded Karhunen-Loeve expansion method is proposed in this paper. The updated scheme has a similar form to the conventional Karhunen-Loeve expansion, both involving a summation over a series of deterministic orthonormal basis functions and uncorrelated random variables. The difference lies in the dimension reduction of the Karhunen-Loeve expansion achieved by introducing random functions as a conditional constraint upon the uncorrelated random variables. The random function is expressed as an orthogonal function of a single elementary random variable, in polynomial format (non-Gaussian variables) or trigonometric format (non-Gaussian and Gaussian variables). For illustrative purposes, the simulation of seismic ground motion is carried out using the updated scheme. Numerical investigations reveal that the Karhunen-Loeve expansion with random functions gains desirable simulation results with a moderate sample number, except for the Hermite and Laguerre polynomials. The scheme thus has sound applicability and efficiency in the simulation of stochastic processes. Besides, the updated scheme has the benefit of integrating with the probability density evolution method, making it ready for the stochastic analysis of nonlinear structures.
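The paper's random-function reduction is not reproduced here; as a baseline, the conventional truncated Karhunen-Loeve expansion it improves upon can be sketched for standard Brownian motion, whose eigenpairs are known analytically (the `kl_brownian` helper is hypothetical):

```python
import math
import random

def kl_brownian(n_terms, t_grid, rng):
    """Truncated Karhunen-Loeve expansion of standard Brownian motion on [0,1]:
    W(t) ~ sum_k xi_k * sqrt(lambda_k) * phi_k(t), with analytic eigenpairs
    lambda_k = ((k - 1/2) * pi)^(-2) and phi_k(t) = sqrt(2) * sin((k - 1/2)*pi*t),
    and xi_k independent standard normals."""
    xi = [rng.gauss(0.0, 1.0) for _ in range(n_terms)]
    path = []
    for t in t_grid:
        w = 0.0
        for k in range(1, n_terms + 1):
            freq = (k - 0.5) * math.pi
            w += xi[k - 1] * math.sqrt(2.0) * math.sin(freq * t) / freq
        path.append(w)
    return path

rng = random.Random(7)
path = kl_brownian(200, [0.0, 0.25, 0.5, 0.75, 1.0], rng)
```

This is exactly the regime the proposed method targets: each sample path consumes `n_terms` independent random variables, and the variance at t = 1 converges to the exact value 1 as terms are added.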

  8. Stochastic sensitivity analysis and Langevin simulation for neural network learning

    International Nuclear Information System (INIS)

    Koda, Masato

    1997-01-01

    A comprehensive theoretical framework is proposed for the learning of a class of gradient-type neural networks with an additive Gaussian white noise process. The study is based on stochastic sensitivity analysis techniques, and formal expressions are obtained for stochastic learning laws in terms of functional derivative sensitivity coefficients. The present method, based on Langevin simulation techniques, uses only the internal states of the network and ubiquitous noise to compute the learning information inherent in the stochastic correlation between noise signals and the performance functional. In particular, the method does not require the solution of adjoint equations of the back-propagation type. Thus, the present algorithm has the potential for efficiently learning network weights with significantly fewer computations. Application to an unfolded multi-layered network is described, and the results are compared with those obtained by using a back-propagation method

  9. Measuring of Second-order Stochastic Dominance Portfolio Efficiency

    Czech Academy of Sciences Publication Activity Database

    Kopa, Miloš

    2010-01-01

    Roč. 46, č. 3 (2010), s. 488-500 ISSN 0023-5954 R&D Projects: GA ČR GAP402/10/1610 Institutional research plan: CEZ:AV0Z10750506 Keywords : stochastic dominance * stability * SSD portfolio efficiency Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/E/kopa-measuring of second-order stochastic dominance portfolio efficiency.pdf

  10. Stochastic series expansion simulation of the t-V model

    Science.gov (United States)

    Wang, Lei; Liu, Ye-Hua; Troyer, Matthias

    2016-04-01

    We present an algorithm for the efficient simulation of the half-filled spinless t-V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from the time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t-V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.

  11. Adaptive Finite Element Method Assisted by Stochastic Simulation of Chemical Systems

    KAUST Repository

    Cotter, Simon L.; Vejchodský, Tomáš; Erban, Radek

    2013-01-01

    Stochastic models of chemical systems are often analyzed by solving the corresponding Fokker-Planck equation, which is a drift-diffusion partial differential equation for the probability distribution function. Efficient numerical solution of the Fokker-Planck equation requires adaptive mesh refinements. In this paper, we present a mesh refinement approach which makes use of a stochastic simulation of the underlying chemical system. By observing the stochastic trajectory for a relatively short amount of time, the areas of the state space with nonnegligible probability density are identified. By refining the finite element mesh in these areas, and coarsening elsewhere, a suitable mesh is constructed and used for the computation of the stationary probability density. Numerical examples demonstrate that the presented method is competitive with existing a posteriori methods. © 2013 Society for Industrial and Applied Mathematics.

  12. Testing for Stochastic Dominance Efficiency

    NARCIS (Netherlands)

    G.T. Post (Thierry); O. Linton; Y-J. Whang

    2005-01-01

    textabstractWe propose a new test of the stochastic dominance efficiency of a given portfolio over a class of portfolios. We establish its null and alternative asymptotic properties, and define a method for consistently estimating critical values. We present some numerical evidence that our

  13. The two-regime method for optimizing stochastic reaction-diffusion simulations

    KAUST Repository

    Flegg, M. B.

    2011-10-19

    Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches to less detailed compartment-based simulations. Compartment-based approaches yield quick and accurate mesoscopic results, but lack the level of detail that is characteristic of the computationally intensive molecular-based models. Often microscopic detail is only required in a small region (e.g. close to the cell membrane). Currently, the best way to achieve microscopic detail is to use a resource-intensive simulation over the whole domain. We develop the two-regime method (TRM) in which a molecular-based algorithm is used where desired and a compartment-based approach is used elsewhere. We present easy-to-implement coupling conditions which ensure that the TRM results have the same accuracy as a detailed molecular-based model in the whole simulation domain. Therefore, the TRM combines strengths of previously developed stochastic reaction-diffusion software to efficiently explore the behaviour of biological models. Illustrative examples and the mathematical justification of the TRM are also presented.

  14. StochKit2: software for discrete stochastic simulation of biochemical systems with events.

    Science.gov (United States)

    Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R

    2011-09-01

    StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/. petzold@engineering.ucsb.edu.

  15. Fast stochastic algorithm for simulating evolutionary population dynamics

    Science.gov (United States)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.

  16. Stochastic Simulation of Process Calculi for Biology

    Directory of Open Access Journals (Sweden)

    Andrew Phillips

    2010-10-01

    Full Text Available Biological systems typically involve large numbers of components with complex, highly parallel interactions and intrinsic stochasticity. To model this complexity, numerous programming languages based on process calculi have been developed, many of which are expressive enough to generate unbounded numbers of molecular species and reactions. As a result of this expressiveness, such calculi cannot rely on standard reaction-based simulation methods, which require fixed numbers of species and reactions. Rather than implementing custom stochastic simulation algorithms for each process calculus, we propose to use a generic abstract machine that can be instantiated to a range of process calculi and a range of reaction-based simulation algorithms. The abstract machine functions as a just-in-time compiler, which dynamically updates the set of possible reactions and chooses the next reaction in an iterative cycle. In this short paper we give a brief summary of the generic abstract machine, and show how it can be instantiated with the stochastic simulation algorithm known as Gillespie's Direct Method. We also discuss the wider implications of such an abstract machine, and outline how it can be used to simulate multiple calculi simultaneously within a common framework.
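    As a concrete illustration of the Direct Method referenced above, the sketch below simulates a hypothetical birth-death system (0 -> A at a constant rate, A -> 0 at a rate proportional to the count). The system, rate constants, and time horizon are arbitrary choices for illustration, not part of the abstract machine described in the paper.

```python
import numpy as np

# Gillespie's Direct Method on a birth-death system:
# 0 -> A at rate k_birth, A -> 0 at rate k_death * A.
def direct_method(k_birth=10.0, k_death=0.1, a0=0, t_end=100.0, seed=1):
    rng = np.random.default_rng(seed)
    t, a = 0.0, a0
    times, states = [t], [a]
    while t < t_end:
        props = np.array([k_birth, k_death * a])   # reaction propensities
        total = props.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)          # time to next reaction
        # Choose a reaction with probability proportional to its propensity.
        if rng.uniform() * total < props[0]:
            a += 1
        else:
            a -= 1
        times.append(t)
        states.append(a)
    return np.array(times), np.array(states)

times, states = direct_method()
```

    The iterative cycle in the abstract machine corresponds to the loop body here: update propensities, sample the waiting time, select and fire the next reaction.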

  17. An efficient algorithm for the stochastic simulation of the hybridization of DNA to microarrays

    Directory of Open Access Journals (Sweden)

    Laurenzi Ian J

    2009-12-01

    Full Text Available Abstract Background Although oligonucleotide microarray technology is ubiquitous in genomic research, reproducibility and standardization of expression measurements still concern many researchers. Cross-hybridization between microarray probes and non-target ssDNA has been implicated as a primary factor in sensitivity and selectivity loss. Since hybridization is a chemical process, it may be modeled at a population level using a combination of material balance equations and thermodynamics. However, the hybridization reaction network may be exceptionally large for commercial arrays, which often possess at least one reporter per transcript. Quantification of the kinetics and equilibrium of exceptionally large chemical systems of this type is numerically infeasible with customary approaches. Results In this paper, we present a robust and computationally efficient algorithm for the simulation of hybridization processes underlying microarray assays. Our method may be utilized to identify the extent to which nucleic acid targets (e.g. cDNA) will cross-hybridize with probes, and by extension, characterize probe robustness using the information specified by MAGE-TAB. Using this algorithm, we characterize cross-hybridization in a modified commercial microarray assay. Conclusions By integrating stochastic simulation with thermodynamic prediction tools for DNA hybridization, one may robustly and rapidly characterize the selectivity of a proposed microarray design at the probe and "system" levels. Our code is available at http://www.laurenzi.net.

  18. Stochastic simulations of the tetracycline operon

    Science.gov (United States)

    2011-01-01

    Background The tetracycline operon is a self-regulated system. It is found naturally in bacteria, where it confers resistance to the antibiotic tetracycline. Because of the performance of the molecular elements of the tetracycline operon, these elements are widely used as parts of synthetic gene networks where the protein production can be efficiently turned on and off in response to the presence or the absence of tetracycline. In this paper, we investigate the dynamics of the tetracycline operon. To this end, we develop a mathematical model guided by experimental findings. Our model consists of biochemical reactions that capture the biomolecular interactions of this intriguing system. Bearing in mind that small biological systems are subject to stochasticity, we use a stochastic algorithm to simulate the tetracycline operon behavior. A sensitivity analysis of two critical parameters embodied in this system is also performed, providing a useful understanding of the function of this system. Results Simulations generate a timeline of biomolecular events that confer resistance to bacteria against tetracycline. We monitor the amounts of intracellular TetR2 and TetA proteins, the two important regulatory and resistance molecules, as a function of intracellular tetracycline. We find that lack of one of the promoters of the tetracycline operon has no influence on the total behavior of this system, suggesting that this promoter is not essential in Escherichia coli. Sensitivity analysis with respect to the binding strength of tetracycline to repressor and of repressor to operators suggests that these two parameters play a predominant role in the behavior of the system. The results of the simulations agree well with experimental observations such as tight repression, fast gene expression, induction with tetracycline, and small intracellular TetR2 amounts. Conclusions Computer simulations of the tetracycline operon afford augmented insight into the interplay between its molecular

  19. Stochastic simulations of the tetracycline operon

    Directory of Open Access Journals (Sweden)

    Kaznessis Yiannis N

    2011-01-01

    Full Text Available Abstract Background The tetracycline operon is a self-regulated system. It is found naturally in bacteria, where it confers resistance to the antibiotic tetracycline. Because of the performance of the molecular elements of the tetracycline operon, these elements are widely used as parts of synthetic gene networks where the protein production can be efficiently turned on and off in response to the presence or the absence of tetracycline. In this paper, we investigate the dynamics of the tetracycline operon. To this end, we develop a mathematical model guided by experimental findings. Our model consists of biochemical reactions that capture the biomolecular interactions of this intriguing system. Bearing in mind that small biological systems are subject to stochasticity, we use a stochastic algorithm to simulate the tetracycline operon behavior. A sensitivity analysis of two critical parameters embodied in this system is also performed, providing a useful understanding of the function of this system. Results Simulations generate a timeline of biomolecular events that confer resistance to bacteria against tetracycline. We monitor the amounts of intracellular TetR2 and TetA proteins, the two important regulatory and resistance molecules, as a function of intracellular tetracycline. We find that lack of one of the promoters of the tetracycline operon has no influence on the total behavior of this system, suggesting that this promoter is not essential in Escherichia coli. Sensitivity analysis with respect to the binding strength of tetracycline to repressor and of repressor to operators suggests that these two parameters play a predominant role in the behavior of the system. The results of the simulations agree well with experimental observations such as tight repression, fast gene expression, induction with tetracycline, and small intracellular TetR2 amounts. Conclusions Computer simulations of the tetracycline operon afford augmented insight into the

  20. Stochastic-shielding approximation of Markov chains and its application to efficiently simulate random ion-channel gating.

    Science.gov (United States)

    Schmandt, Nicolaus T; Galán, Roberto F

    2012-09-14

    Markov chains provide realistic models of numerous stochastic processes in nature. We demonstrate that in any Markov chain, the change in occupation number in state A is correlated to the change in occupation number in state B if and only if A and B are directly connected. This implies that if we are only interested in state A, fluctuations in B may be replaced with their mean if state B is not directly connected to A, which shortens computing time considerably. We show the accuracy and efficacy of our approximation theoretically and in simulations of stochastic ion-channel gating in neurons.
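    A minimal sketch of the shielding idea, under loose assumptions rather than the authors' exact formulation: in a chemical-Langevin-style integration of a linear three-state chain 1 <-> 2 <-> 3 with only state 3 observed, each edge carries its own noise term, and shielding drops the noise on the 1 <-> 2 edge because it is not directly connected to state 3. All rates, the population size, and the integration scheme below are illustrative.

```python
import numpy as np

# Euler integration of a Langevin approximation to a 3-state chain,
# with optional stochastic shielding of the 1 <-> 2 edge.
def simulate(shielded, k12=1.0, k21=1.0, k23=0.5, k32=0.5,
             n_total=1000.0, dt=1e-3, n_steps=20000, seed=2):
    rng = np.random.default_rng(seed)
    x = np.array([n_total, 0.0, 0.0])        # occupation numbers, states 1..3
    # Edges as (source, target, rate); the first two form the 1 <-> 2 pair,
    # the edge whose fluctuations are replaced by their mean when shielded.
    edges = [(0, 1, k12), (1, 0, k21), (1, 2, k23), (2, 1, k32)]
    traj = np.empty(n_steps)
    for step in range(n_steps):
        for i, (s, tgt, k) in enumerate(edges):
            flux = k * x[s]
            dx = flux * dt                   # deterministic mean flow
            if not (shielded and i < 2):     # shielding: drop 1<->2 noise
                dx += np.sqrt(max(flux, 0.0) * dt) * rng.standard_normal()
            x[s] -= dx
            x[tgt] += dx
        traj[step] = x[2]                    # record observed state 3
    return traj

full = simulate(shielded=False)    # noise on every edge
approx = simulate(shielded=True)   # stochastic-shielding approximation
```

    The shielded run needs half as many random draws per step here; in large channel models the savings scale with the number of shielded edges.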

  1. DEA-Risk Efficiency and Stochastic Dominance Efficiency of Stock Indices

    OpenAIRE

    Martin Branda; Miloš Kopa

    2012-01-01

    In this article, the authors deal with the efficiency of world stock indices. Basically, they compare three approaches: mean-risk, data envelopment analysis (DEA), and stochastic dominance (SD) efficiency. In the DEA methodology, efficiency is defined as a weighted sum of outputs compared to a weighted sum of inputs when optimal weights are used. In DEA-risk efficiency, several risk measures and functionals which quantify the risk of the indices (var, VaR, CVaR, etc.) are used as DEA inputs. ...

  2. A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro; Tempone, Raul; Vilanova, Pedro

    2016-01-01

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named the level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable to coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
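    The tau-leap step used for high-activity channels can be sketched as follows: over a fixed leap of length tau, each channel fires a Poisson-distributed number of times with mean equal to its propensity times tau. The example system (production 0 -> A and dimerisation 2A -> B), rates, and leap size below are illustrative, not taken from the paper.

```python
import numpy as np

# Plain tau-leap simulation of 0 -> A (rate k_prod) and 2A -> B (rate k_dim).
def tau_leap(k_prod=50.0, k_dim=0.01, tau=0.01, n_steps=2000, seed=3):
    rng = np.random.default_rng(seed)
    a, b = 0, 0
    for _ in range(n_steps):
        # Propensities: constant production, and combinatorial dimerisation.
        props = np.array([k_prod, k_dim * a * (a - 1) / 2.0])
        fires = rng.poisson(props * tau)   # firings per channel in this leap
        a += fires[0] - 2 * fires[1]       # stoichiometry: 0->A, 2A->B
        b += fires[1]
        a = max(a, 0)                      # crude guard against overshoot
    return a, b

a, b = tau_leap()
```

    An exact method would instead resolve every individual firing; the leap trades that resolution for one Poisson draw per channel per step, which is what makes it attractive for high-activity channels.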

  3. A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2016-07-07

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named the level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable to coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.

  4. Stochastic models to simulate paratuberculosis in dairy herds

    DEFF Research Database (Denmark)

    Nielsen, Søren Saxmose; Weber, M.F.; Kudahl, Anne Margrethe Braad

    2011-01-01

    Stochastic simulation models are widely accepted as a means of assessing the impact of changes in daily management and the control of different diseases, such as paratuberculosis, in dairy herds. This paper summarises and discusses the assumptions of four stochastic simulation models and their use...... the models are somewhat different in their underlying principles and do put slightly different values on the different strategies, their overall findings are similar. Therefore, simulation models may be useful in planning paratuberculosis strategies in dairy herds, although as with all models caution...

  5. Encoding efficiency of suprathreshold stochastic resonance on stimulus-specific information

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Fabing, E-mail: fabing.duan@gmail.com [Institute of Complexity Science, Qingdao University, Qingdao 266071 (China); Chapeau-Blondeau, François, E-mail: chapeau@univ-angers.fr [Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), Université d' Angers, 62 avenue Notre Dame du Lac, 49000 Angers (France); Abbott, Derek, E-mail: derek.abbott@adelaide.edu.au [Centre for Biomedical Engineering (CBME) and School of Electrical & Electronic Engineering, The University of Adelaide, Adelaide, SA 5005 (Australia)

    2016-01-08

    In this paper, we evaluate the encoding efficiency of suprathreshold stochastic resonance (SSR) based on a local information-theoretic measure of stimulus-specific information (SSI), which is the average specific information of responses associated with a particular stimulus. The theoretical and numerical analyses of SSIs reveal that noise can improve neuronal coding efficiency for a large population of neurons, leading to increasingly information-rich responses. The SSI measure, in contrast to the global measure of average mutual information, can characterize the noise benefits in finer detail, describing the enhancement of the neuronal encoding efficiency of a particular stimulus, which may be of general utility in the design and implementation of an SSR coding scheme. - Highlights: • Evaluating the noise-enhanced encoding efficiency via stimulus-specific information. • New form of stochastic resonance based on the measure of encoding efficiency. • Detailed analysis of neural encoding schemes based on suprathreshold stochastic resonance.

  6. Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process

    Science.gov (United States)

    Turner, Douglas C.; Ladde, Gangaram S.

    2018-03-01

    Analytical solutions, discretization schemes and simulation results are presented for the time delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended under stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by the stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models influence the change in behavior of the very recently developed stochastic model of Hazra et al.
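    A minimal stand-in for the kind of stochastic Gaussian white-noise parametric fluctuation described above is an Euler-Maruyama integration of a scalar decay equation whose rate parameter is perturbed multiplicatively. The equation, decay rate, and noise intensity are illustrative; this is not the authors' dynamo model.

```python
import numpy as np

# Euler-Maruyama integration of dB = -(lam + sigma * xi(t)) * B dt,
# i.e. exponential decay with a white-noise-perturbed (parametric) rate.
def euler_maruyama(b0=1.0, lam=0.5, sigma=0.2, dt=1e-3, n_steps=5000, seed=4):
    rng = np.random.default_rng(seed)
    b = np.empty(n_steps + 1)
    b[0] = b0
    for i in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal()       # Wiener increment
        b[i + 1] = b[i] - lam * b[i] * dt - sigma * b[i] * dw
    return b

path = euler_maruyama()
```

    Because the noise multiplies the state (it perturbs the parameter lam rather than adding an external forcing), its effect scales with the solution itself, which is the defining feature of parametric as opposed to additive perturbations.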

  7. Stochastic simulation of karst conduit networks

    Science.gov (United States)

    Pardo-Igúzquiza, Eulogio; Dowd, Peter A.; Xu, Chaoshui; Durán-Valsero, Juan José

    2012-01-01

    Karst aquifers have very high spatial heterogeneity. Essentially, they comprise a system of pipes (i.e., the network of conduits) superimposed on rock porosity and on a network of stratigraphic surfaces and fractures. This heterogeneity strongly influences the hydraulic behavior of the karst and it must be reproduced in any realistic numerical model of the karst system that is used as input to flow and transport modeling. However, the directly observed karst conduits are only a small part of the complete karst conduit system and knowledge of the complete conduit geometry and topology remains spatially limited and uncertain. Thus, there is a special interest in the stochastic simulation of networks of conduits that can be combined with fracture and rock porosity models to provide a realistic numerical model of the karst system. Furthermore, the simulated model may be of interest per se and other uses could be envisaged. The purpose of this paper is to present an efficient method for conditional and non-conditional stochastic simulation of karst conduit networks. The method comprises two stages: generation of conduit geometry and generation of topology. The approach adopted is a combination of a resampling method for generating conduit geometries from templates and a modified diffusion-limited aggregation method for generating the network topology. The authors show that the 3D karst conduit networks generated by the proposed method are statistically similar to observed karst conduit networks or to a hypothesized network model. The statistical similarity is in the sense of reproducing the tortuosity index of conduits, the fractal dimension of the network, the direction rose, the Z-histogram and Ripley's K-function of the bifurcation points (which differs from a random allocation of those bifurcation points). The proposed method (1) is very flexible, (2) incorporates any experimental data (conditioning information) and (3) can easily be modified when

  8. Simple stochastic simulation.

    Science.gov (United States)

    Schilstra, Maria J; Martin, Stephen R

    2009-01-01

    Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific for the particular application and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field, applying such techniques in their own work might seem at first sight to be a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend to be not particularly well covered in specialist literature, and shows that interesting information may even be obtained using very simple operations in a conventional spreadsheet.
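    The kind of very simple operation the record has in mind (one that maps directly onto a spreadsheet row per time step) can be sketched as a fixed-time-step simulation of first-order decay A -> B, where each of the n remaining molecules reacts in a small interval dt with probability k*dt. The rate constant, step size, and initial count are illustrative choices, not taken from the article.

```python
import numpy as np

# Fixed-time-step stochastic simulation of first-order decay A -> B:
# in each interval dt, each remaining molecule decays with probability k*dt,
# so the number of decays per step is binomially distributed.
def simple_decay(n0=1000, k=1.0, dt=0.01, n_steps=500, seed=6):
    rng = np.random.default_rng(seed)
    n = n0
    counts = [n]
    for _ in range(n_steps):
        decayed = rng.binomial(n, k * dt)   # decays in this step
        n -= decayed
        counts.append(n)
    return np.array(counts)

counts = simple_decay()
```

    In a spreadsheet the same update is one row per step with a binomial (or per-molecule Bernoulli) draw; the resulting trace fluctuates around the deterministic exponential n0 * exp(-k t), with the deviations growing in relative size as n shrinks.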

  9. Stochastic modeling analysis and simulation

    CERN Document Server

    Nelson, Barry L

    1995-01-01

    A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se

  10. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
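    Schematically, an SPPT-style perturbation multiplies the net parametrised tendency at each grid point by (1 + r), with r a bounded, spatially correlated random pattern. In the sketch below, r is produced by Gaussian-smoothing white noise, a stand-in for the spectral pattern generator used operationally; the amplitude, smoothing scale, and clipping are illustrative assumptions.

```python
import numpy as np

# SPPT-style multiplicative perturbation of a 1-D tendency field.
def sppt_perturb(tendency, sigma_r=0.5, smooth=5, clip=0.9, seed=5):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(tendency.shape)
    # Gaussian smoothing kernel gives the noise spatial correlation.
    kernel = np.exp(-0.5 * (np.arange(-3 * smooth, 3 * smooth + 1) / smooth) ** 2)
    kernel /= kernel.sum()
    r = np.convolve(noise, kernel, mode="same")   # spatially correlated pattern
    r *= sigma_r / r.std()                        # set target amplitude
    r = np.clip(r, -clip, clip)                   # keep 1 + r strictly positive
    return (1.0 + r) * tendency

tend = np.sin(np.linspace(0, 2 * np.pi, 256))     # toy tendency field
perturbed = sppt_perturb(tend)
```

    Because the factor (1 + r) stays positive, the perturbation rescales each tendency without flipping its sign, which is the multiplicative property the high-resolution coarse-graining experiments above set out to test.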

  11. Superior memory efficiency of quantum devices for the simulation of continuous-time stochastic processes

    Science.gov (United States)

    Elliott, Thomas J.; Gu, Mile

    2018-03-01

    Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous-time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.

  12. URDME: a modular framework for stochastic simulation of reaction-transport processes in complex geometries.

    Science.gov (United States)

    Drawert, Brian; Engblom, Stefan; Hellander, Andreas

    2012-06-22

    Experiments in silico using stochastic reaction-diffusion models have emerged as an important tool in molecular systems biology. Designing computational software for such applications poses several challenges. Firstly, realistic lattice-based modeling for biological applications requires a consistent way of handling complex geometries, including curved inner and outer boundaries. Secondly, spatiotemporal stochastic simulations are computationally expensive due to the fast time scales of individual reaction and diffusion events when compared to the biological phenomena of actual interest. We therefore argue that simulation software needs to be both computationally efficient, employing sophisticated algorithms, yet at the same time flexible in order to meet present and future needs of increasingly complex biological modeling. We have developed URDME, a flexible software framework for general stochastic reaction-transport modeling and simulation. URDME uses Unstructured triangular and tetrahedral meshes to resolve general geometries, and relies on the Reaction-Diffusion Master Equation formalism to model the processes under study. An interface to mature external geometry and mesh handling software (Comsol Multiphysics) provides a stable and interactive environment for model construction. The core simulation routines are logically separated from the model building interface and written in a low-level language for computational efficiency. The connection to the geometry handling software is realized via a Matlab interface which facilitates script computing, data management, and post-processing. For practitioners, the software therefore behaves much as an interactive Matlab toolbox. At the same time, it is possible to modify and extend URDME with newly developed simulation routines.
Since the overall design effectively hides the complexity of managing the geometry and meshes, this means that newly developed methods may be tested in a realistic setting already at
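In the RDME formalism that URDME implements, diffusion is treated as a set of first-order jump "reactions" between mesh voxels, so the whole spatial model can be simulated with a standard SSA. A minimal sketch of that idea on a 1D chain of compartments, in plain Python (the voxel count, rates, and species are hypothetical; this is an illustration of the formalism, not the URDME API):

```python
import random
import math

def rdme_ssa(n_voxels=10, n0=100, k_decay=0.1, d_jump=1.0, t_end=5.0, seed=1):
    """SSA for a 1D reaction-diffusion model in the RDME style:
    species A decays (A -> 0, rate k_decay per molecule) and hops to
    neighbouring voxels (rate d_jump per molecule per direction)."""
    rng = random.Random(seed)
    x = [0] * n_voxels
    x[0] = n0                       # all molecules start in the first voxel
    t = 0.0
    while True:
        # build propensities: decay in each voxel, jumps left and right
        a, events = [], []
        for i in range(n_voxels):
            a.append(k_decay * x[i])
            events.append(("decay", i))
            if i > 0:
                a.append(d_jump * x[i])
                events.append(("left", i))
            if i < n_voxels - 1:
                a.append(d_jump * x[i])
                events.append(("right", i))
        a0 = sum(a)
        if a0 == 0.0:
            break                    # all molecules gone
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        if t > t_end:
            break
        # choose an event with probability proportional to its propensity
        r = rng.random() * a0
        acc = 0.0
        for idx, ai in enumerate(a):
            acc += ai
            if acc >= r:
                break
        kind, i = events[idx]
        x[i] -= 1                    # molecule leaves voxel i (decay or jump)
        if kind == "left":
            x[i - 1] += 1
        elif kind == "right":
            x[i + 1] += 1
    return x

state = rdme_ssa()
```

On an unstructured mesh the only change is that each voxel's neighbour list and jump rates come from the mesh geometry rather than a regular chain.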

  13. Improved operating strategies for uranium extraction: a stochastic simulation

    International Nuclear Information System (INIS)

    Broekman, B.R.

    1986-01-01

    Deterministic and stochastic simulations of a Western Transvaal uranium process are used in this research report to determine more profitable uranium plant operating strategies and to gauge the potential financial benefits of automatic process control. The deterministic simulation model was formulated using empirical and phenomenological process models. The model indicated that profitability increases significantly as the uranium leaching strategy becomes harsher. The stochastic simulation models use process variable distributions corresponding to manually and automatically controlled conditions to investigate the economic gains that may be obtained if a change is made from manual to automatic control of two important process variables. These lognormally distributed variables are the pachuca 1 sulphuric acid concentration and the ferric to ferrous ratio. The stochastic simulations show that automatic process control is justifiable in certain cases. Where the leaching strategy is relatively harsh, such as that in operation during January 1986, it is not possible to justify an automatic control system. Automatic control is, however, justifiable if a relatively mild leaching strategy is adopted. The stochastic and deterministic simulations represent two different approaches to uranium process modelling. This study has indicated the necessity for each approach to be applied in the correct context. It is contended that incorrect conclusions may have been drawn by other investigators in South Africa who failed to consider the two approaches separately

  14. Simulation and inference for stochastic processes with YUIMA a comprehensive R framework for SDEs and other stochastic processes

    CERN Document Server

    Iacus, Stefano M

    2018-01-01

The YUIMA package is the first comprehensive R framework based on S4 classes and methods which allows for the simulation of stochastic differential equations driven by a Wiener process, Lévy processes or fractional Brownian motion, as well as CARMA processes. The package performs various central statistical analyses such as quasi maximum likelihood estimation, adaptive Bayes estimation, structural change point analysis, hypothesis testing, asynchronous covariance estimation, lead-lag estimation, LASSO model selection, and so on. YUIMA also supports stochastic numerical analysis by fast computation of the expected value of functionals of stochastic processes through automatic asymptotic expansion by means of the Malliavin calculus. All models can be multidimensional, multiparametric or non-parametric. The book briefly explains the underlying theory for simulation and inference of several classes of stochastic processes and then presents both simulation experiments and applications to real data. Although these ...
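YUIMA itself is an R package; as a language-neutral illustration of the kind of SDE simulation it performs, here is a minimal Euler-Maruyama discretisation of an Ornstein-Uhlenbeck process dX_t = theta*(mu - X_t)*dt + sigma*dW_t in Python (all parameter values are hypothetical):

```python
import random
import math

def euler_maruyama_ou(x0=1.0, theta=2.0, mu=0.0, sigma=0.3,
                      dt=0.001, n_steps=5000, seed=42):
    """Euler-Maruyama discretisation of the Ornstein-Uhlenbeck SDE
    dX_t = theta*(mu - X_t)*dt + sigma*dW_t."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    sqrt_dt = math.sqrt(dt)
    for _ in range(n_steps):
        dw = rng.gauss(0.0, 1.0) * sqrt_dt    # Brownian increment
        x = x + theta * (mu - x) * dt + sigma * dw
        path.append(x)
    return path

path = euler_maruyama_ou()
```

With these (illustrative) parameters the process mean-reverts toward mu = 0 well within the simulated horizon.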

  15. Provably unbounded memory advantage in stochastic simulation using quantum mechanics

    Science.gov (United States)

Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile

    2017-10-01

    Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.

  16. Provably unbounded memory advantage in stochastic simulation using quantum mechanics

    International Nuclear Information System (INIS)

    Garner, Andrew J P; Thompson, Jayne; Vedral, Vlatko; Gu, Mile; Liu, Qing

    2017-01-01

    Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart. (paper)

  17. Efficient Diversification According to Stochastic Dominance Criteria

    NARCIS (Netherlands)

    Kuosmanen, T.K.

    2004-01-01

    This paper develops the first operational tests of portfolio efficiency based on the general stochastic dominance (SD) criteria that account for an infinite set of diversification strategies. The main insight is to preserve the cross-sectional dependence of asset returns when forming portfolios by

  18. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    International Nuclear Information System (INIS)

    Cruz, Roberto de la; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-01-01

of front, which cannot be accounted for by the coarse-grained model. Such fluctuations have non-trivial effects on the wave velocity. Beyond the development of a new hybrid method, we thus conclude that birth-rate fluctuations are central to a quantitatively accurate description of invasive phenomena such as tumour growth. - Highlights: • A hybrid method for stochastic multi-scale models of cell populations that extends existing hybrid methods for reaction–diffusion systems. • Our analysis unveils non-trivial macroscopic effects triggered by noise at the level of structuring variables. • Our hybrid method hugely speeds up age-structured SSA simulations while preserving stochastic effects.

  19. Monte Carlo simulation of fully Markovian stochastic geometries

    International Nuclear Information System (INIS)

    Lepage, Thibaut; Delaby, Lucie; Malvagi, Fausto; Mazzolo, Alain

    2010-01-01

The interest in resolving the equation of transport in stochastic media has continued to increase these last years. For binary stochastic media it is often assumed that the geometry is Markovian, which is never the case in usual environments. In the present paper, based on rigorous mathematical theorems, we construct fully two-dimensional Markovian stochastic geometries and we study their main properties. In particular, we determine a percolation threshold p_c, equal to 0.586 ± 0.0015 for such geometries. Finally, Monte Carlo simulations are performed through these geometries and the results compared to homogeneous geometries. (author)

  20. Stochastic simulation of biological reactions, and its applications for studying actin polymerization.

    Science.gov (United States)

    Ichikawa, Kazuhisa; Suzuki, Takashi; Murata, Noboru

    2010-11-30

Molecular events in biological cells occur in local subregions, where the molecules tend to be small in number. The cytoskeleton, which is important for both the structural changes of cells and their functions, is also a countable entity because of its long fibrous shape. To simulate the local environment using a computer, stochastic simulations should be run. We herein report a new method of stochastic simulation based on random walk and reaction by the collision of all molecules. The microscopic reaction rate P(r) is calculated from the macroscopic rate constant k. The formula involves only local parameters embedded for each molecule. The results of the stochastic simulations of simple second-order, polymerization, Michaelis-Menten-type and other reactions agreed quite well with those of deterministic simulations when the number of molecules was sufficiently large. An analysis of the theory indicated a relationship between variance and the number of molecules in the system, and results of multiple stochastic simulation runs confirmed this relationship. We simulated Ca²⁺ dynamics in a cell by inward flow from a point on the cell surface and the polymerization of G-actin forming F-actin. Our results showed that this theory and method can be used to simulate spatially inhomogeneous events.
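The random-walk-and-collision idea described above can be sketched in a few lines: particles hop on a lattice and react when they meet, with a fixed microscopic probability standing in for the P(r) computed from k (the lattice size, rates, and probability below are hypothetical; the paper's actual formula for P(r) is not reproduced):

```python
import random

def walk_and_react(n_a=50, n_b=50, size=30, p_react=0.2, n_steps=200, seed=7):
    """Toy reaction-by-collision scheme: A and B particles random-walk on a
    1D periodic lattice; when an A and a B occupy the same site they react
    (A + B -> C) with probability p_react.  The mapping from the macroscopic
    rate constant k to the microscopic probability P(r) is not reproduced
    here; p_react is a stand-in."""
    rng = random.Random(seed)
    a_pos = [rng.randrange(size) for _ in range(n_a)]
    b_pos = [rng.randrange(size) for _ in range(n_b)]
    n_c = 0
    for _ in range(n_steps):
        a_pos = [(x + rng.choice((-1, 1))) % size for x in a_pos]
        b_pos = [(x + rng.choice((-1, 1))) % size for x in b_pos]
        # collision check: each A may react with at most one B per step
        for i in range(len(a_pos) - 1, -1, -1):
            for j in range(len(b_pos) - 1, -1, -1):
                if a_pos[i] == b_pos[j] and rng.random() < p_react:
                    del a_pos[i]
                    del b_pos[j]
                    n_c += 1
                    break
    return len(a_pos), len(b_pos), n_c

n_a_left, n_b_left, n_c = walk_and_react()
```

Particle counts are conserved pairwise: every reaction removes one A and one B and creates one C.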

  1. Stochastic simulation of biological reactions, and its applications for studying actin polymerization

    International Nuclear Information System (INIS)

    Ichikawa, Kazuhisa; Suzuki, Takashi; Murata, Noboru

    2010-01-01

Molecular events in biological cells occur in local subregions, where the molecules tend to be small in number. The cytoskeleton, which is important for both the structural changes of cells and their functions, is also a countable entity because of its long fibrous shape. To simulate the local environment using a computer, stochastic simulations should be run. We herein report a new method of stochastic simulation based on random walk and reaction by the collision of all molecules. The microscopic reaction rate P(r) is calculated from the macroscopic rate constant k. The formula involves only local parameters embedded for each molecule. The results of the stochastic simulations of simple second-order, polymerization, Michaelis–Menten-type and other reactions agreed quite well with those of deterministic simulations when the number of molecules was sufficiently large. An analysis of the theory indicated a relationship between variance and the number of molecules in the system, and results of multiple stochastic simulation runs confirmed this relationship. We simulated Ca²⁺ dynamics in a cell by inward flow from a point on the cell surface and the polymerization of G-actin forming F-actin. Our results showed that this theory and method can be used to simulate spatially inhomogeneous events.

  2. Stochastic simulation of off-shore oil terminal systems

    International Nuclear Information System (INIS)

    Frankel, E.G.; Oberle, J.

    1991-01-01

To cope with the problem of uncertainty and conditionality in the planning, design, and operation of offshore oil transshipment terminal systems, a conditional stochastic simulation approach is presented. Examples are shown, using SLAM II, a computer simulation language based on GERT, a conditional stochastic network analysis methodology in which the use of resources such as time and money is expressed by the moment generating function of the statistics of the resource requirements. Similarly, each activity has an associated conditional probability of being performed and/or of requiring some of the resources. The terminal system is realistically represented by modelling the statistics of arrivals, loading and unloading times, uncertainties in costs and availabilities, etc.

  3. Efficiency in the Community College Sector: Stochastic Frontier Analysis

    Science.gov (United States)

    Agasisti, Tommaso; Belfield, Clive

    2017-01-01

    This paper estimates technical efficiency scores across the community college sector in the United States. Using stochastic frontier analysis and data from the Integrated Postsecondary Education Data System for 2003-2010, we estimate efficiency scores for 950 community colleges and perform a series of sensitivity tests to check for robustness. We…

  4. Introduction to Stochastic Simulations for Chemical and Physical Processes: Principles and Applications

    Science.gov (United States)

    Weiss, Charles J.

    2017-01-01

    An introduction to digital stochastic simulations for modeling a variety of physical and chemical processes is presented. Despite the importance of stochastic simulations in chemistry, the prevalence of turn-key software solutions can impose a layer of abstraction between the user and the underlying approach obscuring the methodology being…

  5. Stochastic Simulation of Biomolecular Reaction Networks Using the Biomolecular Network Simulator Software

    National Research Council Canada - National Science Library

    Frazier, John; Chusak, Yaroslav; Foy, Brent

    2008-01-01

    .... The software uses either exact or approximate stochastic simulation algorithms for generating Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks...

  6. Productive efficiency of tea industry: A stochastic frontier approach

    African Journals Online (AJOL)

    USER

    2010-06-21

    Jun 21, 2010 ... Key words: Technical efficiency, stochastic frontier, translog ... present low performance of the tea industry in Bangladesh. ... The Technical inefficiency effect .... administrative, technical, clerical, sales and purchase staff.

  7. Parallel Stochastic discrete event simulation of calcium dynamics in neuron.

    Science.gov (United States)

    Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W

    2017-09-26

The intracellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g. spines) are so small and calcium concentrations so low that one extra molecule diffusing in by chance can make a nontrivial difference in its concentration (percentage-wise). These rare events can affect dynamics discretely in such a way that they cannot be evaluated by a deterministic simulation. Stochastic models of such a system provide a more detailed understanding of these systems than existing deterministic models because they capture their behavior at a molecular level. Our research focuses on the development of a high performance parallel discrete event simulation environment, Neuron Time Warp (NTW), which is intended for use in the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator which is widely used within the neuroscience community. We simulate two models, a calcium buffer and a calcium wave model. The calcium buffer model is employed in order to verify the correctness and performance of NTW by comparing it to a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.

  8. Efficient simulation of the spatial transmission dynamics of influenza.

    Directory of Open Access Journals (Sweden)

    Meng-Tsung Tsai

    2010-11-01

    Full Text Available Early data from the 2009 H1N1 pandemic (H1N1pdm suggest that previous studies over-estimated the within-country rate of spatial spread of pandemic influenza. As large spatially resolved data sets are constructed, the need for efficient simulation code with which to investigate the spatial patterns of the pandemic becomes clear. Here, we present a significant improvement to the efficiency of an individual-based stochastic disease simulation framework commonly used in multiple previous studies. We quantify the efficiency of the revised algorithm and present an alternative parameterization of the model in terms of the basic reproductive number. We apply the model to the population of Taiwan and demonstrate how the location of the initial seed can influence spatial incidence profiles and the overall spread of the epidemic. Differences in incidence are driven by the relative connectivity of alternate seed locations. The ability to perform efficient simulation allows us to run a batch of simulations and take account of their average in real time. The averaged data are stable and can be used to differentiate spreading patterns that are not readily seen by only conducting a few runs.

  9. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

    Science.gov (United States)

    Neftci, Emre O.; Pedroni, Bruno U.; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert

    2016-01-01

    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware. PMID:27445650

  10. Stochastic porous media modeling and high-resolution schemes for numerical simulation of subsurface immiscible fluid flow transport

    Science.gov (United States)

    Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah

    2018-04-01

This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (V_DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). On the other hand, many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order scheme results match well with the Buckley–Leverett (BL) analytical solution without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure, explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative numerical study of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. 
A finite number of lines used in the TBM resulted in visual
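Of the high-resolution schemes named above, the SUPERBEE flux limiter is the simplest to state: phi(r) = max(0, min(2r, 1), min(r, 2)), where r is the ratio of successive solution gradients. A sketch of the limiter and of one limited upwind step for 1D linear advection (the grid, Courant number, and eps guard are illustrative, not taken from the paper):

```python
def superbee(r):
    """SUPERBEE flux limiter: phi(r) = max(0, min(2r, 1), min(r, 2)).
    phi = 0 falls back to first-order upwind near discontinuities;
    values up to 2 steepen the reconstruction in smooth regions."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def advect_step(u, c):
    """One step of 1D linear advection (periodic boundary) with a
    SUPERBEE-limited second-order upwind flux; c is the Courant
    number a*dt/dx, assumed to satisfy 0 < c <= 1."""
    n = len(u)
    eps = 1e-12                      # guard against division by zero
    flux = [0.0] * n                 # flux[i] approximates u at face i+1/2
    for i in range(n):
        um, u0, up = u[i - 1], u[i], u[(i + 1) % n]
        r = (u0 - um) / (up - u0 + eps)       # smoothness ratio
        flux[i] = u0 + 0.5 * superbee(r) * (1.0 - c) * (up - u0)
    return [u[i] - c * (flux[i] - flux[i - 1]) for i in range(n)]

u_new = advect_step([1.0] * 5 + [0.0] * 5, 0.5)
```

The update is conservative by construction: the face fluxes telescope, so the total of u is preserved up to rounding.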

  11. Concordance measures and second order stochastic dominance-portfolio efficiency analysis

    Czech Academy of Sciences Publication Activity Database

    Kopa, Miloš; Tichý, T.

    2012-01-01

Vol. 15, No. 4 (2012), pp. 110-120 ISSN 1212-3609 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords: dependency * concordance * portfolio selection * second order stochastic dominance Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.633, year: 2012 http://library.utia.cas.cz/separaty/2013/E/kopa-concordance measures and second order stochastic dominance- portfolio efficiency analysis.pdf

  12. MONTE CARLO SIMULATION OF MULTIFOCAL STOCHASTIC SCANNING SYSTEM

    Directory of Open Access Journals (Sweden)

    LIXIN LIU

    2014-01-01

Multifocal multiphoton microscopy (MMM) has greatly improved the utilization of excitation light and imaging speed due to parallel multiphoton excitation of the samples and simultaneous detection of the signals, which allows it to perform three-dimensional fast fluorescence imaging. Stochastic scanning can provide continuous, uniform and high-speed excitation of the sample, which makes it a suitable scanning scheme for MMM. In this paper, the graphical programming language LabVIEW is used to achieve stochastic scanning of the two-dimensional galvo scanners by using white noise signals to control the x and y mirrors independently. Moreover, the stochastic scanning process is simulated using the Monte Carlo method. Our results show that MMM can avoid oversampling or subsampling in the scanning area and meet the requirements of uniform sampling by stochastically scanning the individual units of the N × N foci array. Therefore, continuous and uniform scanning in the whole field of view is implemented.

  13. Stochastic simulation modeling to determine time to detect Bovine Viral Diarrhea antibodies in bulk tank milk

    DEFF Research Database (Denmark)

    Foddai, Alessandro; Enøe, Claes; Krogh, Kaspar

    2014-01-01

A stochastic simulation model was developed to estimate the time from introduction of Bovine Viral Diarrhea Virus (BVDV) in a herd to detection of antibodies in bulk tank milk (BTM) samples using three ELISAs. We assumed that antibodies could be detected after a fixed threshold prevalence of seroconverted cows had been reached. The most efficient of the three ELISAs could detect antibodies in the BTM of a large herd 280 days (95% prediction interval: 218; 568) after a transiently infected (TI) milking cow had been introduced into the herd. The estimated time to detection after introduction of one persistently infected (PI) calf was 111 days (44; 605...

  14. Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.

    Science.gov (United States)

    Caglar, Mehmet Umut; Pal, Ranadip

    2013-01-01

    Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on Zassenhaus formula to represent the exponential of a sum of matrices as product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to commonly used Stochastic Simulation Algorithm for equivalent accuracy.

  15. A higher-order numerical framework for stochastic simulation of chemical reaction systems.

    KAUST Repository

    Székely, Tamás

    2012-07-15

    BACKGROUND: In this paper, we present a framework for improving the accuracy of fixed-step methods for Monte Carlo simulation of discrete stochastic chemical kinetics. Stochasticity is ubiquitous in many areas of cell biology, for example in gene regulation, biochemical cascades and cell-cell interaction. However most discrete stochastic simulation techniques are slow. We apply Richardson extrapolation to the moments of three fixed-step methods, the Euler, midpoint and θ-trapezoidal τ-leap methods, to demonstrate the power of stochastic extrapolation. The extrapolation framework can increase the order of convergence of any fixed-step discrete stochastic solver and is very easy to implement; the only condition for its use is knowledge of the appropriate terms of the global error expansion of the solver in terms of its stepsize. In practical terms, a higher-order method with a larger stepsize can achieve the same level of accuracy as a lower-order method with a smaller one, potentially reducing the computational time of the system. RESULTS: By obtaining a global error expansion for a general weak first-order method, we prove that extrapolation can increase the weak order of convergence for the moments of the Euler and the midpoint τ-leap methods, from one to two. This is supported by numerical simulations of several chemical systems of biological importance using the Euler, midpoint and θ-trapezoidal τ-leap methods. In almost all cases, extrapolation results in an improvement of accuracy. As in the case of ordinary and stochastic differential equations, extrapolation can be repeated to obtain even higher-order approximations. CONCLUSIONS: Extrapolation is a general framework for increasing the order of accuracy of any fixed-step stochastic solver. This enables the simulation of complicated systems in less time, allowing for more realistic biochemical problems to be solved.
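The extrapolation idea is concrete: if a fixed-step method has weak error of order one, combining moment estimates at step sizes h and h/2 as 2m(h/2) - m(h) cancels the leading error term. For the mean of the Euler tau-leap scheme applied to a pure-decay system the moment recursion can be written down exactly, so the effect can be checked without Monte Carlo noise (a sketch; the decay model and parameters are chosen for illustration, and this is not the paper's code):

```python
import math

def tau_leap_mean(x0, k, t_end, h):
    """First moment of the Euler tau-leap scheme for pure decay A -> 0
    (propensity k*x).  For a linear propensity the mean obeys
    m_{n+1} = m_n * (1 - k*h) exactly, so the weak (moment) error can
    be evaluated without Monte Carlo sampling."""
    m = x0
    for _ in range(round(t_end / h)):
        m *= (1.0 - k * h)
    return m

def richardson(x0, k, t_end, h):
    """Richardson extrapolation: 2*m(h/2) - m(h) cancels the O(h) term
    in the weak error, raising the order from one to two."""
    return 2.0 * tau_leap_mean(x0, k, t_end, h / 2.0) - tau_leap_mean(x0, k, t_end, h)

exact = math.exp(-1.0)                                   # E[X(1)] for x0 = 1, k = 1
err_plain = abs(tau_leap_mean(1.0, 1.0, 1.0, 0.1) - exact)
err_extra = abs(richardson(1.0, 1.0, 1.0, 0.1) - exact)
```

Here the plain Euler mean at h = 0.1 is off by about 0.02, while the extrapolated value is off by well under 0.001, consistent with the order increase from one to two.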

  16. Coarse-graining stochastic biochemical networks: adiabaticity and fast simulations

    Energy Technology Data Exchange (ETDEWEB)

    Nemenman, Ilya [Los Alamos National Laboratory; Sinitsyn, Nikolai [Los Alamos National Laboratory; Hengartner, Nick [Los Alamos National Laboratory

    2008-01-01

We propose a universal approach for analysis and fast simulations of stiff stochastic biochemical kinetics networks, which rests on elimination of fast chemical species without a loss of information about mesoscopic, non-Poissonian fluctuations of the slow ones. Our approach, which is similar to the Born-Oppenheimer approximation in quantum mechanics, follows from the stochastic path integral representation of the cumulant generating function of reaction events. In applications with a small number of chemical reactions, it produces analytical expressions for cumulants of chemical fluxes between the slow variables. This allows for a low-dimensional, interpretable representation and can be used for coarse-grained numerical simulation schemes with a small computational complexity and yet high accuracy. As an example, we derive the coarse-grained description for a chain of biochemical reactions, and show that the coarse-grained and the microscopic simulations are in agreement, but the coarse-grained simulations are three orders of magnitude faster.

  17. Energy-Efficient FPGA-Based Parallel Quasi-Stochastic Computing

    Directory of Open Access Journals (Sweden)

    Ramu Seva

    2017-11-01

The high performance of FPGAs (Field Programmable Gate Arrays) in image processing applications is justified by their flexible reconfigurability, their inherent parallel nature and the availability of a large amount of internal memory. Lately, the Stochastic Computing (SC) paradigm has been found to be significantly advantageous in certain application domains, including image processing, because of its lower hardware complexity and power consumption. However, its viability is deemed to be limited due to its serial bitstream processing and excessive run-time requirement for convergence. To address these issues, a novel approach is proposed in this work where an energy-efficient implementation of SC is accomplished by introducing fast-converging Quasi-Stochastic Number Generators (QSNGs) and parallel stochastic bitstream processing, which are well suited to leverage the FPGA's reconfigurability and abundant internal memory resources. The proposed approach has been tested on a Virtex-4 FPGA, and results have been compared with the serial and parallel implementations of conventional stochastic computation using the well-known SC edge detection and multiplication circuits. Results prove that by using this approach, execution time as well as power consumption are decreased by factors of 3.5 and 4.5 for the edge detection circuit and multiplication circuit, respectively.
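The multiplication circuit used as a benchmark above relies on the basic stochastic-computing trick: a value p in [0, 1] is encoded as a bitstream whose fraction of 1s is p, and AND-ing two independent streams yields a stream encoding the product. A serial software sketch (stream length and seed are arbitrary; real SC and FPGA implementations use hardware number generators rather than random()):

```python
import random

def to_bitstream(p, n_bits, rng):
    """Encode p in [0, 1] as a stochastic bitstream: each bit is 1
    with probability p, so the mean of the stream is p."""
    return [1 if rng.random() < p else 0 for _ in range(n_bits)]

def sc_multiply(p, q, n_bits=10000, seed=3):
    """Stochastic-computing multiplication: the AND of two independent
    bitstreams encoding p and q is a bitstream encoding p*q."""
    rng = random.Random(seed)
    a = to_bitstream(p, n_bits, rng)
    b = to_bitstream(q, n_bits, rng)
    ones = sum(x & y for x, y in zip(a, b))
    return ones / n_bits             # decode: fraction of 1s

est = sc_multiply(0.5, 0.6)          # close to 0.5 * 0.6 = 0.3
```

The slow O(1/sqrt(n_bits)) convergence of this estimate is exactly the run-time bottleneck the quasi-stochastic generators in the paper are designed to alleviate.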

  18. GillespieSSA: Implementing the Gillespie Stochastic Simulation Algorithm in R

    Directory of Open Access Journals (Sweden)

    Mario Pineda-Krch

    2008-02-01

The deterministic dynamics of populations in continuous time are traditionally described using coupled first-order ordinary differential equations. While this approach is accurate for large systems, it is often inadequate for small systems where key species may be present in small numbers or where key reactions occur at a low rate. The Gillespie stochastic simulation algorithm (SSA) is a procedure for generating time-evolution trajectories of finite populations in continuous time and has become the standard algorithm for these types of stochastic models. This article presents a simple-to-use and flexible framework for implementing the SSA using the high-level statistical computing language R and the package GillespieSSA. Using three ecological models as examples (logistic growth, the Rosenzweig-MacArthur predator-prey model, and the Kermack-McKendrick SIRS metapopulation model), this paper shows how a deterministic model can be formulated as a finite-population stochastic model within the framework of SSA theory and how it can be implemented in R. Simulations of the stochastic models are performed using four different SSA Monte Carlo methods: one exact method (Gillespie's direct method) and three approximate methods (explicit, binomial, and optimized tau-leap methods). Comparison of simulation results confirms that while the time-evolution trajectories obtained from the different SSA methods are indistinguishable, the approximate methods are up to four orders of magnitude faster than the exact method.
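The first example model named in the abstract, logistic growth, reduces under SSA theory to a birth-death process. A minimal version of Gillespie's direct method for it in plain Python (the rates and carrying capacity are hypothetical, and GillespieSSA itself is an R package, so this is only an illustration of the algorithm):

```python
import random
import math

def gillespie_logistic(n0=10, b=2.0, d=1.0, K=100.0, t_end=20.0, seed=11):
    """Gillespie's direct method for stochastic logistic growth:
      birth  N -> N+1  at rate b*N
      death  N -> N-1  at rate d*N + (b - d)*N**2/K
    whose deterministic limit is dN/dt = (b - d)*N*(1 - N/K)."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while n > 0:
        a_birth = b * n
        a_death = d * n + (b - d) * n * n / K
        a_total = a_birth + a_death
        t += -math.log(1.0 - rng.random()) / a_total   # exponential waiting time
        if t > t_end:
            break
        if rng.random() * a_total < a_birth:
            n += 1
        else:
            n -= 1
    return n

n_final = gillespie_logistic()   # fluctuates around the carrying capacity K
```

Unlike the deterministic ODE, individual trajectories fluctuate around K and can in principle go extinct, which is precisely the small-population behavior the SSA is meant to capture.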

  19. An efficient forward-reverse expectation-maximization algorithm for statistical inference in stochastic reaction networks

    KAUST Repository

    Vilanova, Pedro

    2016-01-07

    In this work, we present an extension of the forward-reverse representation introduced in "Simulation of forward-reverse stochastic representations for conditional diffusions" (Bayer and Schoenmakers, 2014) to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, i.e., SRNs conditioned on their values at the endpoints of given time intervals. We then apply this SRN bridge-generation technique to the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method: during phase I, we solve a set of deterministic optimization problems in which the SRNs are replaced by their reaction-rate ordinary differential equation approximations; during phase II, we apply the Monte Carlo version of the Expectation-Maximization algorithm to the phase I output. By selecting a set of over-dispersed seeds as initial points in phase I, the output of parallel runs of our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.

  20. HYDRASTAR - a code for stochastic simulation of groundwater flow

    International Nuclear Information System (INIS)

    Norman, S.

    1992-05-01

    The computer code HYDRASTAR was developed as a tool for groundwater flow and transport simulations in the SKB 91 safety analysis project. Its conceptual ideas can be traced back to a report by Shlomo Neuman in 1988, see the reference section. The main idea of the code is the treatment of the rock as a stochastic continuum, which separates it from the deterministic methods previously employed by SKB and also from the discrete fracture models. The current report is a comprehensive description of HYDRASTAR including such topics as regularization or upscaling of a hydraulic conductivity field, unconditional and conditional simulation of stochastic processes, numerical solvers for the hydrology and streamline equations, and finally some proposals for future developments.

  1. Hybrid framework for the simulation of stochastic chemical kinetics

    International Nuclear Information System (INIS)

    Duncan, Andrew; Erban, Radek; Zygalakis, Konstantinos

    2016-01-01

    Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA) [25]. While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the “fast” reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For low reactant reactions the underlying behaviour is purely discrete, while purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretisations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.

  2. Hybrid framework for the simulation of stochastic chemical kinetics

    Science.gov (United States)

    Duncan, Andrew; Erban, Radek; Zygalakis, Konstantinos

    2016-12-01

    Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA) [25]. While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the "fast" reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For low reactant reactions the underlying behaviour is purely discrete, while purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretisations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.

  3. Hybrid framework for the simulation of stochastic chemical kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, Andrew, E-mail: a.duncan@imperial.ac.uk [Department of Mathematics, Imperial College, South Kensington Campus, London, SW7 2AZ (United Kingdom); Erban, Radek, E-mail: erban@maths.ox.ac.uk [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom); Zygalakis, Konstantinos, E-mail: k.zygalakis@ed.ac.uk [School of Mathematics, University of Edinburgh, Peter Guthrie Tait Road, Edinburgh, EH9 3FD (United Kingdom)

    2016-12-01

    Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA) [25]. While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the “fast” reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For low reactant reactions the underlying behaviour is purely discrete, while purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretisations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.
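
The regime switching described in these three records can be illustrated on a toy birth-death process: below a copy-number threshold the exact SSA is used, above it an Euler-Maruyama step of the chemical Langevin equation. This is a deliberately crude sketch of the idea (all names and parameters are ours; the paper's jump-diffusion model lets both behaviours coexist and comes with a weak-error bound, which this toy splitting does not):

```python
import math
import random

def hybrid_birth_death(n0=10, birth=100.0, death=1.0, t_end=5.0,
                       threshold=30, tau=0.01, seed=2):
    """Toy hybrid scheme for 0 -> X (rate `birth`) and X -> 0 (rate death*X).
    Scarce regime (n < threshold): exact SSA steps.
    Abundant regime: Euler-Maruyama steps of the chemical Langevin equation."""
    rng = random.Random(seed)
    t, n = 0.0, float(n0)
    while t < t_end:
        a1, a2 = birth, death * n
        if n < threshold:                      # discrete, exact regime
            a0 = a1 + a2
            t += -math.log(1.0 - rng.random()) / a0
            n += 1.0 if rng.random() * a0 < a1 else -1.0
        else:                                  # diffusive regime (CLE step)
            dw1 = rng.gauss(0.0, math.sqrt(tau))
            dw2 = rng.gauss(0.0, math.sqrt(tau))
            n += (a1 - a2) * tau + math.sqrt(a1) * dw1 - math.sqrt(a2) * dw2
            n = max(n, 0.0)
            t += tau
    return n

# The stationary mean of this chain is birth/death = 100.
print(hybrid_birth_death())
```

Because the continuous steps ignore the discreteness of the state, statistics that hinge on small copy numbers, such as the extinction times highlighted in the abstract, are precisely where a naive splitting like this fails and the jump-diffusion formalism is needed.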

  4. Simulated Stochastic Approximation Annealing for Global Optimization With a Square-Root Cooling Schedule

    KAUST Repository

    Liang, Faming

    2014-04-03

    Simulated annealing has been widely used in the solution of optimization problems. As is well known, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that the required CPU time is prohibitive in practice. This article proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic cooling schedule, for example, a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors. Supplementary materials for this article are available online.
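
As a point of comparison, plain Metropolis-style simulated annealing with the square-root schedule T_k = T_0/√k looks as follows. Note that this sketch alone carries no global-convergence guarantee; in the paper that guarantee comes from coupling the schedule with stochastic approximation updates. The test function and all parameter choices here are ours:

```python
import math
import random

def anneal(f, x0, t0=1.0, steps=20000, step_size=0.5, seed=3):
    """Plain Metropolis simulated annealing with a square-root cooling
    schedule T_k = t0 / sqrt(k) (illustrative only; not the paper's SAA)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(1, steps + 1):
        temp = t0 / math.sqrt(k)               # square-root cooling
        y = x + rng.gauss(0.0, step_size)      # Gaussian proposal
        fy = f(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / temp):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# A multimodal test function with global minimum 0 at x = 0.
f = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
print(anneal(f, x0=4.0))
```

On this one-dimensional multimodal function, the wide Gaussian proposals let the chain hop between basins while the temperature is still moderate, and the square-root schedule cools far faster than the logarithmic one.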

  5. MCdevelop - a universal framework for Stochastic Simulations

    Science.gov (United States)

    Slawinska, M.; Jadach, S.

    2011-03-01

    We present MCdevelop, a universal computer framework for developing and exploiting the wide class of Stochastic Simulations (SS) software. This powerful universal SS software development tool has been derived from a series of scientific projects for precision calculations in high energy physics (HEP), which feature a wide range of functionality in the SS software needed for advanced precision Quantum Field Theory calculations for the past LEP experiments and for the ongoing LHC experiments at CERN, Geneva. MCdevelop is a "spin-off" product of HEP to be exploited in other areas, while it will still serve to develop new SS software for HEP experiments. Typically SS involve independent generation of large sets of random "events", often requiring considerable CPU power. Since SS jobs usually do not share memory, they are easy to parallelize. Efficient development, testing and parallel running of SS software require a convenient framework for developing source code, deploying and monitoring batch jobs, and merging and analysing results from multiple parallel jobs, even before the production runs have terminated. Throughout the years of development of stochastic simulations for HEP, a sophisticated framework featuring all of the above functionality has been implemented. MCdevelop represents its latest version, written mostly in C++ (GNU compiler gcc). It uses Autotools to build binaries (optionally managed within the KDevelop 3.5.3 Integrated Development Environment (IDE)). It uses the open-source ROOT package for histogramming, graphics and the mechanism of persistency for the C++ objects. MCdevelop helps to run multiple parallel jobs on any computer cluster with an NQS-type batch system. Program summary: Program title: MCdevelop. Catalogue identifier: AEHW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http

  6. Inference from concave stochastic frontiers and the covariance of firm efficiency measures across firms

    International Nuclear Information System (INIS)

    Dashti, Imad

    2003-01-01

    This paper uses a Bayesian stochastic frontier model to obtain confidence intervals on firm efficiency measures of electric utilities rather than the point estimates reported in most previous studies. Results reveal that the stochastic frontier model yields imprecise measures of firm efficiency. However, the application produces much more precise inference on pairwise efficiency comparisons of firms due to a sometimes strong positive covariance of efficiency measures across firms. In addition, we examine the sensitivity to functional form by repeating the analysis for Cobb-Douglas, translog and Fourier frontiers, with and without imposing monotonicity and concavity

  7. Exact and Approximate Stochastic Simulation of Intracellular Calcium Dynamics

    Directory of Open Access Journals (Sweden)

    Nicolas Wieder

    2011-01-01

    pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships in the spectrum ranging from stochastic to deterministic algorithms.

  8. Stochastic simulation of PWR vessel integrity for pressurized thermal shock conditions

    International Nuclear Information System (INIS)

    Jackson, P.S.; Moelling, D.S.

    1984-01-01

    A stochastic simulation methodology is presented for performing probabilistic analyses of Pressurized Water Reactor vessel integrity. Application of the methodology to vessel-specific integrity analyses is described in the context of Pressurized Thermal Shock (PTS) conditions. A Bayesian method is described for developing vessel-specific models of the density of undetected volumetric flaws from ultrasonic inservice inspection results. Uncertainty limits on the probabilistic results due to sampling errors are determined from the results of the stochastic simulation. An example is provided to illustrate the methodology

  9. A stochastic view on column efficiency.

    Science.gov (United States)

    Gritti, Fabrice

    2018-03-09

    A stochastic model of transcolumn eddy dispersion along packed beds was derived. It was based on the calculation of the mean travel time of a single analyte molecule from one radial position to another. The exchange mechanism between two radial positions was governed by the transverse dispersion of the analyte across the column. The radial velocity distribution was obtained by flow simulations in a focused-ion-beam scanning electron microscopy (FIB-SEM) based 3D reconstruction from a 2.1 mm × 50 mm column packed with 2 μm BEH-C18 particles. Accordingly, the packed bed was divided into three coaxial and uniform zones: (1) a 1.4 particle diameter wide, ordered, and loose packing at the column wall (velocity u_w), (2) an intermediate 130 μm wide, random, and dense packing (velocity u_i), and (3) the bulk packing in the center of the column (velocity u_c). First, the validity of this proposed stochastic model was tested by adjusting the predicted to the observed reduced van Deemter plots of a 2.1 mm × 50 mm column packed with 2 μm BEH-C18 fully porous particles (FPPs). An excellent agreement was found for u_i = 0.93 u_c, a result fully consistent with the FIB-SEM observation (u_i = 0.95 u_c). Next, the model was used to measure u_i = 0.94 u_c for a 2.1 mm × 100 mm column packed with 1.6 μm Cortecs-C18 superficially porous particles (SPPs). The relative velocity bias across columns packed with SPPs is then barely smaller than that observed in columns packed with FPPs (+6% versus +7%). u_w = 1.8 u_i is measured for a 75 μm × 1 m capillary column packed with 2 μm BEH-C18 particles. Despite this large wall-to-center velocity bias (+80%), the presence of the thin and ordered wall packing layer has no negative impact on the kinetic performance of capillary columns. Finally, the stochastic model of long-range eddy dispersion explains why analytical (2.1-4.6 mm i.d.) and capillary (columns can all be

  10. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be handled similarly, via appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).

  11. D-leaping: Accelerating stochastic simulation algorithms for reactions with delays

    International Nuclear Information System (INIS)

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2009-01-01

    We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.
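
D-leaping builds on the tau-leaping idea, in which each reaction channel fires a Poisson-distributed number of times per fixed leap instead of one reaction per SSA step. A plain, delay-free tau-leap sketch for a birth-death system follows, with illustrative parameters of our choosing (the delay book-keeping that is the paper's contribution is omitted):

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method for Poisson sampling; adequate for the small means here."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def tau_leap_birth_death(n0=0, k_in=50.0, k_out=1.0, t_end=10.0,
                         tau=0.05, seed=4):
    """Plain tau-leaping for 0 -> A (rate k_in) and A -> 0 (rate k_out*A).
    Each leap fires a Poisson number of events per reaction channel."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while t < t_end:
        births = poisson(rng, k_in * tau)
        deaths = poisson(rng, k_out * n * tau)
        n = max(n + births - deaths, 0)   # clamp to keep the count non-negative
        t += tau
    return n

print(tau_leap_birth_death())  # fluctuates around the stationary mean k_in/k_out = 50
```

In a delayed system, some of the events drawn in a leap would not change the state immediately but would be scheduled to complete later; distributing those completions adaptively across leaps is the extension the record describes.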

  12. A primer on stochastic epidemic models: Formulation, numerical simulation, and analysis

    Directory of Open Access Journals (Sweden)

    Linda J.S. Allen

    2017-05-01

    Full Text Available Some mathematical methods for formulation and numerical simulation of stochastic epidemic models are presented. Specifically, models are formulated for continuous-time Markov chains and stochastic differential equations. Some well-known examples are used for illustration such as an SIR epidemic model and a host-vector malaria model. Analytical methods for approximating the probability of a disease outbreak are also discussed. Keywords: Branching process, Continuous-time Markov chain, Minor outbreak, Stochastic differential equation, 2000 MSC: 60H10, 60J28, 92D30
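
The stochastic differential equation formulation mentioned in the abstract can be simulated with a simple Euler-Maruyama scheme. Below is a sketch for the SIR model with demographic noise terms arising from the diffusion approximation of the continuous-time Markov chain; all parameter values are illustrative:

```python
import math
import random

def sir_sde(s0=990.0, i0=10.0, beta=0.5, gamma=0.25, n=1000.0,
            t_end=60.0, dt=0.01, seed=5):
    """Euler-Maruyama for an Ito SDE version of the SIR model.  The noise
    amplitudes sqrt(propensity) come from the CTMC diffusion approximation."""
    rng = random.Random(seed)
    s, i = s0, i0
    for _ in range(int(t_end / dt)):
        inf = beta * s * i / n          # infection propensity
        rec = gamma * i                 # recovery propensity
        dw1 = rng.gauss(0.0, math.sqrt(dt))
        dw2 = rng.gauss(0.0, math.sqrt(dt))
        s += -inf * dt - math.sqrt(max(inf, 0.0)) * dw1
        i += (inf - rec) * dt + math.sqrt(max(inf, 0.0)) * dw1 \
             - math.sqrt(max(rec, 0.0)) * dw2
        s, i = max(s, 0.0), max(i, 0.0)  # clamp at the absorbing boundary
    return s, i

s_final, i_final = sir_sde()  # here R0 = beta/gamma = 2, so an outbreak is likely
```

Unlike the CTMC, the SDE treats compartment sizes as continuous, so near the boundary i = 0 (the minor-outbreak regime discussed in the abstract) the branching-process approximation is the more faithful tool.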

  13. Simulation of nuclear plant operation into a stochastic energy production model

    International Nuclear Information System (INIS)

    Pacheco, R.L.

    1983-04-01

    A simulation model of nuclear plant operation is developed to fit into a stochastic energy production model. In order to improve the stochastic model and to reduce its computational time, which is burdened by the aggregation of the nuclear plant operation model, a study of tail truncation of the unsupplied-demand distribution function has been performed. (E.G.) [pt

  14. An efficient scenario-based stochastic programming framework for multi-objective optimal micro-grid operation

    International Nuclear Information System (INIS)

    Niknam, Taher; Azizipanah-Abarghooee, Rasoul; Narimani, Mohammad Rasoul

    2012-01-01

    Highlights: ► Proposes a stochastic model for optimal energy management. ► Consider uncertainties related to the forecasted values for load demand. ► Consider uncertainties of forecasted values of output power of wind and photovoltaic units. ► Consider uncertainties of forecasted values of market price. ► Present an improved multi-objective teaching–learning-based optimization. -- Abstract: This paper proposes a stochastic model for optimal energy management with the goal of cost and emission minimization. In this model, the uncertainties related to the forecasted values for load demand, available output power of wind and photovoltaic units and market price are modeled by a scenario-based stochastic programming. In the presented method, scenarios are generated by a roulette wheel mechanism based on probability distribution functions of the input random variables. Through this method, the inherent stochastic nature of the proposed problem is released and the problem is decomposed into a deterministic problem. An improved multi-objective teaching–learning-based optimization is implemented to yield the best expected Pareto optimal front. In the proposed stochastic optimization method, a novel self adaptive probabilistic modification strategy is offered to improve the performance of the presented algorithm. Also, a set of non-dominated solutions are stored in a repository during the simulation process. Meanwhile, the size of the repository is controlled by usage of a fuzzy-based clustering technique. The best expected compromise solution stored in the repository is selected via the niching mechanism in a way that solutions are encouraged to seek the lesser explored regions. The proposed framework is applied in a typical grid-connected micro grid in order to verify its efficiency and feasibility.
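
The roulette-wheel mechanism described above amounts to inverse-transform sampling on the discretized probability distribution of each uncertain input (load, wind and PV output, market price). A minimal sketch, in which the discretization levels and probabilities are invented for illustration:

```python
import random

def roulette_scenarios(levels, probs, n_scenarios=1000, seed=6):
    """Roulette-wheel scenario generation: spin a cumulative-probability
    wheel to draw a discretized level of an uncertain input per scenario."""
    rng = random.Random(seed)
    cum, total = [], 0.0
    for p in probs:
        total += p
        cum.append(total)
    scenarios = []
    for _ in range(n_scenarios):
        u = rng.random() * total        # position of the "ball" on the wheel
        for level, c in zip(levels, cum):
            if u <= c:
                scenarios.append(level)
                break
    return scenarios

# Three hypothetical forecast-error levels with probabilities 0.25 / 0.5 / 0.25.
sc = roulette_scenarios([-0.05, 0.0, 0.05], [0.25, 0.5, 0.25])
```

Drawing one such level per uncertain input and combining them yields the scenarios over which the stochastic problem is decomposed into deterministic subproblems; scenario-reduction techniques are then typically applied to keep the scenario set tractable.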

  15. Research on neutron noise analysis stochastic simulation method for α calculation

    International Nuclear Information System (INIS)

    Zhong Bin; Shen Huayun; She Ruogu; Zhu Shengdong; Xiao Gang

    2014-01-01

    The prompt decay constant α has significant application in the physical design and safety analysis of nuclear facilities. To overcome the difficulty of α calculation with the Monte Carlo method, and to improve the precision, a new method based on neutron noise analysis technology is presented. This method employs stochastic simulation and the theory of neutron noise analysis. Firstly, the evolution of stochastic neutrons was simulated by a discrete-event Monte Carlo method based on the theory of generalized semi-Markov processes, and the neutron noise in detectors was extracted from the neutron signal. Secondly, neutron noise analysis methods such as the Rossi-α method, Feynman-α method, zero-probability method, and cross-correlation method were used to calculate the α value. All of the parameters used in the neutron noise analysis methods were calculated by an auto-adaptive algorithm. The α values from these methods accord with each other; the largest relative deviation is 7.9%, which proves the feasibility of the α calculation method based on neutron noise analysis stochastic simulation. (authors)

  16. A Simulation-Based Dynamic Stochastic Route Choice Model for Evacuation

    Directory of Open Access Journals (Sweden)

    Xing Zhao

    2012-01-01

    Full Text Available This paper establishes a dynamic stochastic route choice model for evacuation, which simulates the propagation of traffic flow and estimates stochastic route choice under evacuation situations. The model contains a lane-group-based cell transmission model (CTM), which sets different traffic capacities for links with different turning movements in an evacuation situation; an actual impedance model, which obtains the impedance of each route in time units at each time interval; and a stochastic route choice model based on probit stochastic user equilibrium. In this model, vehicles loading at each origin at each time interval are assumed to choose an evacuation route under a given road network, signal design, and OD demand. As a case study, the proposed model is validated on the network near the Nanjing Olympic Center after the opening ceremony of the 10th National Games of the People's Republic of China. The traffic volumes and clearing times at five exit points of the evacuation zone are calculated by the model and compared with survey data. The results show that this model can appropriately simulate the dynamic route choice and the evolution of traffic flow on the network in an evacuation situation.

  17. Numerical simulations of piecewise deterministic Markov processes with an application to the stochastic Hodgkin-Huxley model

    Science.gov (United States)

    Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan

    2016-12-01

    The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of the H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
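
The core sampling step described above can be sketched as follows: along the deterministic flow of V(t), integrate the log-survival function H(t) of the dwell time together with V, and return the jump time where the piecewise-linear H crosses an Exp(1) threshold. The toy drift and intensity below are ours, and the fixed Euler step is a crude stand-in for the adaptive interpolation points of the paper's algorithm:

```python
import math
import random

def pdmp_dwell_time(v0, dvdt, intensity, dt=1e-3, seed=7):
    """Sample one inter-jump dwell time of a PDMP by co-integrating the state
    V(t) and H(t) = -log survival, then inverting the piecewise-linear H."""
    rng = random.Random(seed)
    e = -math.log(1.0 - rng.random())   # Exp(1) threshold, H(T) = e
    v, h, t = v0, 0.0, 0.0
    while True:
        lam = intensity(v)
        h_next = h + lam * dt           # Euler step of H' = intensity(V)
        if h_next >= e:                 # invert linearly inside this step
            return t + (e - h) / lam
        v += dvdt(v) * dt               # Euler step of the deterministic flow
        h, t = h_next, t + dt

# Toy channel: V relaxes toward 1 while the opening intensity grows with V.
tau = pdmp_dwell_time(v0=0.0,
                      dvdt=lambda v: 1.0 - v,
                      intensity=lambda v: 5.0 * v + 0.1)
```

In the stochastic Hodgkin-Huxley setting the intensity is the total channel-switching rate, itself voltage-dependent, which is exactly why the dwell-time problem is path-dependent and benefits from the piecewise-linear fit of H(t).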

  18. Simulation of the stochastic wave loads using a physical modeling approach

    DEFF Research Database (Denmark)

    Liu, W.F.; Sichani, Mahdi Teimouri; Nielsen, Søren R.K.

    2013-01-01

    In analyzing stochastic dynamic systems, analysis of the system uncertainty due to randomness in the loads plays a crucial role. Typically time series of the stochastic loads are simulated using traditional random phase method. This approach combined with fast Fourier transform algorithm makes...... reliability or its uncertainty. Moreover applicability of the probability density evolution method on engineering problems faces critical difficulties when the system embeds too many random variables. Hence it is useful to devise a method which can make realization of the stochastic load processes with low...

  19. An Exploration Algorithm for Stochastic Simulators Driven by Energy Gradients

    Directory of Open Access Journals (Sweden)

    Anastasia S. Georgiou

    2017-06-01

    Full Text Available In recent work, we have illustrated the construction of an exploration geometry on free energy surfaces: the adaptive computer-assisted discovery of an approximate low-dimensional manifold on which the effective dynamics of the system evolves. Constructing such an exploration geometry involves geometry-biased sampling (through both appropriately-initialized unbiased molecular dynamics and through restraining potentials) and machine learning techniques to organize the intrinsic geometry of the data resulting from the sampling (in particular, diffusion maps, possibly enhanced through an appropriate Mahalanobis-type metric). In this contribution, we detail a method for exploring the conformational space of a stochastic gradient system whose effective free energy surface depends on a smaller number of degrees of freedom than the dimension of the phase space. Our approach comprises two steps. First, we study the local geometry of the free energy landscape using diffusion maps on samples computed through stochastic dynamics. This allows us to automatically identify the relevant coarse variables. Next, we use the information garnered in the previous step to construct a new set of initial conditions for subsequent trajectories. These initial conditions are computed so as to explore the accessible conformational space more efficiently than by continuing the previous, unbiased simulations. We showcase this method on a representative test system.
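
The diffusion-maps step can be sketched in a few lines of NumPy. This is the basic construction (a Gaussian kernel with a single bandwidth and alpha = 1 density normalization), not the Mahalanobis-enhanced variant mentioned in the abstract, and the bandwidth choice below is ours:

```python
import numpy as np

def diffusion_map(points, eps, n_coords=2):
    """Basic diffusion map: Gaussian kernel, alpha = 1 density normalization,
    row-normalization to a Markov matrix, leading nontrivial eigenvectors."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / eps)
    q = k.sum(1)
    k = k / np.outer(q, q)           # alpha = 1 removes sampling-density effects
    p = k / k.sum(1, keepdims=True)  # Markov transition matrix
    vals, vecs = np.linalg.eig(p)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1).
    idx = order[1:n_coords + 1]
    return vecs.real[:, idx] * vals.real[idx]

# Noisy circle: the leading diffusion coordinates recover the angular variable.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.c_[np.cos(theta), np.sin(theta)] \
      + 0.01 * np.random.default_rng(0).normal(size=(200, 2))
coords = diffusion_map(pts, eps=0.1)
```

In the exploration loop described above, the spread of the sampled points in these diffusion coordinates identifies the coarse variables, and new initial conditions are then placed at the frontier of the already-explored region.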

  20. Powering stochastic reliability models by discrete event simulation

    DEFF Research Database (Denmark)

    Kozine, Igor; Wang, Xiaoyun

    2012-01-01

    it difficult to find a solution to the problem. The power of modern computers and recent developments in discrete-event simulation (DES) software enable to diminish some of the drawbacks of stochastic models. In this paper we describe the insights we have gained based on using both Markov and DES models...

  1. Memristors Empower Spiking Neurons With Stochasticity

    KAUST Repository

    Al-Shedivat, Maruan

    2015-06-01

    Recent theoretical studies have shown that probabilistic spiking can be interpreted as learning and inference in cortical microcircuits. This interpretation creates new opportunities for building neuromorphic systems driven by probabilistic learning algorithms. However, such systems must have two crucial features: 1) the neurons should follow a specific behavioral model, and 2) stochastic spiking should be implemented efficiently for it to be scalable. This paper proposes a memristor-based stochastically spiking neuron that fulfills these requirements. First, the analytical model of the memristor is enhanced so it can capture the behavioral stochasticity consistent with experimentally observed phenomena. The switching behavior of the memristor model is demonstrated to be akin to the firing of the stochastic spike response neuron model, the primary building block for probabilistic algorithms in spiking neural networks. Furthermore, the paper proposes a neural soma circuit that utilizes the intrinsic nondeterminism of memristive switching for efficient spike generation. The simulations and analysis of the behavior of a single stochastic neuron and a winner-take-all network built of such neurons and trained on handwritten digits confirm that the circuit can be used for building probabilistic sampling and pattern adaptation machinery in spiking networks. The findings constitute an important step towards scalable and efficient probabilistic neuromorphic platforms. © 2011 IEEE.

  2. A constrained approach to multiscale stochastic simulation of chemically reacting systems

    KAUST Repository

    Cotter, Simon L.

    2011-01-01

    Stochastic simulation of coupled chemical reactions is often computationally intensive, especially if a chemical system contains reactions occurring on different time scales. In this paper, we introduce a multiscale methodology suitable to address this problem, assuming that the evolution of the slow species in the system is well approximated by a Langevin process. It is based on the conditional stochastic simulation algorithm (CSSA) which samples from the conditional distribution of the suitably defined fast variables, given values for the slow variables. In the constrained multiscale algorithm (CMA) a single realization of the CSSA is then used for each value of the slow variable to approximate the effective drift and diffusion terms, in a similar manner to the constrained mean-force computations in other applications such as molecular dynamics. We then show how using the ensuing Fokker-Planck equation approximation, we can in turn approximate average switching times in stochastic chemical systems. © 2011 American Institute of Physics.

  3. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan

    2015-04-01

    Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.

  4. Software Tools for Stochastic Simulations of Turbulence

    Science.gov (United States)

    2015-08-28

    Subject terms: pure sciences, applied sciences. Keywords: front tracking, large eddy simulations, mesh convergence, stochastic convergence, weak convergence.

  5. Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.

    Science.gov (United States)

    Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O

    2006-03-01

    The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourages experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
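
    The deterministic framing described here reduces to solving reaction rate ODEs. A minimal stdlib sketch (in Python rather than the authors' MATLAB toolbox; the pathway, rates and step size are illustrative assumptions) integrates a single-species production-degradation model with explicit Euler and checks it against the closed-form solution:

```python
import math

def euler_rre(k1, k2, x0, t_end, dt):
    """Explicit Euler integration of the reaction rate equation
    dx/dt = k1 - k2*x (production at rate k1, degradation at rate k2*x)."""
    x = x0
    n = int(round(t_end / dt))
    for _ in range(n):
        x += dt * (k1 - k2 * x)
    return x

k1, k2, t_end = 10.0, 1.0, 5.0
x_num = euler_rre(k1, k2, 0.0, t_end, dt=1e-4)
x_exact = (k1 / k2) * (1.0 - math.exp(-k2 * t_end))  # closed-form solution
```

    The stochastic framing would instead simulate realisations of the CME for the same pathway; the ODE solution approximates the mean of those realisations when molecule counts are large.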

  6. Explicit calibration and simulation of stochastic fields by low-order ARMA processes

    DEFF Research Database (Denmark)

    Krenk, Steen

    2011-01-01

    A simple framework for autoregressive simulation of stochastic fields is presented. The autoregressive format leads to a simple exponential correlation structure in the time dimension. In the case of scalar processes a more detailed correlation structure can be obtained by adding memory to the process via an extension to autoregressive moving average (ARMA) processes. The ARMA format incorporates a more detailed correlation structure by including previous values of the simulated process. Alternatively, a more detailed correlation structure can be obtained by including additional 'state-space' variables in the simulation. For a scalar process this would imply an increase of the dimension of the process to be simulated. In the case of a stochastic field the correlation in the time dimension is represented, although indirectly, in the simultaneous spatial correlation. The model with the shortest...
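
    The simplest instance of the autoregressive format with exponential time correlation is a scalar AR(1) recursion. A hedged, stdlib-only sketch (the function name, time step and seed are illustrative assumptions):

```python
import math
import random

def simulate_ar1(tau, dt, n, sigma=1.0, seed=2):
    """AR(1) recursion x[k+1] = a*x[k] + b*eps with a = exp(-dt/tau),
    which yields the exponential correlation structure exp(-|lag|/tau)."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)
    b = sigma * math.sqrt(1.0 - a * a)  # preserves stationary variance sigma^2
    x = rng.gauss(0.0, sigma)           # start in the stationary distribution
    out = [x]
    for _ in range(n - 1):
        x = a * x + b * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

xs = simulate_ar1(tau=1.0, dt=0.1, n=20000)
```

    The ARMA extension discussed in the abstract would add moving-average terms (past innovations) to the same recursion to enrich the correlation structure beyond a single exponential.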

  7. Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Xuefeng Yan

    2013-01-01

    Full Text Available The simulation and optimization of an actual physical system are usually constructed based on stochastic models, which inherently have both qualitative and quantitative characteristics. Most modeling specifications and frameworks find it difficult to describe the qualitative model directly. In order to deal with expert knowledge, uncertain reasoning and other qualitative information, a combined qualitative and quantitative modeling specification was proposed based on a hierarchical model structure framework. The new modeling approach is based on a hierarchical model structure which includes the meta-meta model, the meta-model and the high-level model. A description logic system is defined for formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The result shows that the proposed method can describe the complex system more comprehensively, and that introducing qualitative models into quantitative simulation yields a higher survival probability for the target.

  8. Stochastic Simulation Using @ Risk for Dairy Business Investment Decisions

    Science.gov (United States)

    A dynamic, stochastic, mechanistic simulation model of a dairy business was developed to evaluate the cost and benefit streams coinciding with technology investments. The model was constructed to embody the biological and economical complexities of a dairy farm system within a partial budgeting fram...

  9. The two-regime method for optimizing stochastic reaction-diffusion simulations

    KAUST Repository

    Flegg, M. B.; Chapman, S. J.; Erban, R.

    2011-01-01

    Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches

  10. Global CO2 efficiency: Country-wise estimates using a stochastic cost frontier

    International Nuclear Information System (INIS)

    Herrala, Risto; Goel, Rajeev K.

    2012-01-01

    This paper examines global carbon dioxide (CO2) efficiency by employing a stochastic cost frontier analysis of about 170 countries in 1997 and 2007. The main contribution lies in providing a new approach to environmental efficiency estimation, in which the efficiency estimates quantify the distance from the policy objective of minimum emissions. We are able to examine a very large pool of nations and provide country-wise efficiency estimates. We estimate three econometric models, corresponding with alternative interpretations of the Cancun vision (Conference of the Parties 2011). The models reveal progress in global environmental efficiency during the preceding decade. The estimates indicate vast differences in efficiency levels and efficiency changes across countries. The highest efficiency levels are observed in Africa and Europe, while the lowest are clustered around China. The largest efficiency gains were observed in central and eastern Europe. CO2 efficiency also improved in the US and China, the two largest emitters, but their ranking in terms of CO2 efficiency deteriorated. Policy implications are discussed. - Highlights: ► We estimate global environmental efficiency in line with the Cancun vision, using a stochastic cost frontier. ► The study covers 170 countries during a 10 year period, ending in 2007. ► The biggest improvements occurred in Europe, and efficiency fell in South America. ► The efficiency ranking of the US and China, the largest emitters, deteriorated. ► In 2007, the highest efficiency was observed in Africa and Europe, and the lowest around China.

  11. Stochastic simulation using @Risk for dairy business investment decisions

    NARCIS (Netherlands)

    Bewley, J.D.; Boehlje, M.D.; Gray, A.W.; Hogeveen, H.; Kenyon, S.J.; Eicher, S.D.; Schutz, M.M.

    2010-01-01

    Purpose – The purpose of this paper is to develop a dynamic, stochastic, mechanistic simulation model of a dairy business to evaluate the cost and benefit streams coinciding with technology investments. The model was constructed to embody the biological and economical complexities of a dairy farm

  12. An application of almost marginal conditional stochastic dominance (AMCSD) on forming efficient portfolios

    Science.gov (United States)

    Slamet, Isnandar; Mardiana Putri Carissa, Siska; Pratiwi, Hasih

    2017-10-01

    Investors always seek an efficient portfolio, that is, a portfolio with maximum return at a given risk or minimum risk at a given return. The almost marginal conditional stochastic dominance (AMCSD) criterion can be used to form such an efficient portfolio. The aim of this research is to apply the AMCSD criterion to form an efficient portfolio of bank shares listed in the LQ-45. The criterion is used when there are areas that do not meet the marginal conditional stochastic dominance (MCSD) criterion; in other words, it is derived from the ratio of the area that violates the MCSD criterion to the total area that does and does not violate it. Based on data for bank stocks listed on the LQ-45, there are 38 efficient portfolios out of 420 portfolios of 4 stocks each, and 315 efficient portfolios out of 1710 portfolios of 3 stocks each.

  13. Stochastic Systems Uncertainty Quantification and Propagation

    CERN Document Server

    Grigoriu, Mircea

    2012-01-01

    Uncertainty is an inherent feature of both the properties of physical systems and the inputs to these systems, and it needs to be quantified for cost-effective and reliable designs. The states of these systems satisfy equations with random entries, referred to as stochastic equations, so that they are random functions of time and/or space. The solution of stochastic equations poses notable technical difficulties that are frequently circumvented by heuristic assumptions at the expense of accuracy and rigor. The main objective of Stochastic Systems is to promote the development of accurate and efficient methods for solving stochastic equations and to foster interactions between engineers, scientists, and mathematicians. To achieve these objectives Stochastic Systems presents: a clear and brief review of essential concepts on probability theory, random functions, stochastic calculus, Monte Carlo simulation, and functional analysis; probabilistic models for random variables an...

  14. Analytical vs. Simulation Solution Techniques for Pulse Problems in Non-linear Stochastic Dynamics

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.

    Advantages and disadvantages of available analytical and simulation techniques for pulse problems in non-linear stochastic dynamics are discussed. First, random pulse problems, both those which do and do not lead to Markov theory, are presented. Next, the analytical and analytically-numerical tec...

  15. Cyto-Sim: a formal language model and stochastic simulator of membrane-enclosed biochemical processes.

    Science.gov (United States)

    Sedwards, Sean; Mazza, Tommaso

    2007-10-15

    Compartments and membranes are the basis of cell topology and more than 30% of the human genome codes for membrane proteins. While it is possible to represent compartments and membrane proteins in a nominal way with many mathematical formalisms used in systems biology, few, if any, explicitly model the topology of the membranes themselves. Discrete stochastic simulation potentially offers the most accurate representation of cell dynamics. Since the details of every molecular interaction in a pathway are often not known, the relationship between chemical species is not necessarily best described at the lowest level, i.e. by mass action. Simulation is a form of computer-aided analysis, relying on human interpretation to derive meaning. To improve efficiency and gain meaning in an automatic way, it is necessary to have a formalism based on a model which has decidable properties. We present Cyto-Sim, a stochastic simulator of membrane-enclosed hierarchies of biochemical processes, where the membranes comprise an inner, outer and integral layer. The underlying model is based on formal language theory and has been shown to have decidable properties (Cavaliere and Sedwards, 2006), allowing formal analysis in addition to simulation. The simulator provides variable levels of abstraction via arbitrary chemical kinetics which link to ordinary differential equations. In addition to its compact native syntax, Cyto-Sim currently supports models described as Petri nets, can import all versions of SBML and can export SBML and MATLAB m-files. Cyto-Sim is available free, either as an applet or a stand-alone Java program via the web page (http://www.cosbi.eu/Rpty_Soft_CytoSim.php). Other versions can be made available upon request.

  16. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    Science.gov (United States)

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  17. Hospital efficiency and transaction costs: a stochastic frontier approach.

    Science.gov (United States)

    Ludwig, Martijn; Groot, Wim; Van Merode, Frits

    2009-07-01

    The make-or-buy decision of organizations is an important issue in the transaction cost theory, but is usually not analyzed from an efficiency perspective. Hospitals frequently have to decide whether to outsource or not. The main question we address is: Is the make-or-buy decision affected by the efficiency of hospitals? A one-stage stochastic cost frontier equation is estimated for Dutch hospitals. The make-or-buy decisions of ten different hospital services are used as explanatory variables to explain efficiency of hospitals. It is found that for most services the make-or-buy decision is not related to efficiency. Kitchen services are an important exception to this. Large hospitals tend to outsource less, which is supported by efficiency reasons. For most hospital services, outsourcing does not significantly affect the efficiency of hospitals. The focus on the make-or-buy decision may therefore be less important than often assumed.

  18. DEA models equivalent to general Nth order stochastic dominance efficiency tests

    Czech Academy of Sciences Publication Activity Database

    Branda, Martin; Kopa, Miloš

    2016-01-01

    Roč. 44, č. 2 (2016), s. 285-289 ISSN 0167-6377 R&D Projects: GA ČR GA13-25911S; GA ČR GA15-00735S Grant - others:GA ČR(CZ) GA15-02938S Institutional support: RVO:67985556 Keywords : Nth order stochastic dominance efficiency * Data envelopment analysis * Convex NSD efficiency * NSD portfolio efficiency Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.657, year: 2016 http://library.utia.cas.cz/separaty/2016/E/branda-0458120.pdf

  19. Stochastic simulations of calcium contents in sugarcane area

    Directory of Open Access Journals (Sweden)

    Gener T. Pereira

    2015-08-01

    Full Text Available The aim of this study was to quantify and map the spatial distribution and uncertainty of soil calcium (Ca content in a sugarcane area by sequential Gaussian and simulated-annealing simulation methods. The study was conducted in the municipality of Guariba, in the northeast of São Paulo state. A sampling grid with 206 points separated by a distance of 50 m was established, totaling approximately 42 ha. The calcium contents were evaluated in the 0-0.20 m layer. Geostatistical estimation (ordinary kriging) and stochastic simulation techniques were used. Ordinary kriging does not satisfactorily reproduce the global statistics of the Ca contents. The use of simulation techniques allows the spatial variability pattern of Ca contents to be reproduced. The sequential Gaussian simulation and simulated annealing techniques showed significant variations in the Ca contents at small scales.
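
    Sequential Gaussian simulation proceeds node by node via conditional kriging; as a simplified, hedged stand-in (unconditional rather than sequential, with an assumed exponential covariance, grid spacing and correlation length), a Gaussian realization along a transect can be drawn by Cholesky factorization of the covariance matrix:

```python
import math
import random

def exp_cov(n, h, corr_len, sill=1.0):
    """Covariance matrix C[i][j] = sill * exp(-|i-j|*h / corr_len)
    for n points spaced h apart on a line."""
    return [[sill * math.exp(-abs(i - j) * h / corr_len) for j in range(n)]
            for i in range(n)]

def cholesky(C):
    """Plain lower-triangular Cholesky factorization, C = L L^T."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(C[i][i] - s)
            else:
                L[i][j] = (C[i][j] - s) / L[j][j]
    return L

def simulate_field(n, h, corr_len, seed=3):
    """One correlated Gaussian realization: x = L z with z ~ N(0, I)."""
    rng = random.Random(seed)
    L = cholesky(exp_cov(n, h, corr_len))
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

field = simulate_field(n=60, h=50.0, corr_len=150.0)
```

    Repeating the draw with different seeds gives the ensemble of realizations from which local uncertainty maps (as in the study) are summarized.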

  20. Efficient simulation of intrinsic, extrinsic and external noise in biochemical systems

    Science.gov (United States)

    Pischel, Dennis; Sundmacher, Kai; Flassig, Robert J.

    2017-01-01

    Abstract Motivation: Biological cells operate in a noisy regime influenced by intrinsic, extrinsic and external noise, which leads to large differences between individual cell states. Stochastic effects must be taken into account to characterize biochemical kinetics accurately. Since the exact solution of the chemical master equation, which governs the underlying stochastic process, cannot be derived for most biochemical systems, approximate methods are used to obtain a solution. Results: In this study, a method to efficiently simulate the various sources of noise simultaneously is proposed and benchmarked on several examples. The method relies on the combination of the sigma point approach to describe extrinsic and external variability and the τ-leaping algorithm to account for the stochasticity due to probabilistic reactions. The comparison of our method to extensive Monte Carlo calculations demonstrates an immense computational advantage at the cost of an acceptable loss in accuracy. Additionally, the application to parameter optimization problems in stochastic biochemical reaction networks is shown, a task that is rarely attempted due to its huge computational burden. To give further insight, a MATLAB script is provided that applies the proposed method to a simple toy example of gene expression. Availability and implementation: MATLAB code is available at Bioinformatics online. Contact: flassig@mpi-magdeburg.mpg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881987
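
    The τ-leaping component of such methods can be sketched in stdlib Python for a production-degradation system (the sigma-point layer for extrinsic variability is omitted; the Knuth-style Poisson sampler, rates and seeds are assumptions of this sketch, not the paper's implementation):

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's method; adequate for the modest rate*tau products used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap(k_birth, k_death, x0, t_end, tau, seed=4):
    """tau-leaping: fire Poisson-distributed reaction counts per fixed step,
    holding propensities constant within each leap."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        births = poisson_sample(rng, k_birth * tau)
        deaths = poisson_sample(rng, k_death * x * tau)
        x = max(x + births - deaths, 0)  # crude guard against negative counts
        t += tau
    return x

# stationary mean of this process is k_birth / k_death = 10
finals = [tau_leap(10.0, 1.0, 0, 20.0, 0.05, seed=s) for s in range(400)]
```

    In the paper's scheme, an outer layer of sigma points over uncertain parameters would wrap runs like these to propagate extrinsic and external variability cheaply.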

  1. Fast stochastic simulation of biochemical reaction systems by alternative formulations of the chemical Langevin equation

    KAUST Repository

    Mélykúti, Bence; Burrage, Kevin; Zygalakis, Konstantinos C.

    2010-01-01

    The Chemical Langevin Equation (CLE), which is a stochastic differential equation driven by a multidimensional Wiener process, acts as a bridge between the discrete stochastic simulation algorithm and the deterministic reaction rate equation when

  2. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist.

    Directory of Open Access Journals (Sweden)

    Brian Drawert

    2016-12-01

    Full Text Available We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.

  3. The ISI distribution of the stochastic Hodgkin-Huxley neuron.

    Science.gov (United States)

    Rowat, Peter F; Greenwood, Priscilla E

    2014-01-01

    The simulation of ion-channel noise has an important role in computational neuroscience. In recent years several approximate methods of carrying out this simulation have been published, based on stochastic differential equations, and all giving slightly different results. The obvious, and essential, question is: which method is the most accurate and which is most computationally efficient? Here we make a contribution to the answer. We compare interspike interval histograms from simulated data using four different approximate stochastic differential equation (SDE) models of the stochastic Hodgkin-Huxley neuron, as well as the exact Markov chain model simulated by the Gillespie algorithm. One of the recent SDE models is the same as the Kurtz approximation first published in 1978. All the models considered give similar ISI histograms over a wide range of deterministic and stochastic input. Three features of these histograms are an initial peak, followed by one or more bumps, and then an exponential tail. We explore how these features depend on deterministic input and on level of channel noise, and explain the results using the stochastic dynamics of the model. We conclude with a rough ranking of the four SDE models with respect to the similarity of their ISI histograms to the histogram of the exact Markov chain model.

  4. Numerical Solution of Stochastic Nonlinear Fractional Differential Equations

    KAUST Repository

    El-Beltagy, Mohamed A.

    2015-01-07

    Using Wiener-Hermite expansion (WHE) technique in the solution of the stochastic partial differential equations (SPDEs) has the advantage of converting the problem to a system of deterministic equations that can be solved efficiently using the standard deterministic numerical methods [1]. WHE is the only known expansion that handles the white/colored noise exactly. This work introduces a numerical estimation of the stochastic response of the Duffing oscillator with fractional or variable order damping and driven by white noise. The WHE technique is integrated with the Grunwald-Letnikov approximation in case of fractional order and with Coimbra approximation in case of variable-order damping. The numerical solver was tested with the analytic solution and with Monte-Carlo simulations. The developed mixed technique was shown to be efficient in simulating SPDEs.
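
    As a hedged Monte-Carlo baseline of the kind the WHE solver is validated against (integer-order damping only; the fractional/variable-order and WHE machinery is not reproduced, and the coefficients and seed are illustrative assumptions), the white-noise-driven Duffing oscillator can be integrated with Euler-Maruyama:

```python
import math
import random

def duffing_em(c, k_lin, eps, sigma, dt, n, seed=5):
    """Euler-Maruyama for x'' + c x' + k_lin x + eps x^3 = sigma dW/dt,
    written as a first-order system in (x, v)."""
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    xs = [x]
    sq = math.sqrt(dt)
    for _ in range(n):
        dw = sq * rng.gauss(0.0, 1.0)   # Brownian increment
        x_new = x + v * dt
        v_new = v + (-c * v - k_lin * x - eps * x ** 3) * dt + sigma * dw
        x, v = x_new, v_new
        xs.append(x)
    return xs

xs = duffing_em(c=0.5, k_lin=1.0, eps=0.5, sigma=0.5, dt=1e-3, n=200000)
```

    Ensemble averages over many such trajectories give the Monte Carlo moments that a WHE solution is compared against; for the linear case (eps = 0) the stationary mean square is sigma^2 / (2 c k_lin), which the cubic stiffening reduces slightly.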

  5. Numerical Solution of Stochastic Nonlinear Fractional Differential Equations

    KAUST Repository

    El-Beltagy, Mohamed A.; Al-Juhani, Amnah

    2015-01-01

    Using Wiener-Hermite expansion (WHE) technique in the solution of the stochastic partial differential equations (SPDEs) has the advantage of converting the problem to a system of deterministic equations that can be solved efficiently using the standard deterministic numerical methods [1]. WHE is the only known expansion that handles the white/colored noise exactly. This work introduces a numerical estimation of the stochastic response of the Duffing oscillator with fractional or variable order damping and driven by white noise. The WHE technique is integrated with the Grunwald-Letnikov approximation in case of fractional order and with Coimbra approximation in case of variable-order damping. The numerical solver was tested with the analytic solution and with Monte-Carlo simulations. The developed mixed technique was shown to be efficient in simulating SPDEs.

  6. Simulation of quantum dynamics based on the quantum stochastic differential equation.

    Science.gov (United States)

    Li, Ming

    2013-01-01

    The quantum stochastic differential equation derived from the Lindblad-form quantum master equation is investigated. The general formulation in terms of environment operators representing quantum state diffusion is given. A numerical simulation algorithm for the stochastic process of direct photodetection of a driven two-level system is proposed for predicting its dynamical behavior. The effectiveness and superiority of the algorithm are verified by analysing its accuracy and computational cost in comparison with the classical Runge-Kutta algorithm.

  7. Simulation of Stochastic Processes by Coupled ODE-PDE

    Science.gov (United States)

    Zak, Michail

    2008-01-01

    A document discusses the emergence of randomness in solutions of coupled, fully deterministic ODE-PDE systems (ordinary differential equations coupled with partial differential equations) due to failure of the Lipschitz condition, a new phenomenon. It is possible to exploit the special properties of ordinary differential equations (represented by an arbitrarily chosen dynamical system) coupled with the corresponding Liouville equations (used to describe the evolution of initial uncertainties in terms of joint probability distributions) in order to simulate stochastic processes with prescribed probability distributions. The important advantage of the proposed approach is that the simulation does not require a random-number generator.

  8. Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2015-01-01

    even more, we want to achieve this objective with near optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a Multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL^-2), which is the same computational complexity as an exact method but with a smaller constant. We provide numerical examples to show our results.
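
    The multilevel idea can be sketched in stdlib Python (hedged: with a scalar SDE rather than a reaction network, and only two levels) via the telescoping identity E[P_fine] = E[P_coarse] + E[P_fine − P_coarse], where the correction term uses coupled coarse/fine paths that share the same Brownian increments and therefore has small variance:

```python
import math
import random

def coarse_only(h, n_steps, rng):
    """Euler path for dX = -X dt + dW, X0 = 1; returns X_T."""
    x = 1.0
    for _ in range(n_steps):
        x += -x * h + math.sqrt(h) * rng.gauss(0.0, 1.0)
    return x

def coupled_fine_coarse(h, n_fine, rng):
    """Fine path with step h and coarse path with step 2h, driven by the
    same Brownian increments (the coarse step sums two fine increments)."""
    xf = xc = 1.0
    for _ in range(n_fine // 2):
        dw1 = math.sqrt(h) * rng.gauss(0.0, 1.0)
        dw2 = math.sqrt(h) * rng.gauss(0.0, 1.0)
        xf += -xf * h + dw1
        xf += -xf * h + dw2
        xc += -xc * (2.0 * h) + (dw1 + dw2)
    return xf, xc

def mlmc_two_level(T=1.0, n_fine=64, n0=20000, n1=2000, seed=8):
    rng = random.Random(seed)
    h = T / n_fine
    # many cheap coarse samples ...
    level0 = sum(coarse_only(2.0 * h, n_fine // 2, rng) for _ in range(n0)) / n0
    # ... plus a few coupled correction samples (low variance)
    corr = sum(xf - xc for xf, xc in
               (coupled_fine_coarse(h, n_fine, rng) for _ in range(n1))) / n1
    return level0 + corr

est = mlmc_two_level()
# exact mean of this OU process at T = 1 is exp(-1)
```

    The hybrid SSA/tau-leap scheme of the record plays the role of the coupled path simulator here, with more levels and adaptively chosen sample counts per level.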

  9. Handbook of simulation optimization

    CERN Document Server

    Fu, Michael C

    2014-01-01

    The Handbook of Simulation Optimization presents an overview of the state of the art of simulation optimization, providing a survey of the most well-established approaches for optimizing stochastic simulation models and a sampling of recent research advances in theory and methodology. Leading contributors cover such topics as discrete optimization via simulation, ranking and selection, efficient simulation budget allocation, random search methods, response surface methodology, stochastic gradient estimation, stochastic approximation, sample average approximation, stochastic constraints, variance reduction techniques, model-based stochastic search methods and Markov decision processes. This single volume should serve as a reference for those already in the field and as a means for those new to the field for understanding and applying the main approaches. The intended audience includes researchers, practitioners and graduate students in the business/engineering fields of operations research, management science,...

  10. Operational Efficiency Forecasting Model of an Existing Underground Mine Using Grey System Theory and Stochastic Diffusion Processes

    Directory of Open Access Journals (Sweden)

    Svetlana Strbac Savic

    2015-01-01

    Full Text Available Forecasting the operational efficiency of an existing underground mine plays an important role in strategic planning of production. Degree of Operating Leverage (DOL is used to express the operational efficiency of production. The forecasting model should operate over a common time horizon and capture the characteristics of the input variables that directly affect the value of DOL. Changes in the magnitude of any input variable change the value of DOL. To establish the relationship describing how DOL changes, we applied multivariable grey modeling. The established time-sequence multivariable response formula is then used to forecast the future values of operating leverage. Operational efficiency of production is often associated with diverse sources of uncertainty. Incorporating these uncertainties into the multivariable forecasting model enables a mining company to survive in today's competitive environment. Simulation of a mean-reversion process and geometric Brownian motion is used to describe the stochastic diffusion nature of the metal price, as a key element of revenues, and of production costs, respectively. By simulating a forecasting model, we imitate its action in order to measure its response to different inputs. The final result of the simulation process is the expected value of DOL for every year of the defined time horizon.
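
    The two diffusions named in the record can be simulated in a few lines of stdlib Python (a hedged sketch: the drift, volatility, reversion speed and horizons below are illustrative assumptions, not the study's calibrated values):

```python
import math
import random

def gbm_final(s0, mu, sigma, T, n_steps, rng):
    """Geometric Brownian motion via the exact-in-distribution log update."""
    dt = T / n_steps
    s = s0
    for _ in range(n_steps):
        s *= math.exp((mu - 0.5 * sigma ** 2) * dt
                      + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
    return s

def mean_reverting_final(x0, theta, level, sigma, T, n_steps, rng):
    """Euler scheme for the mean-reversion process
    dx = theta * (level - x) dt + sigma dW."""
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        x += theta * (level - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

rng = random.Random(7)
prices = [gbm_final(100.0, 0.05, 0.2, 1.0, 252, rng) for _ in range(2000)]
costs = [mean_reverting_final(100.0, 2.0, 110.0, 5.0, 1.0, 252, rng)
         for _ in range(2000)]
```

    Feeding ensembles like these through the DOL formula, year by year, yields the expected operating leverage per year that the study reports.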

  11. An improved stochastic algorithm for temperature-dependent homogeneous gas phase reactions

    CERN Document Server

    Kraft, M

    2003-01-01

    We propose an improved stochastic algorithm for temperature-dependent homogeneous gas phase reactions. By combining forward and reverse reaction rates, a significant gain in computational efficiency is achieved. Two modifications of modelling the temperature dependence (with and without conservation of enthalpy) are introduced and studied quantitatively. The algorithm is tested for the combustion of n-heptane, which is a reference fuel component for internal combustion engines. The convergence of the algorithm is studied by a series of numerical experiments and the computational cost of the stochastic algorithm is compared with the DAE code DASSL. If less accuracy is needed the stochastic algorithm is faster on short simulation time intervals. The new stochastic algorithm is significantly faster than the original direct simulation algorithm in all cases considered.

  12. A constrained approach to multiscale stochastic simulation of chemically reacting systems

    KAUST Repository

    Cotter, Simon L.; Zygalakis, Konstantinos C.; Kevrekidis, Ioannis G.; Erban, Radek

    2011-01-01

    Stochastic simulation of coupled chemical reactions is often computationally intensive, especially if a chemical system contains reactions occurring on different time scales. In this paper, we introduce a multiscale methodology suitable to address

  13. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    Science.gov (United States)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of high-dimensional random variables inherent in the conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of horizontal wind velocity field along the deck of a large-span bridge is proceeded using the proposed methods containing 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
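
    The classical spectral representation method underlying this record can be sketched for a scalar stationary process (hedged: a flat band-limited target spectrum and the frequency count are illustrative assumptions; the paper's dimension-reduction and FFT acceleration are not reproduced):

```python
import math
import random

def srm_realization(spectrum, w_max, n_freq, times, seed):
    """Spectral representation of a zero-mean stationary process with
    one-sided spectrum S(w):
    X(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + phi_k), phi_k ~ U(0, 2pi)."""
    rng = random.Random(seed)
    dw = w_max / n_freq
    freqs = [(k + 0.5) * dw for k in range(n_freq)]
    amps = [math.sqrt(2.0 * spectrum(w) * dw) for w in freqs]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_freq)]
    return [sum(a * math.cos(w * t + p) for a, w, p in zip(amps, freqs, phases))
            for t in times]

# flat (band-limited) target spectrum; total variance = integral of S = 1
flat = lambda w: 1.0
samples_at_t0 = [srm_realization(flat, 1.0, 64, [0.0], seed)[0]
                 for seed in range(400)]
```

    The dimension-reduction schemes of the record replace the many independent phase angles here with a few elementary random variables through random-function constraints, and FFT evaluates the cosine sum efficiently over a time grid.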

  14. Comparison of stochastic models in Monte Carlo simulation of coated particle fuels

    International Nuclear Information System (INIS)

    Yu Hui; Nam Zin Cho

    2013-01-01

    There is growing interest worldwide in very high temperature gas cooled reactors as candidates for next generation reactor systems. For design and analysis of such reactors with double heterogeneity introduced by the coated particle fuels that are randomly distributed in graphite pebbles, stochastic transport models are becoming essential. Several models were reported in the literature, such as coarse lattice models, fine lattice stochastic (FLS) models, random sequential addition (RSA) models, and Metropolis models. The principles and performance of these stochastic models are described and compared in this paper. Compared with the usual fixed lattice methods, sub-FLS modeling allows a more realistic stochastic distribution of fuel particles and thus results in more accurate criticality calculations. Compared with the basic RSA method, sub-FLS modeling requires a simpler and more efficient overlap-checking procedure. (authors)
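
    The random sequential addition (RSA) model mentioned above can be sketched in a few lines: trial centres are drawn uniformly and rejected whenever they overlap an already-placed particle. This is a generic illustration, not the sub-FLS implementation compared in the paper; the box size, radius, and particle count are made up:

```python
import numpy as np

def rsa_pack(n_particles, radius, box=1.0, max_tries=100_000, rng=None):
    """Random sequential addition: insert non-overlapping spheres one
    at a time at uniformly random positions inside a cubic box."""
    rng = np.random.default_rng() if rng is None else rng
    centres = []
    tries = 0
    while len(centres) < n_particles and tries < max_tries:
        tries += 1
        c = rng.uniform(radius, box - radius, size=3)   # sphere fully inside box
        if all(np.linalg.norm(c - p) >= 2.0 * radius for p in centres):
            centres.append(c)                           # accepted: no overlap
    return np.asarray(centres)

pts = rsa_pack(50, radius=0.03, rng=np.random.default_rng(0))
```

    At low packing fractions nearly every trial is accepted, but the overlap check above is O(n) per trial; this is exactly the bookkeeping that a lattice-based acceleration such as sub-FLS simplifies.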

  15. Stochastic simulation of ecohydrological interactions between vegetation and groundwater

    Science.gov (United States)

    Dwelle, M. C.; Ivanov, V. Y.; Sargsyan, K.

    2017-12-01

    The complex interactions between groundwater and vegetation in the Amazon rainforest may yield vital ecophysiological interactions in specific landscape niches, such as buffering plant water stress during the dry season or suppressing water uptake under anoxic conditions. Representation of such processes is greatly impacted by both external and internal sources of uncertainty: inaccurate data and the subjective choice of model representation. The models that can simulate these processes are complex and computationally expensive, which makes it difficult to address uncertainty using traditional methods. We use the ecohydrologic model tRIBS+VEGGIE and a novel uncertainty quantification framework applied to the ZF2 watershed near Manaus, Brazil. We showcase the capability of this framework for stochastic simulation of vegetation-hydrology dynamics. This framework is useful for simulation with internal and external stochasticity, but this work focuses on internal variability of the groundwater depth distribution and model parameterizations. We demonstrate the capability of this framework to make inferences on uncertain states of groundwater depth from limited in situ data, and how the realizations of these inferences affect the ecohydrological interactions between groundwater dynamics and vegetation function. We place an emphasis on the probabilistic representation of quantities of interest and how this impacts the understanding and interpretation of the dynamics at the groundwater-vegetation interface.

  16. Analysing initial attack on wildland fires using stochastic simulation.

    Science.gov (United States)

    Jeremy S. Fried; J. Keith Gilless; James. Spero

    2006-01-01

    Stochastic simulation models of initial attack on wildland fire can be designed to reflect the complexity of the environmental, administrative, and institutional context in which wildland fire protection agencies operate, but such complexity may come at the cost of a considerable investment in data acquisition and management. This cost may be well justified when it...

  17. Stochastic calculus in physics

    International Nuclear Information System (INIS)

    Fox, R.F.

    1987-01-01

    The relationship of Ito-Stratonovich stochastic calculus to studies of weakly colored noise is explained. A functional calculus approach is used to obtain an effective Fokker-Planck equation for the weakly colored noise regime. In a smooth limit, this representation produces the Stratonovich version of the Ito-Stratonovich calculus for white noise. It also provides an approach to steady-state behavior for strongly colored noise. Numerical simulation algorithms are explored, and a novel suggestion is made for efficient and accurate simulation of white noise equations.

  18. Stochastic modeling and simulation of reaction-diffusion system with Hill function dynamics.

    Science.gov (United States)

    Chen, Minghan; Li, Fei; Wang, Shuo; Cao, Young

    2017-03-14

    Stochastic simulation of reaction-diffusion systems presents great challenges for spatiotemporal biological modeling and simulation. One widely used framework for stochastic simulation of reaction-diffusion systems is the reaction-diffusion master equation (RDME). Previous studies have discovered that for the RDME, as the discretization size approaches zero, the reaction time for bimolecular reactions in high-dimensional domains tends to infinity. In this paper, we demonstrate that in a 1D domain, highly nonlinear reaction dynamics given by a Hill function may also change dramatically when the discretization size is smaller than a critical value. Moreover, we discuss methods to avoid this problem: smoothing over space, fixed-length smoothing over space, and a hybrid method. Our analysis reveals that the switch-like Hill dynamics reduce to a linear function of the discretization size when the discretization size is small enough. The three proposed methods can correctly (to within a certain precision) simulate Hill function dynamics in the microscopic RDME system.

  19. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
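
    A toy illustration of trajectory (Polyak-Ruppert) averaging on a plain Robbins-Monro recursion; the step-size schedule, noise level, and target root are illustrative assumptions, not the SAMCMC setting of the paper:

```python
import numpy as np

def robbins_monro_averaged(g_noisy, theta0, n_iter, a=1.0, alpha=0.6):
    """Robbins-Monro iteration theta <- theta - a_k * g(theta) with a
    slowly decaying gain a_k = a / k**alpha, returning both the last
    iterate and the running average of the trajectory."""
    theta = float(theta0)
    avg = 0.0
    for k in range(1, n_iter + 1):
        theta -= (a / k ** alpha) * g_noisy(theta)
        avg += (theta - avg) / k      # running mean of the iterates
    return theta, avg

# solve E[g(theta)] = 0 for g(theta) = theta - 2 + Gaussian noise
rng = np.random.default_rng(0)
g = lambda th: (th - 2.0) + rng.normal(scale=0.5)
last, averaged = robbins_monro_averaged(g, theta0=0.0, n_iter=20_000)
```

    The averaged estimate converges to the root at the optimal rate even though the gain decays more slowly than 1/k, which is the kind of asymptotic efficiency result the paper establishes for SAMCMC.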

  20. Market Efficiency of Oil Spot and Futures: A Stochastic Dominance Approach

    NARCIS (Netherlands)

    H.H. Lean (Hooi Hooi); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2010-01-01

    textabstractThis paper examines the market efficiency of oil spot and futures prices by using a stochastic dominance (SD) approach. As there is no evidence of an SD relationship between oil spot and futures, we conclude that there is no arbitrage opportunity between these two markets, and that both

  1. Stochastic search in structural optimization - Genetic algorithms and simulated annealing

    Science.gov (United States)

    Hajela, Prabhat

    1993-01-01

    An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic search and simulated annealing can be used effectively in problems with a mix of continuous, discrete, and integer design variables.
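
    A minimal simulated annealing sketch consistent with the description above; the cost function, neighbourhood move, and cooling schedule are illustrative choices, not taken from the paper:

```python
import math, random

def simulated_annealing(cost, neighbour, x0, t0=20.0, cooling=0.999,
                        n_iter=8000, rng=None):
    """Accept worse moves with probability exp(-delta/T), letting the
    search escape local optima of a multimodal cost; T is cooled
    geometrically so the walk settles into a deep minimum."""
    rng = rng or random.Random(0)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        y = neighbour(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy                 # move accepted
            if fx < fbest:
                best, fbest = x, fx       # track the best point seen
        t *= cooling
    return best, fbest

# multimodal 1-D cost with global minimum 0 at x = 0
cost = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
best, fbest = simulated_annealing(cost, step, x0=4.0)
```

    Because worse moves are accepted while the temperature is high, the search is not trapped in the basin of its starting point, which is the advantage over gradient-style mathematical programming noted in the abstract.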

  2. Stochastic four-way coupling of gas-solid flows for Large Eddy Simulations

    Science.gov (United States)

    Curran, Thomas; Denner, Fabian; van Wachem, Berend

    2017-11-01

    The interaction of solid particles with turbulence has long been a topic of interest for predicting the behavior of industrially relevant flows. For the turbulent fluid phase, Large Eddy Simulation (LES) methods are widely used for their low computational cost, leaving only the sub-grid scales (SGS) of turbulence to be modelled. Although LES has seen great success in predicting the behavior of turbulent single-phase flows, the development of LES for turbulent gas-solid flows is still in its infancy. This contribution aims at constructing a model to describe the four-way coupling of particles in an LES framework, by considering the role particles play in the transport of turbulent kinetic energy across the scales. Firstly, a stochastic model reconstructing the sub-grid velocities for the particle tracking is presented. Secondly, to handle particle-particle interactions, for which most models involve a deterministic treatment of the collisions, we introduce a stochastic model for estimating the collision probability. All results are validated against fully resolved DNS-DPS simulations. The final goal of this contribution is to propose a global stochastic method adapted to two-phase LES simulation where the number of particles considered can be significantly increased. Financial support from PetroBras is gratefully acknowledged.

  3. Simulating biological processes: stochastic physics from whole cells to colonies

    Science.gov (United States)

    Earnest, Tyler M.; Cole, John A.; Luthey-Schulten, Zaida

    2018-05-01

    The last few decades have revealed the living cell to be a crowded spatially heterogeneous space teeming with biomolecules whose concentrations and activities are governed by intrinsically random forces. It is from this randomness, however, that a vast array of precisely timed and intricately coordinated biological functions emerge that give rise to the complex forms and behaviors we see in the biosphere around us. This seemingly paradoxical nature of life has drawn the interest of an increasing number of physicists, and recent years have seen stochastic modeling grow into a major subdiscipline within biological physics. Here we review some of the major advances that have shaped our understanding of stochasticity in biology. We begin with some historical context, outlining a string of important experimental results that motivated the development of stochastic modeling. We then embark upon a fairly rigorous treatment of the simulation methods that are currently available for the treatment of stochastic biological models, with an eye toward comparing and contrasting their realms of applicability, and the care that must be taken when parameterizing them. Following that, we describe how stochasticity impacts several key biological functions, including transcription, translation, ribosome biogenesis, chromosome replication, and metabolism, before considering how the functions may be coupled into a comprehensive model of a ‘minimal cell’. Finally, we close with our expectation for the future of the field, focusing on how mesoscopic stochastic methods may be augmented with atomic-scale molecular modeling approaches in order to understand life across a range of length and time scales.

  4. Stochastic simulation of regional groundwater flow in Beishan area

    International Nuclear Information System (INIS)

    Dong Yanhui; Li Guomin

    2010-01-01

    Because of the hydrogeological complexity, traditional assumptions about aquifer characteristics are not appropriate for the groundwater system in the Beishan area. Uncertainty analysis of groundwater models is needed to examine the hydrologic effects of spatial heterogeneity. In this study, a fast Fourier transform spectral method (FFTS) was used to generate the random horizontal permeability parameters. Depth decay and vertical anisotropy of hydraulic conductivity were included to build random permeability models. Using high-performance computers, hundreds of groundwater flow models were simulated. Through stochastic simulations, the effect of heterogeneity on the groundwater flow pattern was analyzed. (authors)

  5. Gompertzian stochastic model with delay effect to cervical cancer growth

    International Nuclear Information System (INIS)

    Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti; Bahar, Arifah

    2015-01-01

    In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme for solving the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated results with clinical data on cervical cancer growth. Low values of the Mean-Square Error (MSE) of the Gompertzian stochastic model with delay effect indicate good fits.

  6. Gompertzian stochastic model with delay effect to cervical cancer growth

    Energy Technology Data Exchange (ETDEWEB)

    Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti [Faculty of Industrial Sciences and Technology, Universiti Malaysia Pahang, Lebuhraya Tun Razak, 26300 Gambang, Pahang (Malaysia); Bahar, Arifah [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor and UTM Centre for Industrial and Applied Mathematics (UTM-CIAM), Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor (Malaysia)

    2015-02-03

    In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme for solving the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated results with clinical data on cervical cancer growth. Low values of the Mean-Square Error (MSE) of the Gompertzian stochastic model with delay effect indicate good fits.
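
    The Milstein update used in the record above can be sketched for a common Gompertzian SDE parameterisation, dX = X(a - b ln X) dt + sigma X dW; this exact form and all parameter values are assumptions for illustration, not the fitted cervical cancer model:

```python
import numpy as np

def milstein_gompertz(x0, a, b, sigma, t_end, n_steps, rng=None):
    """Milstein scheme for dX = X*(a - b*ln X) dt + sigma*X dW.
    With diffusion g(x) = sigma*x, the Milstein correction term is
    0.5 * g * g' * (dW**2 - dt) = 0.5 * sigma**2 * x * (dW**2 - dt)."""
    rng = np.random.default_rng() if rng is None else rng
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))
        drift = x[n] * (a - b * np.log(x[n]))
        diff = sigma * x[n]
        x[n + 1] = (x[n] + drift * dt + diff * dw
                    + 0.5 * sigma * diff * (dw * dw - dt))
    return x

path = milstein_gompertz(x0=0.1, a=1.0, b=0.5, sigma=0.1,
                         t_end=10.0, n_steps=1000,
                         rng=np.random.default_rng(42))
```

    The Milstein correction raises the strong convergence order from 0.5 (Euler-Maruyama) to 1.0, which matters when comparing simulated trajectories against clinical growth data point by point.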

  7. Long-time analytic approximation of large stochastic oscillators: Simulation, analysis and inference.

    Directory of Open Access Journals (Sweden)

    Giorgos Minas

    2017-07-01

    Full Text Available In order to analyse large complex stochastic dynamical models such as those studied in systems biology there is currently a great need for both analytical tools and also algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called phase-corrected LNA (pcLNA), overcomes the main limitations of the standard Linear Noise Approximation (LNA) to remain uniformly accurate for long times, while maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate as, but much faster than, leaping algorithms and algorithms for the integration of diffusion equations. Stochastic versions of published models of the circadian clock and the NF-κB system are used to illustrate our results.

  8. Relationships of efficiency to reproductive disorders in Danish milk production: a stochastic frontier analysis.

    Science.gov (United States)

    Lawson, L G; Bruun, J; Coelli, T; Agger, J F; Lund, M

    2004-01-01

    Relationships between various reproductive disorders and the milk production performance of Danish dairy farms were investigated. A stochastic frontier production function was estimated using data collected in 1998 from 514 Danish dairy farms. Measures of farm-level milk production efficiency relative to this production frontier were obtained, and relationships between milk production efficiency and the incidence risk of reproductive disorders were examined. There were moderate positive relationships between milk production efficiency and retained placenta, induction of estrus, uterine infections, ovarian cysts, and induction of birth. Inclusion of reproductive management variables made these moderate relationships disappear, but the directions of the coefficients for almost all those variables remained the same. Dystocia showed a weak negative correlation with milk production efficiency. Farms that were mainly managed by young farmers had the highest average efficiency scores. The estimated milk losses due to inefficiency averaged 1142, 488, and 256 kg of energy-corrected milk per cow, respectively, for low-, medium-, and high-efficiency herds. It is concluded that the availability of younger cows, which enabled farmers to replace cows with reproductive disorders, contributed to high cow productivity in efficient farms. Thus, a high replacement rate more than compensates for the possible negative effect of reproductive disorders. The use of frontier production and efficiency/inefficiency functions to analyze herd data may enable dairy advisors to identify inefficient herds and to simulate the effect of alternative management procedures on an individual herd's efficiency.

  9. Development of random geometry capability in RMC code for stochastic media analysis

    International Nuclear Information System (INIS)

    Liu, Shichang; She, Ding; Liang, Jin-gang; Wang, Kan

    2015-01-01

    Highlights: • Monte Carlo method plays an important role in modeling of particle transport in random media. • Three stochastic geometry modeling methods have been developed in RMC. • The stochastic effects of the randomly dispersed fuel particles are analyzed. • Investigation of accuracy and efficiency of three methods has been carried out. • All the methods are effective, and explicit modeling is regarded as the best choice. - Abstract: Simulation of particle transport in random media poses a challenge for traditional deterministic transport methods, due to the significant effects of spatial and energy self-shielding. The Monte Carlo method plays an important role in the accurate simulation of random media, owing to its flexible geometry modeling and the use of continuous-energy nuclear cross sections. Three stochastic geometry modeling methods, including the Random Lattice Method, Chord Length Sampling, and an explicit modeling approach with a mesh acceleration technique, have been developed in RMC to simulate particle transport in dispersed fuels. Verification of the accuracy and investigation of the calculation efficiency have been carried out. The stochastic effects of the randomly dispersed fuel particles are also analyzed. The results show that all three stochastic geometry modeling methods can account for the effects of the random dispersion of fuel particles, and the explicit modeling method can be regarded as the best choice.

  10. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    Science.gov (United States)

    de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-12-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. This method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age structure of the population when attempting to couple both descriptions. We exploit our coarse-grained model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently: upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of

  11. The theory of hybrid stochastic algorithms

    International Nuclear Information System (INIS)

    Duane, S.; Kogut, J.B.

    1986-01-01

    The theory of hybrid stochastic algorithms is developed. A generalized Fokker-Planck equation is derived and is used to prove that the correct equilibrium distribution is generated by the algorithm. Systematic errors following from the discrete time-step used in the numerical implementation of the scheme are computed. Hybrid algorithms which simulate lattice gauge theory with dynamical fermions are presented. They are optimized in computer simulations and their systematic errors and efficiencies are studied. (orig.)

  12. Database of Nucleon-Nucleon Scattering Cross Sections by Stochastic Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — A database of nucleon-nucleon elastic differential and total cross sections will be generated by stochastic simulation of the quantum Liouville equation in the...

  13. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.

  14. Stochastic simulations for the time evolution of systems which obey generalized statistics: fractional exclusion statistics and Gentile's statistics

    International Nuclear Information System (INIS)

    Nemnes, G A; Anghel, D V

    2010-01-01

    We present a stochastic method for the simulation of the time evolution in systems which obey generalized statistics, namely fractional exclusion statistics and Gentile's statistics. The transition rates are derived in the framework of canonical ensembles. This approach introduces a tool for describing interacting fermionic and bosonic systems out of equilibrium as ideal FES systems, in a computationally efficient manner. The two types of statistics are analyzed comparatively, indicating their intrinsic thermodynamic differences and revealing key aspects related to the species size.

  15. Stochastic Wake Modelling Based on POD Analysis

    Directory of Open Access Journals (Sweden)

    David Bastine

    2018-03-01

    Full Text Available In this work, large eddy simulation data is analysed to investigate a new stochastic modeling approach for the wake of a wind turbine. The data is generated by the large eddy simulation (LES) model PALM combined with an actuator disk with rotation representing the turbine. After applying a proper orthogonal decomposition (POD), three different stochastic models for the weighting coefficients of the POD modes are deduced, resulting in three different wake models. Their performance is investigated mainly on the basis of aeroelastic simulations of a wind turbine in the wake. Three different load cases and their statistical characteristics are compared for the original LES, truncated PODs, and the stochastic wake models including different numbers of POD modes. It is shown that approximately six POD modes are enough to capture the load dynamics on large temporal scales. Modeling the weighting coefficients as independent stochastic processes leads to similar load characteristics as in the case of the truncated POD. To complete this simplified wake description, we show evidence that the small-scale dynamics can be captured by adding a homogeneous turbulent field to our model. In this way, we present a procedure to derive stochastic wake models from costly computational fluid dynamics (CFD) calculations or elaborate experimental investigations. These numerically efficient models provide the added value of enabling long-term studies. Depending on the aspects of interest, different minimal models may be obtained.
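
    The POD step described above reduces, for snapshot data, to a singular value decomposition; a minimal sketch on synthetic data (the field, array sizes, and mode count are illustrative, not the PALM/LES setup):

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """POD via SVD: `snapshots` is (n_points, n_times). Returns the
    temporal mean, the leading orthonormal spatial modes, and their
    time-dependent weighting coefficients a_k(t)."""
    mean = snapshots.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = u[:, :n_modes]
    coeffs = s[:n_modes, None] * vt[:n_modes]   # a_k(t) = s_k * v_k(t)
    return mean, modes, coeffs

# synthetic rank-3 "wake" field: 100 points, 200 snapshots
rng = np.random.default_rng(1)
field = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 200))
mean, modes, coeffs = pod_modes(field, n_modes=3)
recon = mean + modes @ coeffs
```

    A stochastic wake model of the kind studied above then replaces the deterministic coefficients a_k(t) with surrogate stochastic processes fitted to their statistics.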

  16. Efficient stochastic EMC/EMI analysis using HDMR-generated surrogate models

    KAUST Repository

    Yücel, Abdulkadir C.

    2011-08-01

    Stochastic methods have been used extensively to quantify effects due to uncertainty in system parameters (e.g. material, geometrical, and electrical constants) and/or excitation on observables pertinent to electromagnetic compatibility and interference (EMC/EMI) analysis (e.g. voltages across mission-critical circuit elements) [1]. In recent years, stochastic collocation (SC) methods, especially those leveraging generalized polynomial chaos (gPC) expansions, have received significant attention [2, 3]. SC-gPC methods probe surrogate models (i.e. compact polynomial input-output representations) to statistically characterize observables. They are nonintrusive, that is, they use existing deterministic simulators, and often cost only a fraction of direct Monte Carlo (MC) methods. Unfortunately, SC-gPC-generated surrogate models often lack accuracy (i) when the number of uncertain/random system variables is large and/or (ii) when the observables exhibit rapid variations. © 2011 IEEE.

  17. Spatially explicit and stochastic simulation of forest landscape fire disturbance and succession

    Science.gov (United States)

    Hong S. He; David J. Mladenoff

    1999-01-01

    Understanding disturbance and recovery of forest landscapes is a challenge because of complex interactions over a range of temporal and spatial scales. Landscape simulation models offer an approach to studying such systems at broad scales. Fire can be simulated spatially using mechanistic or stochastic approaches. We describe the fire module in a spatially explicit,...

  18. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, that is, the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in the time-invariant situation.

  19. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Directory of Open Access Journals (Sweden)

    Md Zobaer Hasan

    Full Text Available The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, that is, the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in the time-invariant situation.

  20. Efficient AM Algorithms for Stochastic ML Estimation of DOA

    Directory of Open Access Journals (Sweden)

    Haihua Chen

    2016-01-01

    Full Text Available The estimation of the direction-of-arrival (DOA) of signals is a basic and important problem in sensor array signal processing. Many algorithms have been proposed to solve this problem, among which Stochastic Maximum Likelihood (SML) has attracted particular attention because of its high DOA estimation accuracy. However, SML estimation generally involves a multidimensional nonlinear optimization problem, so its computational complexity is rather high. This paper addresses the issue of reducing the computational complexity of SML estimation of DOA based on the Alternating Minimization (AM) algorithm. We make the following two contributions. First, using matrix transformations and properties of spatial projection, we propose an efficient AM (EAM) algorithm by dividing the SML criterion into two components, one of which depends on a single variable parameter while the other does not. Second, when the array is a uniform linear array, we derive the irreducible form of the EAM criterion (IAM) using polynomial forms. Simulation results show that both EAM and IAM greatly reduce the computational complexity of SML estimation, with IAM performing best. Another advantage of IAM is that it avoids the numerical instability problem which may occur in the AM and EAM algorithms when more than one parameter converges to an identical value.
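The "properties of spatial projection" invoked here reduce, in the single-source case, to orthogonal projection of a snapshot onto the span of a steering vector. A minimal illustrative sketch for a uniform linear array (the geometry, angle, and perturbation below are hypothetical, and this is not the EAM/IAM algorithm itself):

```python
import cmath
import math

def steering_vector(theta_deg, n_sensors, spacing=0.5):
    """ULA steering vector; element spacing given in wavelengths."""
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(-2j * math.pi * spacing * k * s) for k in range(n_sensors)]

def project(a, x):
    """Orthogonal projection of snapshot x onto span{a}: a * (a^H x) / (a^H a)."""
    num = sum(ai.conjugate() * xi for ai, xi in zip(a, x))
    den = sum(abs(ai) ** 2 for ai in a)
    c = num / den
    return [c * ai for ai in a]

a = steering_vector(20.0, 8)
# hypothetical snapshot: the steering vector plus a small deterministic perturbation
x = [ai + 0.05 * (k % 3 - 1) for k, ai in enumerate(a)]
px = project(a, x)
residual = math.sqrt(sum(abs(xi - pi) ** 2 for xi, pi in zip(x, px)))
print(round(residual, 4))  # only the part of the perturbation orthogonal to a remains
```

DOA criteria such as SML score candidate angles through projections of this kind onto the steering subspace; the multi-source case replaces span{a} with the column space of a steering matrix.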

  1. Fat versus Thin Threading Approach on GPUs: Application to Stochastic Simulation of Chemical Reactions

    KAUST Repository

    Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K.

    2012-01-01

    We explore two different threading approaches on a graphics processing unit (GPU) exploiting two different characteristics of the current GPU architecture. The fat thread approach tries to minimize data access time by relying on shared memory and registers potentially sacrificing parallelism. The thin thread approach maximizes parallelism and tries to hide access latencies. We apply these two approaches to the parallel stochastic simulation of chemical reaction systems using the stochastic simulation algorithm (SSA) by Gillespie [14]. In these cases, the proposed thin thread approach shows comparable performance while eliminating the limitation of the reaction system's size. © 2006 IEEE.
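The SSA by Gillespie referenced here (the direct method) draws an exponential waiting time from the total propensity and then picks the next reaction in proportion to its propensity. A minimal single-trajectory CPU sketch, using an illustrative birth-death system rather than the reaction sets benchmarked in the paper:

```python
import random

random.seed(0)

def gillespie_ssa(x, rates, stoich, t_end):
    """Gillespie's direct method for a well-stirred reaction system.
    x: species counts, rates: propensity functions, stoich: state-change vectors."""
    t, traj = 0.0, [(0.0, list(x))]
    while t < t_end:
        props = [r(x) for r in rates]
        a0 = sum(props)
        if a0 == 0.0:                      # no reaction can fire
            break
        t += random.expovariate(a0)        # time to the next reaction event
        u, cum, j = random.random() * a0, 0.0, 0
        for j, p in enumerate(props):      # choose reaction j with prob p/a0
            cum += p
            if u < cum:
                break
        x = [xi + d for xi, d in zip(x, stoich[j])]
        traj.append((t, list(x)))
    return traj

# hypothetical birth-death process: 0 -> A (rate 10), A -> 0 (rate 0.1 * A)
traj = gillespie_ssa([0], [lambda x: 10.0, lambda x: 0.1 * x[0]],
                     [[+1], [-1]], t_end=200.0)
print(traj[-1][1])  # counts fluctuate around the steady state 10 / 0.1 = 100
```

The GPU question studied in the record is how to map many independent runs of exactly this loop onto threads: fat threads keep the state in registers/shared memory, thin threads maximize the number of concurrent trajectories.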

  2. Fat versus Thin Threading Approach on GPUs: Application to Stochastic Simulation of Chemical Reactions

    KAUST Repository

    Klingbeil, Guido

    2012-02-01

    We explore two different threading approaches on a graphics processing unit (GPU) exploiting two different characteristics of the current GPU architecture. The fat thread approach tries to minimize data access time by relying on shared memory and registers potentially sacrificing parallelism. The thin thread approach maximizes parallelism and tries to hide access latencies. We apply these two approaches to the parallel stochastic simulation of chemical reaction systems using the stochastic simulation algorithm (SSA) by Gillespie [14]. In these cases, the proposed thin thread approach shows comparable performance while eliminating the limitation of the reaction system's size. © 2006 IEEE.

  3. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    Science.gov (United States)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
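The core idea of representing model output as Hermite polynomials of a standard normal variable can be shown in one dimension. The sketch below stands in for the hydrological model with the textbook case Y = exp(ξ), whose expansion coefficients are known in closed form; the mean and variance of the output then follow directly from the coefficients:

```python
import math

def hermite(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the three-term recurrence
    He_{k+1}(x) = x*He_k(x) - k*He_{k-1}(x)."""
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

# Stand-in model output Y = exp(xi), xi ~ N(0, 1); its Hermite-chaos
# coefficients are known analytically: c_k = sqrt(e) / k!.
K = 9
coeffs = [math.sqrt(math.e) / math.factorial(k) for k in range(K)]

def surrogate(xi):
    """Truncated chaos expansion evaluated at a realization of xi."""
    return sum(c * hermite(k, xi) for k, c in enumerate(coeffs))

# Moments come straight from the coefficients:
# mean = c_0, variance = sum_{k>=1} c_k^2 * k!
pce_mean = coeffs[0]
pce_var = sum(c * c * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(round(pce_mean, 4), round(pce_var, 4))      # exact values: e^0.5, e^2 - e
print(round(surrogate(0.5), 4), round(math.exp(0.5), 4))
```

In the paper's setting the coefficients are not analytic; probabilistic collocation fits them from a small number of model runs, after which the cheap surrogate replaces the hydrological model.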

  4. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

    The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over … a fixed time interval. Rate optimal and efficient estimators are obtained for a one-dimensional diffusion parameter. Stable convergence in distribution is used to achieve a practically applicable Gaussian limit distribution for suitably normalised estimators. In a simulation example, the limit distributions … multidimensional parameter. Conditions for rate optimality and efficiency of estimators of drift-jump and diffusion parameters are given in some special cases. These conditions are found to extend the pre-existing conditions applicable to continuous diffusions, and impose much stronger requirements on the estimating …

  5. Efficient decomposition and linearization methods for the stochastic transportation problem

    International Nuclear Information System (INIS)

    Holmberg, K.

    1993-01-01

    The stochastic transportation problem can be formulated as a convex transportation problem with nonlinear objective function and linear constraints. We compare several different methods based on decomposition techniques and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition and the more well known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large scale problems. (authors) (27 refs.)
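The linearization idea behind the Frank-Wolfe method tested here is to minimize a first-order model of the objective over the feasible polytope and step toward the resulting vertex. A generic sketch on a simplex-constrained quadratic (not the transportation polytope or objective of the paper):

```python
def frank_wolfe(target, iters=5000):
    """Minimize f(x) = sum((x_i - t_i)^2) over the probability simplex.
    Each step minimizes the linearized objective over the simplex, which
    simply picks the vertex e_j with the most negative gradient component."""
    n = len(target)
    x = [1.0 / n] * n                              # feasible starting point
    for k in range(iters):
        grad = [2.0 * (xi - ti) for xi, ti in zip(x, target)]
        j = min(range(n), key=lambda i: grad[i])   # linear minimization oracle
        gamma = 2.0 / (k + 2.0)                    # standard diminishing step size
        x = [(1.0 - gamma) * xi for xi in x]       # convex combination keeps feasibility
        x[j] += gamma
    return x

target = [0.5, 0.3, 0.2]                           # interior point, so it is the minimizer
x = frank_wolfe(target)
print([round(v, 3) for v in x])
```

For the transportation problem the linear minimization oracle becomes a linear transportation subproblem instead of a vertex lookup, which is exactly why Frank-Wolfe-type linearization is attractive there.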

  6. Stochastic and simulation models of maritime intercept operations capabilities

    OpenAIRE

    Sato, Hiroyuki

    2005-01-01

    The research formulates and exercises stochastic and simulation models to assess the Maritime Intercept Operations (MIO) capabilities. The models focus on the surveillance operations of the Maritime Patrol Aircraft (MPA). The analysis using the models estimates the probability with which a terrorist vessel (Red) is detected, correctly classified, and escorted for intensive investigation and neutralization before it leaves an area of interest (AOI). The difficulty of obtaining adequate int...

  7. INCLUDING RISK IN ECONOMIC FEASIBILITY ANALYSIS:A STOCHASTIC SIMULATION MODEL FOR BLUEBERRY INVESTMENT DECISIONS IN CHILE

    Directory of Open Access Journals (Sweden)

    GERMÁN LOBOS

    2015-12-01

    Full Text Available ABSTRACT The traditional net present value (NPV) method for analyzing the economic profitability of an investment (based on a deterministic approach) does not adequately represent the implicit risk associated with different but correlated input variables. Using a stochastic simulation approach for evaluating the profitability of blueberry (Vaccinium corymbosum L.) production in Chile, the objective of this study is to illustrate the complexity of including risk in economic feasibility analysis when the project is subject to several correlated risks. The results of the simulation analysis suggest that not including the intratemporal correlation between input variables underestimates the risk associated with investment decisions. The methodological contribution of this study illustrates the complexity of the interrelationships between uncertain variables and their impact on the advisability of carrying out this type of business in Chile. The steps for the analysis of economic viability were: First, adjusted probability distributions for the stochastic input variables (SIV) were simulated and validated. Second, the random values of the SIV were used to calculate random values of variables such as production, revenues, costs, depreciation, taxes and net cash flows. Third, the complete stochastic model was simulated with 10,000 iterations using random values for the SIV. This result gave information to estimate the probability distributions of the stochastic output variables (SOV) such as the net present value, internal rate of return, value at risk, average cost of production, contribution margin and return on capital. Fourth, the complete stochastic model simulation results were used to analyze alternative scenarios and provide the results to decision makers in the form of probabilities, probability distributions, and probabilistic forecasts for the SOV. The main conclusion is that this project is a profitable alternative investment in fruit trees in
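The simulation steps described in this record can be sketched as a small Monte Carlo model. All distributions, prices, and cost figures below are hypothetical placeholders, not the study's data; the intratemporal yield-price correlation the authors stress is induced here by a shared shock:

```python
import random
import statistics

random.seed(7)

def npv(cash_flows, rate):
    """Net present value of cash flows at t = 0, 1, 2, ..."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def simulate_npv(n_iter=10_000):
    """Draw correlated stochastic inputs, build cash flows, collect NPVs."""
    results = []
    for _ in range(n_iter):
        shock = random.gauss(0.0, 1.0)
        yield_t = 8.0 + 1.0 * shock                  # tonnes/ha (hypothetical)
        # price moves against yield via the shared shock (negative correlation)
        price = 4000.0 - 150.0 * shock + random.gauss(0.0, 100.0)
        cost = random.gauss(18_000.0, 1_500.0)
        annual = yield_t * price - cost
        flows = [-60_000.0] + [annual] * 10          # outlay plus 10 project years
        results.append(npv(flows, 0.08))
    return results

sims = simulate_npv()
mean_npv = statistics.mean(sims)
p_loss = sum(v < 0 for v in sims) / len(sims)        # probability of a negative NPV
print(round(mean_npv), round(p_loss, 3))
```

Dropping the shared shock (sampling yield and price independently) visibly shrinks the spread of the NPV distribution, which is the paper's point about underestimating risk.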

  8. Technical Efficiency in the Chilean Agribusiness Sector - a Stochastic Meta-Frontier Approach

    OpenAIRE

    Larkner, Sebastian; Brenes Muñoz, Thelma; Aedo, Edinson Rivera; Brümmer, Bernhard

    2013-01-01

    The Chilean economy is strongly export-oriented, which is also true for the Chilean agribusiness industry. This paper investigates the technical efficiency of the Chilean food processing industry between 2001 and 2007. We use a dataset of 2,471 firms in the food processing industry. The observations are from the ‘Annual National Industrial Survey’. A stochastic meta-frontier approach is used in order to analyse the drivers of technical efficiency. We include variables capturing the effec...

  9. Stochastic modelling to evaluate the economic efficiency of treatment of chronic subclinical mastitis

    NARCIS (Netherlands)

    Steeneveld, W.; Hogeveen, H.; Borne, van den B.H.P.; Swinkels, J.M.

    2006-01-01

    Treatment of subclinical mastitis is traditionally not common practice. However, some veterinarians regard treatment of some types of subclinical mastitis as effective. The goal of this research was to develop a stochastic Monte Carlo simulation model to support decisions around treatment of

  10. Non-intrusive uncertainty quantification of computational fluid dynamics simulations: notes on the accuracy and efficiency

    Science.gov (United States)

    Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher

    2017-11-01

    Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
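The contrast this record draws between deterministic integration rules and Monte Carlo can be seen already in one dimension: a 3-point Gauss-Hermite rule versus plain Monte Carlo for the mean of exp(ξ) with ξ standard normal (a stand-in integrand, not one of the paper's flow problems):

```python
import math
import random

random.seed(42)
f = math.exp                         # stand-in "model"; input xi ~ N(0, 1)

# 3-point Gauss-Hermite rule for a standard normal weight:
# nodes 0, +-sqrt(3) with weights 2/3, 1/6, 1/6.
nodes = [0.0, math.sqrt(3.0), -math.sqrt(3.0)]
weights = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]
quad = sum(w * f(x) for w, x in zip(weights, nodes))      # 3 model evaluations

# plain Monte Carlo estimate of the same expectation
n = 100_000
mc = sum(f(random.gauss(0.0, 1.0)) for _ in range(n)) / n  # 100,000 evaluations

exact = math.exp(0.5)               # E[exp(xi)] for xi ~ N(0, 1)
print(round(abs(quad - exact), 4), round(abs(mc - exact), 4))
```

With smooth dependence on few random inputs the deterministic rule wins by orders of magnitude in evaluations; Monte Carlo (and its multi-level variants discussed in the record) regains the advantage as the number of stochastic inputs grows or smoothness is lost.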

  11. Stochastic simulation of grain growth during continuous casting

    International Nuclear Information System (INIS)

    Ramirez, A.; Carrillo, F.; Gonzalez, J.L.; Lopez, S.

    2006-01-01

    The evolution of microstructure is a very important topic in material science engineering because the solidification conditions of steel billets during continuous casting process affect directly the properties of the final products. In this paper a mathematical model is described in order to simulate the dendritic growth using data of real casting operations; here a combination of deterministic and stochastic methods was used as a function of the solidification time of every node in order to create a reconstruction about the morphology of cast structures.

  12. Stochastic simulation of grain growth during continuous casting

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez, A. [Department of Aerounatical Engineering, S.E.P.I., E.S.I.M.E., IPN, Instituto Politecnico Nacional (Unidad Profesional Ticoman), Av. Ticoman 600, Col. Ticoman, C.P.07340 (Mexico)]. E-mail: adalop123@mailbanamex.com; Carrillo, F. [Department of Processing Materials, CICATA-IPN Unidad Altamira Tamps (Mexico); Gonzalez, J.L. [Department of Metallurgy and Materials Engineering, E.S.I.Q.I.E.-IPN (Mexico); Lopez, S. [Department of Molecular Engineering of I.M.P., AP 14-805 (Mexico)

    2006-04-15

    The evolution of microstructure is a very important topic in material science engineering because the solidification conditions of steel billets during continuous casting process affect directly the properties of the final products. In this paper a mathematical model is described in order to simulate the dendritic growth using data of real casting operations; here a combination of deterministic and stochastic methods was used as a function of the solidification time of every node in order to create a reconstruction about the morphology of cast structures.

  13. The potential for energy efficiency gains in the Canadian commercial building sector: A stochastic frontier study

    International Nuclear Information System (INIS)

    Buck, J.; Young, D.

    2007-01-01

    The achievement of energy efficiency in commercial buildings is a function of the activities undertaken, the technology in place, and the extent to which those technologies are used efficiently. We study the factors that affect efficient energy use in the Canadian commercial sector by applying a stochastic frontier approach to a cross-section of Canadian commercial buildings included in the Commercial and Institutional Building Energy Use Survey (CIBEUS). Structural and climate-control features of the buildings as well as climatic conditions are assumed to determine the location of the frontier, while management-related variables including such factors as ownership type and activities govern whether or not the maximally attainable efficiency along the frontier is achieved. Our results indicate that although, on average, buildings appear to be fairly efficient, certain types of operations are more likely than others to exhibit energy efficiencies that are significantly worse than average. These results, along with those related to the effects of physical characteristics on the stochastic efficiency frontier, suggest that there is scope for focused policy initiatives to increase energy efficiency in this sector.

  14. ENVIRONMENT: a computational platform to stochastically simulate reacting and self-reproducing lipid compartments

    Science.gov (United States)

    Mavelli, Fabio; Ruiz-Mirazo, Kepa

    2010-09-01

    'ENVIRONMENT' is a computational platform that has been developed in the last few years with the aim to simulate stochastically the dynamics and stability of chemically reacting protocellular systems. Here we present and describe some of its main features, showing how the stochastic kinetics approach can be applied to study the time evolution of reaction networks in heterogeneous conditions, particularly when supramolecular lipid structures (micelles, vesicles, etc) coexist with aqueous domains. These conditions are of special relevance to understand the origins of cellular, self-reproducing compartments, in the context of prebiotic chemistry and evolution. We contrast our simulation results with real lab experiments, with the aim to bring together theoretical and experimental research on protocell and minimal artificial cell systems.

  15. Economic Reforms and Cost Efficiency of Coffee Farmers in Central Kenya: A Stochastic-Translog Approach

    NARCIS (Netherlands)

    Karanja, A.M.; Kuyvenhoven, A.; Moll, H.A.J.

    2007-01-01

    Work reported in this paper analyses the cost efficiency levels of small-holder coffee farmers in four districts in Central Province, Kenya. The level of efficiency is analysed using a stochastic cost frontier model based on household cross-sectional data collected in 1999 and 2000. The 200 surveyed

  16. Bayesian inference for hybrid discrete-continuous stochastic kinetic models

    International Nuclear Information System (INIS)

    Sherlock, Chris; Golightly, Andrew; Gillespie, Colin S

    2014-01-01

    We consider the problem of efficiently performing simulation and inference for stochastic kinetic models. Whilst it is possible to work directly with the resulting Markov jump process (MJP), computational cost can be prohibitive for networks of realistic size and complexity. In this paper, we consider an inference scheme based on a novel hybrid simulator that classifies reactions as either ‘fast’ or ‘slow’ with fast reactions evolving as a continuous Markov process whilst the remaining slow reaction occurrences are modelled through a MJP with time-dependent hazards. A linear noise approximation (LNA) of fast reaction dynamics is employed and slow reaction events are captured by exploiting the ability to solve the stochastic differential equation driving the LNA. This simulation procedure is used as a proposal mechanism inside a particle MCMC scheme, thus allowing Bayesian inference for the model parameters. We apply the scheme to a simple application and compare the output with an existing hybrid approach and also a scheme for performing inference for the underlying discrete stochastic model. (paper)

  17. A Proposed Stochastic Finite Difference Approach Based on Homogenous Chaos Expansion

    Directory of Open Access Journals (Sweden)

    O. H. Galal

    2013-01-01

    Full Text Available This paper proposes a stochastic finite difference approach based on homogeneous chaos expansion (SFDHC). The approach can handle time-dependent nonlinear as well as linear systems with deterministic or stochastic initial and boundary conditions. In this approach, the included stochastic parameters are modeled as second-order stochastic processes and are expanded using the Karhunen-Loève expansion, while the response function is approximated using homogeneous chaos expansion. Galerkin projection is used to convert the original stochastic partial differential equation (PDE) into a set of coupled deterministic partial differential equations, which are then solved using the finite difference method. Two well-known equations were used to validate the efficiency of the proposed method: the linear diffusion equation with a stochastic parameter, and the nonlinear Burgers' equation with a stochastic parameter and stochastic initial and boundary conditions. In both examples, the probability distribution function of the response showed close agreement with the results obtained from Monte Carlo simulation, at an optimized computational cost.
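The Karhunen-Loève expansion step can be illustrated with the one process whose eigenpairs are available in closed form, Brownian motion on [0, 1]: a truncated expansion should reproduce the pointwise variance Var W(t) = t. This is a generic textbook sketch, not the second-order processes of the paper:

```python
import math
import random

random.seed(11)

def kl_brownian(t, xis):
    """Truncated Karhunen-Loeve expansion of Brownian motion on [0, 1]:
    W(t) ~ sum over k >= 1 of xi_k * sqrt(2) * sin((k - 1/2) pi t) / ((k - 1/2) pi).
    The enumerate index below starts at 0, hence the (k + 0.5) factors."""
    return sum(x * math.sqrt(2.0) * math.sin((k + 0.5) * math.pi * t)
               / ((k + 0.5) * math.pi)
               for k, x in enumerate(xis))

K, N, t = 50, 20_000, 0.7          # truncation order, sample count, evaluation point
samples = [kl_brownian(t, [random.gauss(0.0, 1.0) for _ in range(K)])
           for _ in range(N)]
var = sum(v * v for v in samples) / N
print(round(var, 3))               # should approach Var W(t) = t = 0.7
```

In the SFDHC setting the same machinery represents a stochastic coefficient field with a handful of independent Gaussian variables, which is what makes the subsequent Galerkin projection onto a finite chaos basis tractable.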

  18. Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals

    Science.gov (United States)

    Yu, Lianchun; Liu, Liwei

    2014-03-01

    The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics the AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the neuron population, as well as the number of ion channels in each neuron that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of the ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of the neural systems when energy use is constrained.

  19. Stabilizing simulations of complex stochastic representations for quantum dynamical systems

    Energy Technology Data Exchange (ETDEWEB)

    Perret, C; Petersen, W P, E-mail: wpp@math.ethz.ch [Seminar for Applied Mathematics, ETH, Zurich (Switzerland)

    2011-03-04

    Path integral representations of quantum dynamics can often be formulated as stochastic differential equations (SDEs). In a series of papers, Corney and Drummond (2004 Phys. Rev. Lett. 93 260401), Deuar and Drummond (2001 Comput. Phys. Commun. 142 442-5), Drummond and Gardiner (1980 J. Phys. A: Math. Gen. 13 2353-68), Gardiner and Zoller (2004 Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics (Springer Series in Synergetics) 3rd edn (Berlin: Springer)) and Gilchrist et al (1997 Phys. Rev. A 55 3014-32) and their collaborators have derived SDEs from coherent states representations for density matrices. Computationally, these SDEs are attractive because they seem simple to simulate. They can be quite unstable, however. In this paper, we consider some of the instabilities and propose a few remedies. Particularly, because the variances of the simulated paths typically grow exponentially, the processes become de-localized in relatively short times. Hence, the issues of boundary conditions and stable integration methods become important. We use the Bose-Einstein Hamiltonian as an example. Our results reveal that it is possible to significantly extend integration times and show the periodic structure of certain functionals.

  20. XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations

    Science.gov (United States)

    Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.

    2013-01-01

    XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code. Program summary: Program title: XMDS2 Catalogue identifier: AENK_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 2 No. of lines in distributed program, including test data, etc.: 872490 No. of bytes in distributed program, including test data, etc.: 45522370 Distribution format: tar.gz Programming language: Python and C++. Computer: Any computer with a Unix-like system, a C++ compiler and Python. Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux. RAM: Problem dependent (roughly 50 bytes per grid point) Classification: 4.3, 6.5. External routines: The external libraries required are problem-dependent. Uses FFTW3 Fourier transforms (used only for FFT-based spectral methods), dSFMT random number generation (used only for stochastic problems), MPI message-passing interface (used only for distributed problems), HDF5, GNU Scientific Library (used only for Bessel-based spectral methods) and a BLAS implementation (used only for non-FFT-based spectral methods). Nature of problem: General coupled initial-value stochastic partial differential equations. Solution method: Spectral method

  1. THE IMPACT OF COMPETITIVENESS ON TRADE EFFICIENCY: THE ASIAN EXPERIENCE BY USING THE STOCHASTIC FRONTIER GRAVITY MODEL

    Directory of Open Access Journals (Sweden)

    Memduh Alper Demir

    2017-12-01

    Full Text Available The purpose of this study is to examine the bilateral machinery and transport equipment trade efficiency of fourteen selected Asian countries by applying the stochastic frontier gravity model. These selected countries have the top machinery and transport equipment trade volumes (both export and import) in Asia. The model we use includes variables such as income, market size of trading partners, distance, common culture, common border, common language and global economic crisis, similar to earlier studies using stochastic frontier gravity models. Our work, however, additionally includes a variable called the normalized revealed comparative advantage (NRCA) index. The NRCA index is comparable across commodity, country and time. Thus, the NRCA index is calculated and then included in our stochastic frontier gravity model to see the impact of competitiveness (here measured by the NRCA index) on the efficiency of trade.

  2. Stochastic conditional intensity processes

    DEFF Research Database (Denmark)

    Bauwens, Luc; Hautsch, Nikolaus

    2006-01-01

    In this article, we introduce the so-called stochastic conditional intensity (SCI) model by extending Russell’s (1999) autoregressive conditional intensity (ACI) model by a latent common dynamic factor that jointly drives the individual intensity components. We show by simulations that the proposed … model allows for a wide range of (cross-)autocorrelation structures in multivariate point processes. The model is estimated by simulated maximum likelihood (SML) using the efficient importance sampling (EIS) technique. By modeling price intensities based on NYSE trading, we provide significant evidence … for a joint latent factor and show that its inclusion allows for an improved and more parsimonious specification of the multivariate intensity process...

  3. Quantum simulation of a quantum stochastic walk

    Science.gov (United States)

    Govia, Luke C. G.; Taketani, Bruno G.; Schuhmacher, Peter K.; Wilhelm, Frank K.

    2017-03-01

    The study of quantum walks has been shown to have a wide range of applications in areas such as artificial intelligence, the study of biological processes, and quantum transport. The quantum stochastic walk (QSW), which allows for incoherent movement of the walker, and therefore, directionality, is a generalization on the fully coherent quantum walk. While a QSW can always be described in Lindblad formalism, this does not mean that it can be microscopically derived in the standard weak-coupling limit under the Born-Markov approximation. This restricts the class of QSWs that can be experimentally realized in a simple manner. To circumvent this restriction, we introduce a technique to simulate open system evolution on a fully coherent quantum computer, using a quantum trajectories style approach. We apply this technique to a broad class of QSWs, and show that they can be simulated with minimal experimental resources. Our work opens the path towards the experimental realization of QSWs on large graphs with existing quantum technologies.

  4. The efficiency of life insurance and family Takaful in Malaysia: Relative efficiency using the stochastic cost frontier analysis

    Science.gov (United States)

    Baharin, Roziana; Isa, Zaidi

    2013-04-01

    This paper focuses on the stochastic cost frontier analysis (SFA) approach, in an attempt to measure the relationship between efficiency and organizational structure for Takaful and insurance operators in Malaysia's dual financial system. This study applied a flexible cost functional form, i.e., the Fourier flexible functional form, to a sample of 19 firms observed between 2002 and 2010, employing the Battese and Coelli invariant efficiency model. The findings show that, on average, there is a significant difference in cost efficiency between the Takaful industry and the insurance industry. Takaful was found to have lower cost efficiency than conventional insurance, which shows that organizational form has an influence on efficiency. Overall, it was observed that the level of efficiency scores for both life insurance and family Takaful does not vary across time.

  5. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Lee, Jae Yong; KIm, Do Hyun; Kim, Jong Kyung [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-08-15

    Chord length sampling in Monte Carlo simulations is a method used to model spherical particles with a random sampling technique in stochastic media. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, from the local packing fraction results, the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
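At the core of chord length sampling is the geometry of random chords through a sphere. A minimal sketch of sampling chords of a single sphere under uniform parallel rays, whose mean length Cauchy's formula gives as 4R/3 (this illustrates only the basic sampling step, not the authors' boundary-effect correction):

```python
import math
import random

random.seed(5)
R = 1.0  # sphere radius (arbitrary units)

def random_chord(radius):
    """Chord of a sphere hit by a uniform parallel ray: the impact parameter b
    is uniform over the disk, i.e. density 2b/R^2 on [0, R]; the chord is then
    2 * sqrt(R^2 - b^2)."""
    b = radius * math.sqrt(random.random())
    return 2.0 * math.sqrt(radius * radius - b * b)

chords = [random_chord(R) for _ in range(200_000)]
mean_chord = sum(chords) / len(chords)
print(round(mean_chord, 3))  # Cauchy's mean-chord formula gives 4R/3, about 1.333
```

In an actual chord-length-sampling transport code, distances between sphere encounters are drawn from a matrix chord-length distribution tied to the packing fraction; the sphere-interior chord above is the piece that determines the in-particle path length.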

  6. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung; Noh, Jae Man

    2015-01-01

    The chord length sampling method in Monte Carlo simulations is used to model spherical particles in stochastic media by random sampling. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, from the local packing fraction results, the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.

  7. Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"

    Energy Technology Data Exchange (ETDEWEB)

    Petzold, Linda R.

    2012-10-25

    Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) Theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) Dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) Development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) Development of high-performance SSA algorithms.
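    The SSA referred to above can be stated in a few lines. The following minimal Python sketch implements Gillespie's direct method for a single decay reaction A → ∅ (a toy system of our choosing, not one of the project's benchmark models); the ensemble mean can be checked against the deterministic solution a0·e^(−kt).

    ```python
    import math
    import random

    def ssa_decay(a0, k, t_end, rng):
        """Gillespie's direct method (SSA) for the single reaction A -> 0
        with stochastic rate constant k: a minimal, illustrative sketch."""
        t, a = 0.0, a0
        while a > 0:
            propensity = k * a
            # Time to the next reaction event is exponential with rate
            # equal to the total propensity.
            t += -math.log(1.0 - rng.random()) / propensity
            if t > t_end:
                break
            a -= 1  # the only reaction channel fires: one A is consumed
        return a

    # The ensemble mean tracks the deterministic solution a0*exp(-k*t).
    rng = random.Random(1)
    a0, k, t_end = 100, 0.5, 1.0
    mean = sum(ssa_decay(a0, k, t_end, rng) for _ in range(5000)) / 5000
    print(mean, a0 * math.exp(-k * t_end))
    ```

    Because every reaction event is simulated individually, the cost grows with the total number of firings, which is precisely the inefficiency that tau-leaping and the project's multiscale methods address.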

  8. Modeling Group Perceptions Using Stochastic Simulation: Scaling Issues in the Multiplicative AHP

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; van den Honert, Robin; Salling, Kim Bang

    2016-01-01

    This paper proposes a new decision support approach for applying stochastic simulation to the multiplicative analytic hierarchy process (AHP) in order to deal with issues concerning the scale parameter. The paper suggests a new approach that captures the influence from the scale parameter by maki...

  9. Stochastic Processes and Queueing Theory used in Cloud Computer Performance Simulations

    Directory of Open Access Journals (Sweden)

    Florin-Catalin ENACHE

    2015-10-01

    The cloud business has grown exponentially over the last five years. Capacity managers need a practical way to simulate the random demands a cloud infrastructure may face, even though there are not many mathematical tools for simulating such demands. This paper presents an introduction to the most important stochastic processes and queueing theory concepts used for modeling computer performance. Moreover, it shows the cases where such concepts are applicable and where they are not, using clear programming examples of how to simulate a queue, and how to use and validate a simulation when there are no mathematical concepts to back it up.
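    A minimal example of the kind of queue simulation and validation the abstract describes (our own sketch, not the paper's code): the Lindley recursion for successive waiting times in an M/M/1 queue, checked against the analytic mean wait Wq = λ/(μ(μ − λ)).

    ```python
    import math
    import random

    def mm1_mean_wait(lam, mu, n_customers, rng):
        """Lindley recursion for an M/M/1 queue: the waiting time of
        customer n+1 is W_{n+1} = max(0, W_n + S_n - A_{n+1}), where S is
        the service time and A the next interarrival time."""
        w, total = 0.0, 0.0
        for _ in range(n_customers):
            total += w
            s = -math.log(1.0 - rng.random()) / mu    # exponential service time
            a = -math.log(1.0 - rng.random()) / lam   # exponential interarrival
            w = max(0.0, w + s - a)
        return total / n_customers

    # Validate against the analytic M/M/1 mean wait in queue.
    rng = random.Random(7)
    lam, mu = 0.5, 1.0
    sim = mm1_mean_wait(lam, mu, 200_000, rng)
    analytic = lam / (mu * (mu - lam))
    print(sim, analytic)
    ```

    When no closed-form result exists, the same simulation can still be sanity-checked on limiting cases such as this one, which is the validation strategy the paper advocates.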

  10. Natural tracer test simulation by stochastic particle tracking method

    International Nuclear Information System (INIS)

    Ackerer, P.; Mose, R.; Semra, K.

    1990-01-01

    Stochastic particle tracking methods are well adapted to 3D transport simulations where the discretization requirements of other methods usually cannot be satisfied. They do, however, need a very accurate approximation of the velocity field. The described code is based on the mixed hybrid finite element method (MHFEM) to calculate the piezometric and velocity fields. The random-walk method is used to simulate mass transport. The main advantages of the MHFEM over FD or FE are the simultaneous calculation of pressure and velocity, which are considered as unknowns; the possibility of interpolating velocities everywhere; and the continuity of the normal component of the velocity vector from one element to another. For these reasons, the MHFEM is well adapted to particle tracking methods. After a general description of the numerical methods, the model is used to simulate the observations made during the Twin Lake Tracer Test in 1983. A good match is found between observed and simulated heads and concentrations. (Author) (12 refs., 4 figs.)
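    The random-walk transport step the abstract refers to can be sketched as follows (a generic 1D illustration with a constant velocity standing in for the MHFEM-interpolated field): each particle is advected by v·dt and dispersed by a Gaussian step of variance 2D·dt, so the plume centre moves at v and its variance grows as 2Dt.

    ```python
    import math
    import random

    def random_walk(n_particles, v, D, dt, n_steps, rng):
        """1D random-walk particle tracking: deterministic advection plus
        a Gaussian dispersive step of standard deviation sqrt(2*D*dt).
        A generic sketch; a real code would interpolate v per particle."""
        xs = [0.0] * n_particles
        step_sd = math.sqrt(2.0 * D * dt)
        for _ in range(n_steps):
            for i in range(n_particles):
                xs[i] += v * dt + rng.gauss(0.0, step_sd)
        return xs

    rng = random.Random(3)
    v, D, dt, n_steps = 1.0, 0.1, 0.01, 100   # total time T = 1
    xs = random_walk(20_000, v, D, dt, n_steps, rng)
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    print(mean, var)   # plume centre ~ v*T, spread ~ 2*D*T
    ```

    The histogram of particle positions approximates the concentration field, which is how simulated concentrations in a tracer test like Twin Lake are obtained from the particle ensemble.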

  11. Numerical simulation of stochastic point kinetic equation in the dynamical system of nuclear reactor

    International Nuclear Information System (INIS)

    Saha Ray, S.

    2012-01-01

    Highlights: ► In this paper stochastic neutron point kinetic equations have been analyzed. ► The Euler–Maruyama method and the strong order 1.5 Taylor method have been discussed. ► These methods are applied to the solution of stochastic point kinetic equations. ► Comparisons between the results of these methods and others are presented in tables. ► Graphs for neutron and precursor sample paths are also presented. -- Abstract: In the present paper, numerical approximation methods for efficiently calculating the solution of stochastic point kinetic equations in nuclear reactor dynamics are investigated. A system of Itô stochastic differential equations is analyzed to model the neutron density and the delayed neutron precursors in a point nuclear reactor. The resulting system of Itô stochastic differential equations is solved over each time step. The methods are verified by considering different initial conditions, experimental data, and constant reactivities. The computational results indicate that the methods are simple and suitable for solving stochastic point kinetic equations. In this article, a numerical investigation is made in order to observe the random oscillations in neutron and precursor population dynamics in subcritical and critical reactors.
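    As a hedged illustration of the Euler–Maruyama scheme discussed above (applied here to a scalar toy SDE, not the full point kinetic system): for dX = aX dt + bX dW, each step adds the drift a·X·Δt and a diffusion increment b·X·ΔW with ΔW ~ N(0, Δt), and the sample mean of many paths can be checked against E[X_t] = x0·e^(at).

    ```python
    import math
    import random

    def euler_maruyama(x0, a, b, t_end, n_steps, rng):
        """Euler-Maruyama for the scalar Ito SDE dX = a*X dt + b*X dW,
        a toy stand-in for the stochastic point kinetic system."""
        dt = t_end / n_steps
        sqdt = math.sqrt(dt)
        x = x0
        for _ in range(n_steps):
            # Drift increment plus Brownian increment of variance dt.
            x += a * x * dt + b * x * rng.gauss(0.0, sqdt)
        return x

    # The sample mean over many paths approaches E[X_t] = x0 * exp(a*t).
    rng = random.Random(11)
    x0, a, b, t_end = 1.0, 0.5, 0.2, 1.0
    mean = sum(euler_maruyama(x0, a, b, t_end, 200, rng)
               for _ in range(20_000)) / 20_000
    print(mean, x0 * math.exp(a * t_end))
    ```

    The strong order 1.5 Taylor scheme mentioned in the highlights refines this update with additional terms involving derivatives of the drift and diffusion coefficients, trading extra work per step for better pathwise accuracy.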

  12. An efficient forward-reverse expectation-maximization algorithm for statistical inference in stochastic reaction networks

    KAUST Repository

    Vilanova, Pedro

    2016-01-01

    reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, i.e., SRNs conditional on their values in the extremes of given time-intervals. We then employ

  13. Simulation of Higher-Order Electrical Circuits with Stochastic Parameters via SDEs

    Directory of Open Access Journals (Sweden)

    BRANCIK, L.

    2013-02-01

    The paper deals with a technique for the simulation of higher-order electrical circuits with randomly varying parameters. The principle consists in utilizing the theory of stochastic differential equations (SDEs), namely the vector form of the ordinary SDEs. Random changes of both the excitation voltage and some parameters of passive circuit elements are considered, and circuit responses are analyzed. The voltage and/or current responses are computed and represented in the form of sample means accompanied by their confidence intervals to provide reliable estimates. The method is applied to analyze responses of circuit models of arbitrary order, especially those consisting of a cascade connection of RLGC networks. To develop the model equations the state-variable method is used; afterwards a corresponding vector SDE is formulated and a stochastic Euler numerical method is applied. To verify the results, the deterministic responses are also computed with the help of the PSpice simulator or the numerical inverse Laplace transform (NILT) procedure in MATLAB, while removing random terms from the circuit model.
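    A small sketch of the workflow the abstract describes, under simplifying assumptions of our own (a single first-order RC stage with a randomly drawn time constant, standing in for the paper's higher-order RLGC cascades): stochastic Euler trajectories are averaged and reported as a sample mean with a 95% confidence interval.

    ```python
    import math
    import random

    def rc_response(u, tau_mean, tau_sd, noise, t_end, n_steps, rng):
        """One stochastic Euler trajectory of dV = ((u - V)/tau) dt + noise dW,
        with the time constant tau drawn at random per trajectory (a toy
        stand-in for randomly varying passive-element values)."""
        dt = t_end / n_steps
        sqdt = math.sqrt(dt)
        tau = max(0.1, rng.gauss(tau_mean, tau_sd))  # clamp to keep Euler stable
        v = 0.0
        for _ in range(n_steps):
            v += (u - v) / tau * dt + noise * rng.gauss(0.0, sqdt)
        return v

    # Sample mean of the step response with a 95% confidence interval.
    rng = random.Random(5)
    runs = [rc_response(1.0, 1.0, 0.1, 0.05, 3.0, 300, rng) for _ in range(2000)]
    mean = sum(runs) / len(runs)
    sd = math.sqrt(sum((v - mean) ** 2 for v in runs) / (len(runs) - 1))
    half = 1.96 * sd / math.sqrt(len(runs))
    print(f"V(3) = {mean:.3f} +/- {half:.3f} (95% CI)")
    ```

    Setting `noise` and `tau_sd` to zero recovers the deterministic response, mirroring the paper's verification step of removing random terms and comparing against a circuit simulator.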

  14. Some simulation aspects, from molecular systems to stochastic geometries of pebble bed reactors

    International Nuclear Information System (INIS)

    Mazzolo, A.

    2009-06-01

    After a brief presentation of his teaching and supervising activities, the author gives an overview of his research activities: investigation of atoms under high intensity magnetic field (investigation of the electronic structure under these fields), studies of theoretical and numerical electrochemistry (simulation coupling molecular dynamics and quantum calculations, comprehensive simulations of molecular dynamics), and studies relating stochastic geometry and neutron science

  15. arXiv Stochastic locality and master-field simulations of very large lattices

    CERN Document Server

    Lüscher, Martin

    2018-01-01

    In lattice QCD and other field theories with a mass gap, the field variables in distant regions of a physically large lattice are only weakly correlated. Accurate stochastic estimates of the expectation values of local observables may therefore be obtained from a single representative field. Such master-field simulations potentially allow very large lattices to be simulated, but require various conceptual and technical issues to be addressed. In this talk, an introduction to the subject is provided and some encouraging results of master-field simulations of the SU(3) gauge theory are reported.

  16. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time

    Science.gov (United States)

    Dhar, Amrit

    2017-01-01

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780

  17. Accurate reaction-diffusion operator splitting on tetrahedral meshes for parallel stochastic molecular simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hepburn, I.; De Schutter, E., E-mail: erik@oist.jp [Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904 0495 (Japan); Theoretical Neurobiology & Neuroengineering, University of Antwerp, Antwerp 2610 (Belgium); Chen, W. [Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904 0495 (Japan)

    2016-08-07

    Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space either in a discrete time or discrete space framework, which has led to the development of parallel methods that can take advantage of the power of modern supercomputers in recent years. We systematically test suggested components of stochastic reaction-diffusion operator splitting in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy with minimal performance cost. We test a range of models in small-scale MPI simulations from simple diffusion models to realistic biological models and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification.

  18. Estimation of parameter sensitivities for stochastic reaction networks

    KAUST Repository

    Gupta, Ankit

    2016-01-07

    Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a continuous-time Markov chain whose states represent the molecular counts of various species. For such models, effects of parameter uncertainty are often quantified by estimating the infinitesimal sensitivities of some observables with respect to model parameters. The aim of this talk is to present a holistic approach towards this problem of estimating parameter sensitivities for stochastic reaction networks. Our approach is based on a generic formula which allows us to construct efficient estimators for parameter sensitivity using simulations of the underlying model. We will discuss how novel simulation techniques, such as tau-leaping approximations, multi-level methods etc. can be easily integrated with our approach and how one can deal with stiff reaction networks where reactions span multiple time-scales. We will demonstrate the efficiency and applicability of our approach using many examples from the biological literature.

  19. Stochastic assessment of investment efficiency in a power system

    International Nuclear Information System (INIS)

    Davidov, Sreten; Pantoš, Miloš

    2017-01-01

    The assessment of investment efficiency plays a critical role in investment prioritization in the context of electrical network expansion planning. Hence, this paper proposes new criteria for cost-efficient investment, applied in the investment ranking process in electrical network planning, based on an assessment of the impact of new investment candidates on active-power losses, bus voltages and line loadings in the network. These three general criteria are chosen due to the strong economic influence of active-power losses and line loadings, and due to the significant impact of the voltage profile on quality of supply. Electrical network reliability of supply is not addressed, since this criterion has already been extensively applied in other solutions for investment efficiency assessment. The proposed ranking procedure involves a stochastic approach applying the Monte Carlo method in the scenario preparation. The number of scenarios is further reduced by the K-MEANS procedure in order to speed up the investment efficiency assessment. The proposed ranking procedure is tested using the standard New England test system. The results show that, based on the new investment assessment indices, system operators will obtain a prioritized list of investments that will prevent excessive and economically wasteful spending. - Highlights: • Active-Power Loss Investment Efficiency Index LEI. • Voltage Profile Investment Efficiency Index VEI. • Active-Power Flow Loading Mitigation Investment Efficiency Index PEI. • Optimization model for network expansion planning with new indices.

  20. Stochastic stresses in granular matter simulated by dripping identical ellipses into plane silo

    DEFF Research Database (Denmark)

    Berntsen, Kasper Nikolaj; Ditlevsen, Ove Dalager

    2000-01-01

    A two-dimensional silo pressure model problem is investigated by molecular dynamics simulations. A plane silo container is filled by a granular matter consisting of congruent elliptic particles dropped one by one into the silo. A suitable energy absorbing contact force mechanism is activated during... the granular matter in the silo are compared to the solution of a stochastic equilibrium differential equation. In this equation the stochasticity source is a homogeneous white noise gamma-distributed side pressure factor field along the walls. This is a generalization of the deterministic side pressure factor... proposed by Janssen in 1895. The stochastic Janssen factor model is shown to be fairly consistent with the observations, from which the mean and the intensity of the white noise are estimated by the method of maximum likelihood using the properties of the gamma-distribution. Two wall friction coefficients...

  1. US residential energy demand and energy efficiency: A stochastic demand frontier approach

    International Nuclear Information System (INIS)

    Filippini, Massimo; Hunt, Lester C.

    2012-01-01

    This paper estimates a US frontier residential aggregate energy demand function using panel data for 48 ‘states’ over the period 1995 to 2007 using stochastic frontier analysis (SFA). Utilizing an econometric energy demand model, the (in)efficiency of each state is modeled and it is argued that this represents a measure of the inefficient use of residential energy in each state (i.e. ‘waste energy’). This underlying efficiency for the US is therefore observed for each state as well as the relative efficiency across the states. Moreover, the analysis suggests that energy intensity is not necessarily a good indicator of energy efficiency, whereas by controlling for a range of economic and other factors, the measure of energy efficiency obtained via this approach is. This is a novel approach to model residential energy demand and efficiency and it is arguably particularly relevant given current US energy policy discussions related to energy efficiency.

  2. Stochastic Simulation of Cardiac Ventricular Myocyte Calcium Dynamics and Waves

    OpenAIRE

    Tuan, Hoang-Trong Minh; Williams, George S. B.; Chikando, Aristide C.; Sobie, Eric A.; Lederer, W. Jonathan; Jafri, M. Saleet

    2011-01-01

    A three dimensional model of calcium dynamics in the rat ventricular myocyte was developed to study the mechanism of calcium homeostasis and pathological calcium dynamics during calcium overload. The model contains 20,000 calcium release units (CRUs) each containing 49 ryanodine receptors. The model simulates calcium sparks with a realistic spontaneous calcium spark rate. It suggests that in addition to the calcium spark-based leak, there is an invisible calcium leak caused by the stochastic ...

  3. Development of Fast-Time Stochastic Airport Ground and Runway Simulation Model and Its Traffic Analysis

    Directory of Open Access Journals (Sweden)

    Ryota Mori

    2015-01-01

    Airport congestion, in particular congestion of departure aircraft, has already been discussed by other researchers. Most solutions, though, fail to account for uncertainties. Since it is difficult to remove uncertainties from real-world operations, a strategy should be developed assuming such uncertainties exist. Therefore, this research develops a fast-time stochastic simulation model used to validate various methods for decreasing the airport congestion level under existing uncertainties. The surface movement data is analyzed first, and the uncertainty level is obtained. Next, based on the result of the data analysis, the stochastic simulation model is developed. The model is validated statistically, and the characteristics of airport operation under existing uncertainties are investigated.

  4. Exploring energy efficiency in China's iron and steel industry: A stochastic frontier approach

    International Nuclear Information System (INIS)

    Lin, Boqiang; Wang, Xiaolei

    2014-01-01

    The iron and steel industry is one of the major energy-consuming industries in China. Given the limited research on effective energy conservation in China's industrial sectors, this paper analyzes the total factor energy efficiency and the corresponding energy conservation potential of China's iron and steel industry using the excessive energy-input stochastic frontier model. The results show that there was an increasing trend in energy efficiency between 2005 and 2011 with an average energy efficiency of 0.699 and a cumulative energy conservation potential of 723.44 million tons of coal equivalent (Mtce). We further analyze the regional differences in energy efficiency and find that energy efficiency of Northeastern China is high while that of Central and Western China is low. Therefore, there is a concentration of energy conservation potential for the iron and steel industry in the Central and Western areas. In addition, we discover that inefficient factors are important for improving energy conservation. We find that the structural defect in the economic system is an important impediment to energy efficiency and economic restructuring is the key to improving energy efficiency. - Highlights: • A stochastic frontier model is adopted to analyze energy efficiency. • Industry concentration and ownership structure are main factors affecting the non-efficiency. • Energy efficiency of China's iron and steel industry shows a fluctuating increase. • Regional differences of energy efficiency are further analyzed. • Future policy for energy conservation in China's iron and steel sector is suggested

  5. Modelling the cancer growth process by Stochastic Differential Equations with the effect of Chondroitin Sulfate (CS) as anticancer therapeutics

    Science.gov (United States)

    Syahidatul Ayuni Mazlan, Mazma; Rosli, Norhayati; Jauhari Arief Ichwan, Solachuddin; Suhaity Azmi, Nina

    2017-09-01

    A stochastic model is introduced to describe the growth of cancer affected by the anti-cancer therapeutic Chondroitin Sulfate (CS). The parameter values of the stochastic model are estimated via the maximum likelihood function. The Euler-Maruyama numerical method is employed to solve the model numerically. The efficiency of the stochastic model is measured by comparing the simulated result with the experimental data.

  6. A stochastic simulation model for reliable PV system sizing providing for solar radiation fluctuations

    International Nuclear Information System (INIS)

    Kaplani, E.; Kaplanis, S.

    2012-01-01

    Highlights: ► Solar radiation data for European cities follow the extreme value or Weibull distribution. ► Simulation model for the sizing of SAPV systems based on energy balance and stochastic analysis. ► Simulation of PV generator-loads-battery storage system performance for all months. ► Minimum peak power and battery capacity required for reliable SAPV sizing for various European cities. ► Peak power and battery capacity reduced by more than 30% for operation at a 95% success rate. -- Abstract: The large fluctuations observed in daily solar radiation profiles strongly affect the reliability of PV system sizing. Increasing the reliability of the PV system requires higher installed peak power (P_m) and larger battery storage capacity (C_L). This leads to increased costs and makes PV technology less competitive. This research paper presents a new stochastic simulation model for stand-alone PV systems, developed to determine the minimum installed P_m and C_L for the PV system to be energy independent. The stochastic simulation model makes use of knowledge acquired from an in-depth statistical analysis of the solar radiation data for the site, and simulates the energy delivered, the excess energy burnt, the load profiles and the state of charge of the battery system for the month the sizing is applied to, and the PV system performance for the entire year. The simulation model provides the user with values for the autonomy factor d, simulating PV performance in order to determine the minimum P_m and C_L depending on the requirements of the application, i.e. operation with critical or non-critical loads. The model makes use of NASA's Surface meteorology and Solar Energy database for the years 1990–2004 for various cities in Europe with different climates. The results obtained with this new methodology indicate a substantial reduction in installed peak power and battery capacity, both for critical and non-critical operation, when compared to

  7. A simple stochastic model for dipole moment fluctuations in numerical dynamo simulations

    Directory of Open Access Journals (Sweden)

    Domenico G. Meduri

    2016-04-01

    Earth's axial dipole field changes in a complex fashion on many different time scales ranging from less than a year to tens of million years. Documenting, analysing, and replicating this intricate signal is a challenge for data acquisition, theoretical interpretation, and dynamo modelling alike. Here we explore whether axial dipole variations can be described by the superposition of a slow deterministic drift and fast stochastic fluctuations, i.e. by a Langevin-type system. The drift term describes the time averaged behaviour of the axial dipole variations, whereas the stochastic part mimics complex flow interactions over convective time scales. The statistical behaviour of the system is described by a Fokker-Planck equation which allows useful predictions, including the average rates of dipole reversals and excursions. We analyse several numerical dynamo simulations, most of which have been integrated particularly long in time, and also the palaeomagnetic model PADM2M which covers the past 2 Myr. The results show that the Langevin description provides a viable statistical model of the axial dipole variations on time scales longer than about 1 kyr. For example, the axial dipole probability distribution and the average reversal rate are successfully predicted. The exception is PADM2M, where the stochastic model reversal rate seems too low. The dependence of the drift on the axial dipole moment reveals the nonlinear interactions that establish the dynamo balance. A separate analysis of inductive and diffusive magnetic effects in three dynamo simulations suggests that the classical quadratic quenching of induction predicted by mean-field theory seems at work.

  8. Coarse-grained stochastic processes and kinetic Monte Carlo simulators for the diffusion of interacting particles

    Science.gov (United States)

    Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2003-11-01

    We derive a hierarchy of successively coarse-grained stochastic processes and associated coarse-grained Monte Carlo (CGMC) algorithms directly from the microscopic processes as approximations in larger length scales for the case of diffusion of interacting particles on a lattice. This hierarchy of models spans length scales between microscopic and mesoscopic, satisfies a detailed balance, and gives self-consistent fluctuation mechanisms whose noise is asymptotically identical to the microscopic MC. Rigorous, detailed asymptotics justify and clarify these connections. Gradient continuous time microscopic MC and CGMC simulations are compared under far from equilibrium conditions to illustrate the validity of our theory and delineate the errors obtained by rigorous asymptotics. Information theory estimates are employed for the first time to provide rigorous error estimates between the solutions of microscopic MC and CGMC, describing the loss of information during the coarse-graining process. Simulations under periodic boundary conditions are used to verify the information theory error estimates. It is shown that coarse-graining in space leads also to coarse-graining in time by q2, where q is the level of coarse-graining, and overcomes in part the hydrodynamic slowdown. Operation counting and CGMC simulations demonstrate significant CPU savings in continuous time MC simulations that vary from q3 for short potentials to q4 for long potentials. Finally, connections of the new coarse-grained stochastic processes to stochastic mesoscopic and Cahn-Hilliard-Cook models are made.

  9. Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads

    Directory of Open Access Journals (Sweden)

    Jae Sang Moon

    2017-12-01

    Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines only considers a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study's overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.

  10. Stochastic Order Redshift Technique (SORT): a simple, efficient and robust method to improve cosmological redshift measurements

    Science.gov (United States)

    Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.

    2018-01-01

    We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. Particularly, SORT redshifts (zsort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurement of the two-point correlation function on scales ≳4 h-1Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed to exploit the full potential of large extragalactic photometric surveys.
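    The rank-preserving step at the heart of SORT can be illustrated in a few lines of Python (a simplified sketch under toy assumptions, not the authors' implementation): draws from the reference dN/dz are handed out in the rank order of the uncertain measurements, so the stochastic ordering of the ensemble is preserved.

    ```python
    import random

    def sort_recover(noisy_z, reference_z, rng):
        """One SORT-style pass: draw len(noisy_z) redshifts from the
        reference dN/dz (here approximated by resampling a reference
        sample) and assign them in the rank order of the noisy
        measurements, preserving stochastic ordering."""
        draws = sorted(rng.choice(reference_z) for _ in noisy_z)
        ranks = sorted(range(len(noisy_z)), key=lambda i: noisy_z[i])
        recovered = [0.0] * len(noisy_z)
        for rank, i in enumerate(ranks):
            recovered[i] = draws[rank]
        return recovered

    # Toy example: true redshifts blurred by Gaussian measurement errors.
    rng = random.Random(2)
    true_z = [rng.uniform(0.0, 1.0) for _ in range(1000)]
    noisy_z = [z + rng.gauss(0.0, 0.05) for z in true_z]
    reference_z = [rng.uniform(0.0, 1.0) for _ in range(5000)]  # precise reference
    recovered = sort_recover(noisy_z, reference_z, rng)

    # The rank order of the recovered redshifts matches the noisy ensemble.
    pairs = sorted(zip(noisy_z, recovered))
    print(all(pairs[i][1] <= pairs[i + 1][1] for i in range(len(pairs) - 1)))  # → True
    ```

    In the full method this pass is repeated over many overlapping pencil-beam sub-volumes, and the multiple recovered values per object are combined into a new redshift PDF.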

  11. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    Science.gov (United States)

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals within a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, and successfully translates these natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to equally allocate computational effort among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
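
    The flavor of such an allocation rule can be seen in the standard OCBA formula for selecting the design with the largest mean under Gaussian noise (in OCBA-PSO the "designs" would be particle fitness values). A minimal sketch; the specific means, noise levels and budget are illustrative, not from the paper:

```python
import numpy as np

def ocba_allocation(means, stds, budget):
    """Standard OCBA sample allocation for selecting the design with the
    largest mean under Gaussian noise: non-best designs get budget in
    proportion to (sigma_i / gap_i)^2, the best gets a matching share."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmax(means))            # current best design
    delta = means[b] - means             # optimality gaps
    nz = np.arange(means.size) != b
    ref = next(i for i in range(means.size) if i != b)  # reference non-best
    ratio = np.zeros_like(means)
    ratio[nz] = (stds[nz] / delta[nz]) ** 2 / (stds[ref] / delta[ref]) ** 2
    # Best design: N_b = sigma_b * sqrt(sum_i (N_i / sigma_i)^2)
    ratio[b] = stds[b] * np.sqrt(np.sum((ratio[nz] / stds[nz]) ** 2))
    alloc = budget * ratio / ratio.sum()
    return np.round(alloc).astype(int)

# The near-best design (gap 0.1) receives far more budget than the clearly
# inferior one (gap 0.5).
n = ocba_allocation(means=[1.0, 0.9, 0.5], stds=[0.2, 0.2, 0.2], budget=100)
print(n)
```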

  12. Simulation and Statistical Inference of Stochastic Reaction Networks with Applications to Epidemic Models

    KAUST Repository

    Moraes, Alvaro

    2015-01-01

    Epidemics have shaped, sometimes more than wars and natural disasters, demographic aspects of human populations around the world, their health habits and their economies. Ebola and the Middle East Respiratory Syndrome (MERS) are clear and current examples of potential hazards at planetary scale. During the spread of an epidemic disease, there are phenomena, like the sudden extinction of the epidemic, that cannot be captured by deterministic models. As a consequence, stochastic models have been proposed during the last decades. A typical forward problem in the stochastic setting could be the approximation of the expected number of infected individuals found in one month from now. On the other hand, a typical inverse problem could be, given a discretely observed set of epidemiological data, to infer the transmission rate of the epidemic or its basic reproduction number. Markovian epidemic models are stochastic models belonging to a wide class of pure jump processes known as Stochastic Reaction Networks (SRNs), which are intended to describe the time evolution of interacting particle systems where one particle interacts with the others through a finite set of reaction channels. SRNs have been mainly developed to model biochemical reactions, but they also have applications in neural networks, virus kinetics, and dynamics of social networks, among others. This PhD thesis is focused on novel fast simulation algorithms and statistical inference methods for SRNs. Our novel Multi-level Monte Carlo (MLMC) hybrid simulation algorithms provide accurate estimates of expected values of a given observable of SRNs at a prescribed final time. They are designed to control the global approximation error up to a user-selected accuracy and up to a certain confidence level, and with near optimal computational work. We also present novel dual-weighted residual expansions for fast estimation of weak and strong errors arising from the MLMC methodology. Regarding the statistical inference

  13. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.

    2014-01-01

    A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the maximum testing case with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified through simulating various testing cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles. © 2014 Kun Zhou et al.
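
    The stochastic (Marcus-Lushnikov) part of such a scheme can be illustrated with a minimal direct simulation of coagulation under a constant kernel. A sketch only; the kernel, particle count and end time below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def coagulate(volumes, t_end, kernel_const=1.0):
    """Direct stochastic (Marcus-Lushnikov) simulation of coagulation with a
    constant kernel: some pair coalesces at total rate K/2 * n*(n-1)."""
    v = list(volumes)
    t = 0.0
    while len(v) > 1:
        n = len(v)
        total_rate = 0.5 * kernel_const * n * (n - 1)
        t += rng.exponential(1.0 / total_rate)  # time to next coagulation event
        if t > t_end:
            break
        i, j = rng.choice(n, size=2, replace=False)
        v[i] += v[j]  # coalesce the pair; total volume is conserved
        v.pop(j)
    return v

v0 = [1.0] * 200
v = coagulate(v0, t_end=0.01)
print(len(v), sum(v))  # fewer particles, total volume still 200
```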

  14. Distributed parallel computing in stochastic modeling of groundwater systems.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.

  15. An efficient computational method for a stochastic dynamic lot-sizing problem under service-level constraints

    NARCIS (Netherlands)

    Tarim, S.A.; Ozen, U.; Dogru, M.K.; Rossi, R.

    2011-01-01

    We provide an efficient computational approach to solve the mixed integer programming (MIP) model developed by Tarim and Kingsman [8] for solving a stochastic lot-sizing problem with service level constraints under the static–dynamic uncertainty strategy. The effectiveness of the proposed method

  16. A two-stage stochastic programming approach for operating multi-energy systems

    DEFF Research Database (Denmark)

    Zeng, Qing; Fang, Jiakun; Chen, Zhe

    2017-01-01

    This paper provides a two-stage stochastic programming approach for joint operating multi-energy systems under uncertainty. Simulation is carried out in a test system to demonstrate the feasibility and efficiency of the proposed approach. The test energy system includes a gas subsystem with a gas...

  17. Time-ordered product expansions for computational stochastic system biology

    International Nuclear Information System (INIS)

    Mjolsness, Eric

    2013-01-01

    The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie’s stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems. (paper)
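
    Gillespie's SSA, which the framework above re-derives, admits a compact direct-method sketch. The birth-death model and rate constants below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def ssa(x0, stoich, rates, propensity, t_end):
    """Gillespie's direct-method SSA: draw the next reaction time from an
    exponential with the total propensity, then pick which channel fires."""
    x, t = np.array(x0, float), 0.0
    path = [(0.0, list(x0))]
    while t < t_end:
        a = propensity(x, rates)
        a0 = a.sum()
        if a0 == 0.0:
            break                          # no reaction can fire: absorbed
        t += rng.exponential(1.0 / a0)
        if t > t_end:
            break
        r = rng.choice(len(a), p=a / a0)   # which channel fires next
        x = x + stoich[r]
        path.append((t, list(x)))
    return path

# Toy birth-death process: 0 -> X (rate k1), X -> 0 (rate k2 * X).
stoich = np.array([[1], [-1]])
prop = lambda x, k: np.array([k[0], k[1] * x[0]])
path = ssa([0], stoich, (10.0, 1.0), prop, t_end=50.0)
print(path[-1])  # population fluctuates around k1/k2 = 10
```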

  18. Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations

    International Nuclear Information System (INIS)

    Ehlert, Kurt; Loewe, Laurence

    2014-01-01

    To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected “hubs” such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present “Lazy Updating,” an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases, with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
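
    The postpone-until-threshold idea can be sketched in a few lines. This is a simplified illustration of the bookkeeping, not the authors' implementation; the hub count and threshold are made up:

```python
class LazyHub:
    """Sketch of Lazy Updating for a hub species (e.g. ATP): propensity
    recomputation for reactions depending on the hub is postponed until the
    hub count has drifted by more than a relative threshold."""
    def __init__(self, count, threshold=0.05):
        self.count = count       # true hub count
        self.cached = count      # count last used to compute propensities
        self.threshold = threshold
        self.updates = 0         # number of full propensity recomputations

    def change(self, delta):
        self.count += delta
        # Trigger the expensive recomputation only when the accumulated
        # drift crosses the relative threshold.
        if abs(self.count - self.cached) > self.threshold * self.cached:
            self.cached = self.count
            self.updates += 1

hub = LazyHub(count=10_000)
for _ in range(1000):
    hub.change(+1)        # 1000 small increments to the hub count
print(hub.updates)        # far fewer than 1000 recomputations
```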

  19. Parallel replica dynamics method for bistable stochastic reaction networks: Simulation and sensitivity analysis

    Science.gov (United States)

    Wang, Ting; Plecháč, Petr

    2017-12-01

    Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.

  20. Parallel replica dynamics method for bistable stochastic reaction networks: Simulation and sensitivity analysis.

    Science.gov (United States)

    Wang, Ting; Plecháč, Petr

    2017-12-21

    Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.

  1. Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2015-01-07

    Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from the kinetic point of view, the time evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given an SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, that with high probability produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, find an estimator, M, based on approximate sampled paths of X, such that P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η; even more, we want to achieve this objective with near-optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a Multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL⁻²); this is the same computational complexity as in an exact method but with a smaller constant. We provide numerical examples to show our results.
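
    The tau-leap building block of the hybrid scheme can be sketched as follows: over each leap the number of firings of every channel is Poisson with mean proportional to its propensity. A minimal illustration; the model and parameters are not from the talk:

```python
import numpy as np

rng = np.random.default_rng(3)

def tau_leap(x0, stoich, propensity, tau, n_steps):
    """Explicit tau-leap: over each leap of length tau, channel j fires
    Poisson(a_j(x) * tau) times, and all firings are applied at once."""
    x = np.array(x0, float)
    for _ in range(n_steps):
        a = propensity(x)
        k = rng.poisson(a * tau)             # firings per channel this leap
        x = np.maximum(x + stoich.T @ k, 0)  # crude guard against negativity
    return x

# Same birth-death toy model: 0 -> X at rate 10, X -> 0 at rate 1 * X.
stoich = np.array([[1], [-1]])
prop = lambda x: np.array([10.0, 1.0 * x[0]])
x = tau_leap([0], stoich, prop, tau=0.05, n_steps=2000)
print(x)  # fluctuates around the stationary mean 10
```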

  2. The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.

    Science.gov (United States)

    Liu, Chunping; Laporte, Audrey; Ferguson, Brian S

    2008-09-01

    In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.

  3. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Directory of Open Access Journals (Sweden)

    Danilo ePezo

    2014-11-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chain (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of high channel numbers. Many recent works aim to speed up simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties, such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Dangerfield et al., 2012; Linaro et al., 2011; Huang et al., 2013a; Orio and Soudry, 2012; Schmandt and Galán, 2012; Goldwyn et al., 2011; Güler, 2013), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: the original Hodgkin and Huxley model, a model with faster sodium channels, and a multi-compartmental model inspired by granular cells. We conclude that for low channel numbers (usually below 1000 per simulated compartment) one should use MC, which is both the most accurate and fastest method. For higher channel numbers, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modelling may be the best method for detailed multicompartment neuron models, in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels.

  4. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Science.gov (United States)

    Pezo, Danilo; Soudry, Daniel; Orio, Patricio

    2014-01-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired in granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
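
    The kind of DA scheme being compared can be sketched with Euler-Maruyama integration of a single gating variable, here with simple clipping to keep the state in [0, 1] (one of the bounding strategies the implementations differ on). A generic sketch with illustrative rate constants, not any specific published implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

def langevin_gate(n0, alpha, beta, n_channels, dt, steps):
    """Euler-Maruyama diffusion approximation for one gating variable:
    drift from the kinetic equation, noise scaled by 1/sqrt(N) channels,
    state clipped back into the valid range [0, 1] after each step."""
    n = n0
    for _ in range(steps):
        drift = alpha * (1.0 - n) - beta * n
        diff = np.sqrt(max(alpha * (1.0 - n) + beta * n, 0.0) / n_channels)
        n += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
        n = min(max(n, 0.0), 1.0)  # clip to the valid range
    return n

# With many channels, fluctuations are small around alpha/(alpha+beta) = 0.8.
n = langevin_gate(0.5, alpha=4.0, beta=1.0, n_channels=10_000, dt=1e-3, steps=5000)
print(n)
```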

  5. Stochastic Dominance and Omega Ratio: Measures to Examine Market Efficiency, Arbitrage Opportunity, and Anomaly

    Directory of Open Access Journals (Sweden)

    Xu Guo

    2017-10-01

    Both stochastic dominance and the Omega ratio can be used to examine whether the market is efficient, whether there is any arbitrage opportunity in the market, and whether there is any anomaly in the market. In this paper, we first study the relationship between stochastic dominance and the Omega ratio. We find that second-order stochastic dominance (SD) and/or second-order risk-seeking SD (RSD) alone for any two prospects is not sufficient to imply Omega ratio dominance, in the sense that the Omega ratio of one asset is always greater than that of the other. We extend the theory of risk measures by proving that the preference of second-order SD implies the preference of the corresponding Omega ratios only when the return threshold is less than the mean of the higher-return asset. On the other hand, the preference of the second-order RSD implies the preference of the corresponding Omega ratios only when the return threshold is larger than the mean of the smaller-return asset. Nonetheless, first-order SD does imply Omega ratio dominance. Thereafter, we apply the theory developed in this paper to examine the relationship between property size and property investment in the Hong Kong real estate market. We conclude that the Hong Kong real estate market is not efficient and that there are expected arbitrage opportunities and anomalies in the Hong Kong real estate market. Our findings are useful for investors and policy makers in real estate.
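
    The Omega ratio itself is straightforward to compute: expected gains above a return threshold divided by expected losses below it. A minimal sketch with illustrative Gaussian return samples:

```python
import numpy as np

def omega_ratio(returns, threshold):
    """Omega ratio at a given return threshold: mean gain above the
    threshold divided by mean shortfall below it."""
    r = np.asarray(returns, float)
    gains = np.mean(np.maximum(r - threshold, 0.0))
    losses = np.mean(np.maximum(threshold - r, 0.0))
    return gains / losses

rng = np.random.default_rng(5)
a = rng.normal(0.08, 0.10, 100_000)  # higher-mean asset
b = rng.normal(0.05, 0.10, 100_000)
# With the threshold below both means, the higher-mean asset has the
# larger Omega ratio, and both exceed 1.
print(omega_ratio(a, 0.0), omega_ratio(b, 0.0))
```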

  6. Statistical Inference on Stochastic Dominance Efficiency. Do Omitted Risk Factors Explain the Size and Book-to-Market Effects?

    NARCIS (Netherlands)

    G.T. Post (Thierry)

    2003-01-01

    textabstractThis paper discusses statistical inference on the second-order stochastic dominance (SSD) efficiency of a given portfolio relative to all portfolios formed from a set of assets. We derive the asymptotic sampling distribution of the Post test statistic for SSD efficiency. Unfortunately, a

  7. Molecular dynamics with deterministic and stochastic numerical methods

    CERN Document Server

    Leimkuhler, Ben

    2015-01-01

    This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications.  Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...

  8. Efficient Parallel Statistical Model Checking of Biochemical Networks

    Directory of Open Access Journals (Sweden)

    Paolo Ballarini

    2009-12-01

    We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches such as, for example, CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions out of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the methodology we propose uses a stochastic simulation algorithm for generating execution samples; however, there are three key aspects that improve the efficiency: first, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability of P to hold is based on an efficient variant of the Wilson method, which ensures a faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture.
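
    The Wilson interval family referred to above can be illustrated with the textbook Wilson score formula for a binomial proportion (a generic sketch; the paper's exact variant may differ):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion; it
    behaves better near 0 and 1 than the normal (Wald) interval."""
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Say 47 of 100 sampled trajectories satisfied the property:
lo, hi = wilson_interval(47, 100)
print(round(lo, 3), round(hi, 3))  # roughly (0.375, 0.567)
```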

  9. Calibration of semi-stochastic procedure for simulating high-frequency ground motions

    Science.gov (United States)

    Seyhan, Emel; Stewart, Jonathan P.; Graves, Robert

    2013-01-01

    Broadband ground motion simulation procedures typically utilize physics-based modeling at low frequencies, coupled with semi-stochastic procedures at high frequencies. The high-frequency procedure considered here combines deterministic Fourier amplitude spectra (dependent on source, path, and site models) with random phase. Previous work showed that high-frequency intensity measures from this simulation methodology attenuate faster with distance and have lower intra-event dispersion than in empirical equations. We address these issues by increasing crustal damping (Q) to reduce distance attenuation bias and by introducing random site-to-site variations to Fourier amplitudes using a lognormal standard deviation ranging from 0.45 for Mw  100 km).

  10. A micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations

    DEFF Research Database (Denmark)

    Debrabant, Kristian; Samaey, Giovanni; Zieliński, Przemysław

    2017-01-01

    We present and analyse a micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations with separation between the (fast) time-scale of individual trajectories and the (slow) time-scale of the macroscopic function of interest. The algorithm combines short...

  11. Multi-fidelity stochastic collocation method for computation of statistical moments

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu [Department of Mathematics, University of Iowa, Iowa City, IA 52242 (United States); Linebarger, Erin M., E-mail: aerinline@sci.utah.edu [Department of Mathematics, University of Utah, Salt Lake City, UT 84112 (United States); Xiu, Dongbin, E-mail: xiu.16@osu.edu [Department of Mathematics, The Ohio State University, Columbus, OH 43210 (United States)

    2017-07-15

    We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems, in the presence of models with different fidelities. The method extends the multi-fidelity approximation method developed in . By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound of the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.
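
    A much-simplified stand-in for combining model fidelities is a control-variate style mean estimate: many cheap low-fidelity samples plus a small high-fidelity correction evaluated on shared inputs. This is not the paper's collocation method, only a sketch of the idea; the model pair and sample sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)

def multifidelity_mean(f_high, f_low, n_low, n_high):
    """Control-variate-style multi-fidelity mean estimate: a large
    low-fidelity Monte Carlo average, corrected by the high-low difference
    measured on a small shared subset of inputs."""
    xs = rng.uniform(0.0, 1.0, n_low)
    low_all = f_low(xs)
    sub = xs[:n_high]                            # reuse the first few inputs
    correction = np.mean(f_high(sub) - f_low(sub))
    return np.mean(low_all) + correction

# Hypothetical model pair: the low-fidelity model carries a systematic bias.
f_high = lambda x: np.sin(np.pi * x)             # "expensive" model
f_low = lambda x: np.sin(np.pi * x) - 0.05       # cheap, biased surrogate
est = multifidelity_mean(f_high, f_low, n_low=100_000, n_high=50)
print(est)  # close to E[sin(pi * U)] = 2/pi
```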

  12. A Cobb Douglas stochastic frontier model on measuring domestic bank efficiency in Malaysia.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    Banking system plays an important role in the economic development of any country. Domestic banks, which are the main components of the banking system, have to be efficient; otherwise, they may create obstacle in the process of development in any economy. This study examines the technical efficiency of the Malaysian domestic banks listed in the Kuala Lumpur Stock Exchange (KLSE) market over the period 2005-2010. A parametric approach, Stochastic Frontier Approach (SFA), is used in this analysis. The findings show that Malaysian domestic banks have exhibited an average overall efficiency of 94 percent, implying that sample banks have wasted an average of 6 percent of their inputs. Among the banks, RHBCAP is found to be highly efficient with a score of 0.986 and PBBANK is noted to have the lowest efficiency with a score of 0.918. The results also show that the level of efficiency has increased during the period of study, and that the technical efficiency effect has fluctuated considerably over time.

  13. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing parallel CUDA-based implementation for parameter synthesis in this model.
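
    The simulated-annealing component of such a discovery loop admits a generic sketch. Here a made-up quadratic misfit stands in for the statistically estimated distance between model behaviour and observations; all names and parameters are illustrative:

```python
import math
import random

random.seed(7)

def simulated_annealing(loss, x0, step, t0=1.0, cooling=0.995, iters=2000):
    """Generic simulated annealing: accept downhill moves always, uphill
    moves with probability exp(-d/t), while the temperature t cools
    geometrically; track the best parameter seen."""
    x, best = x0, x0
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        d = loss(cand) - loss(x)
        if d < 0 or random.random() < math.exp(-d / t):
            x = cand                  # accept the candidate move
        if loss(x) < loss(best):
            best = x
        t *= cooling                  # geometric cooling schedule
    return best

# Toy parameter discovery: recover the rate minimising a quadratic misfit.
true_rate = 2.5
best = simulated_annealing(lambda k: (k - true_rate) ** 2, x0=0.0, step=0.5)
print(best)  # close to 2.5
```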

  14. STOCHASTIC SIMULATION FOR BUFFELGRASS (Cenchrus ciliaris L.) PASTURES IN MARIN, N. L., MEXICO

    OpenAIRE

    José Romualdo Martínez-López; Erasmo Gutierrez-Ornelas; Miguel Angel Barrera-Silva; Rafael Retes-López

    2014-01-01

    A stochastic simulation model was constructed to determine the response of net primary production of buffelgrass (Cenchrus ciliaris L.) and its dry matter intake by cattle in Marín, NL, México. Buffelgrass is very important for the extensive livestock industry in arid and semiarid areas of northeastern Mexico. The objective of this experiment was to evaluate the behavior of the model by comparing its results with those reported in the literature. The model simulates the monthly production of...

  15. MarkoLAB: A simulator to study ionic channel's stochastic behavior.

    Science.gov (United States)

    da Silva, Robson Rodrigues; Goroso, Daniel Gustavo; Bers, Donald M; Puglisi, José Luis

    2017-08-01

    Mathematical models of the cardiac cell have started to include Markovian representations of the ionic channels instead of the traditional Hodgkin & Huxley formulations. There are many reasons for this: Markov models are not restricted to the idea of independent gates defining the channel, they allow more complex descriptions with specific transitions between open, closed or inactivated states, and, more importantly, those states can be closely related to the underlying channel structure and conformational changes. We used the LabVIEW® and MATLAB® programs to implement the simulator MarkoLAB, which allows a dynamical 3D representation of the Markovian model of the channel. Monte Carlo simulation was used to implement the stochastic transitions among states. The user can specify the voltage protocol by setting the holding potential, the step voltage and the duration of the stimuli. The most studied feature of a channel is the current flowing through it. This happens when the channel stays in the open state, but most of the time, as revealed by the low open probability values, the channel remains in the inactive or closed states. By focusing only on when the channel enters or leaves the open state we are missing most of its activity. MarkoLAB proved to be quite useful for visualizing the whole behavior of the channel and not only when the channel produces a current. Such dynamic representation provides more complete information about channel kinetics and will be a powerful tool to demonstrate the effect of gene mutations or drugs on the channel function. MarkoLAB provides an original way of visualizing the stochastic behavior of a channel. It clarifies concepts such as recovery from inactivation, calcium- versus voltage-dependent inactivation, and tail currents. It is not restricted to ionic channels only but can be extended to other transporters, such as exchangers and pumps. This program is intended as a didactical tool to illustrate the dynamical behavior of a

  16. Stochastic modelling to evaluate the economic efficiency of treatment of chronic subclinical mastitis

    OpenAIRE

    Steeneveld, W.; Hogeveen, H.; Borne, van den, B.H.P.; Swinkels, J.M.

    2006-01-01

    Treatment of subclinical mastitis is traditionally not common practice. However, some veterinarians regard treatment of some types of subclinical mastitis as effective. The goal of this research was to develop a stochastic Monte Carlo simulation model to support decisions on the treatment of chronic subclinical mastitis caused by Streptococcus uberis. Factors in the model include, amongst others, the probability of spontaneous cure, the probability of the cow becoming clinically diseased, trans...

  17. Production and efficiency analysis with R

    CERN Document Server

    Behr, Andreas

    2015-01-01

    This textbook introduces essential topics and techniques in production and efficiency analysis and shows how to apply these methods using the statistical software R. Numerous small simulations lead to a deeper understanding of random processes assumed in the models and of the behavior of estimation techniques. Step-by-step programming provides an understanding of advanced approaches such as stochastic frontier analysis and stochastic data envelopment analysis. The text is intended for master students interested in empirical production and efficiency analysis. Readers are assumed to have a general background in production economics and econometrics, typically taught in introductory microeconomics and econometrics courses.

  18. Application of Stochastic Sensitivity Analysis to Integrated Force Method

    Directory of Open Access Journals (Sweden)

    X. F. Wei

    2012-01-01

    As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many structures in civil, mechanical, and aerospace engineering because of its accurate estimates of forces. It is now being extended to the probabilistic domain. To assess the effect of uncertainty in system optimization and identification, the probabilistic sensitivity analysis of IFM was investigated in this study. A stochastic sensitivity analysis formulation of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to existing programs, since the models of stochastic finite elements and stochastic design sensitivity are almost identical.

  19. Monte Carlo simulation of induction time and metastable zone width; stochastic or deterministic?

    Science.gov (United States)

    Kubota, Noriaki

    2018-03-01

    The induction time and metastable zone width (MSZW) measured for small samples (say 1 mL or less) both scatter widely. Thus, these two are observed as stochastic quantities. Whereas, for large samples (say 1000 mL or more), the induction time and MSZW are observed as deterministic quantities. The reason for such experimental differences is investigated with Monte Carlo simulation. In the simulation, the time (under isothermal condition) and supercooling (under polythermal condition) at which a first single crystal is detected are defined as the induction time t and the MSZW ΔT for small samples, respectively. The number of crystals just at the moment of t and ΔT is unity. A first crystal emerges at random due to the intrinsic nature of nucleation, accordingly t and ΔT become stochastic. For large samples, the time and supercooling at which the number density of crystals N/V reaches a detector sensitivity (N/V)det are defined as t and ΔT for isothermal and polythermal conditions, respectively. The points of t and ΔT are those of which a large number of crystals have accumulated. Consequently, t and ΔT become deterministic according to the law of large numbers. Whether t and ΔT may stochastic or deterministic in actual experiments should not be attributed to change in nucleation mechanisms in molecular level. It could be just a problem caused by differences in the experimental definition of t and ΔT.
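    The small-sample/large-sample contrast described above can be reproduced with a minimal Monte Carlo sketch (an illustration, not the paper's code). Nucleation is assumed to be a homogeneous Poisson process with rate J per unit volume; for a small sample, detection means the first crystal, while for a large sample it means accumulating a fixed number of crystals. All parameter values below are arbitrary.

```python
import random
import statistics

def induction_time(J, V, n_detect, rng):
    """Time until n_detect crystals have nucleated in volume V.

    Nucleation is modeled as a homogeneous Poisson process with
    rate J*V, so inter-nucleation times are exponential.
    """
    t = 0.0
    for _ in range(n_detect):
        t += rng.expovariate(J * V)
    return t

rng = random.Random(42)
J = 1.0  # nucleation rate per unit volume per unit time (arbitrary units)

# Small sample: detection = first single crystal -> widely scattered times.
small = [induction_time(J, V=1.0, n_detect=1, rng=rng) for _ in range(2000)]
# Large sample: detection requires a number density, i.e. many crystals
# -> the law of large numbers makes the time nearly deterministic.
large = [induction_time(J, V=1000.0, n_detect=1000, rng=rng) for _ in range(1000)]

cv_small = statistics.stdev(small) / statistics.mean(small)
cv_large = statistics.stdev(large) / statistics.mean(large)
print(cv_small, cv_large)  # coefficient of variation: near 1 vs near 0.03
```

The coefficient of variation collapses as the detection threshold grows, with no change to the nucleation mechanism, which is exactly the point of the record above.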

  20. Stochastic Analysis of the Efficiency of a Wireless Power Transfer System Subject to Antenna Variability and Position Uncertainties.

    Science.gov (United States)

    Rossi, Marco; Stockman, Gert-Jan; Rogier, Hendrik; Vande Ginste, Dries

    2016-07-19

    The efficiency of a wireless power transfer (WPT) system in the radiative near-field is inevitably affected by the variability in the design parameters of the deployed antennas and by uncertainties in their mutual position. Therefore, we propose a stochastic analysis that combines the generalized polynomial chaos (gPC) theory with an efficient model for the interaction between devices in the radiative near-field. This framework enables us to investigate the impact of random effects on the power transfer efficiency (PTE) of a WPT system. More specifically, the WPT system under study consists of a transmitting horn antenna and a receiving textile antenna operating in the Industrial, Scientific and Medical (ISM) band at 2.45 GHz. First, we model the impact of the textile antenna's variability on the WPT system. Next, we include the position uncertainties of the antennas in the analysis in order to quantify the overall variations in the PTE. The analysis is carried out by means of polynomial-chaos-based macromodels, whereas a Monte Carlo simulation validates the complete technique. It is shown that the proposed approach is very accurate, more flexible and more efficient than a straightforward Monte Carlo analysis, with demonstrated speedup factors up to 2500.
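    As a rough illustration of how a gPC surrogate can replace brute-force Monte Carlo in uncertainty quantification (a one-dimensional toy, not the paper's macromodelling framework), the sketch below projects a hypothetical efficiency response of one standard-normal parameter onto probabilists' Hermite polynomials and recovers its mean and variance from the coefficients. The response function and all numbers are invented for illustration.

```python
import math
import numpy as np

# Toy "efficiency" model: a smooth nonlinear response to one standard-normal
# parameter x (a hypothetical stand-in for an uncertain antenna dimension).
def efficiency(x):
    return 0.5 + 0.1 * np.tanh(x) + 0.02 * x**2

P = 8  # gPC expansion order
nodes, weights = np.polynomial.hermite_e.hermegauss(32)
weights = weights / math.sqrt(2 * math.pi)  # normalize to the N(0,1) measure

# Projection onto probabilists' Hermite polynomials He_k:
# c_k = E[f(X) He_k(X)] / k!   since E[He_k(X)^2] = k!
coeffs = []
for k in range(P + 1):
    He_k = np.polynomial.hermite_e.hermeval(nodes, [0] * k + [1])
    coeffs.append(np.sum(weights * efficiency(nodes) * He_k) / math.factorial(k))

gpc_mean = coeffs[0]
gpc_var = sum(c**2 * math.factorial(k) for k, c in enumerate(coeffs[1:], start=1))

# Monte Carlo reference: needs many samples for comparable accuracy.
rng = np.random.default_rng(0)
samples = efficiency(rng.standard_normal(200_000))
print(gpc_mean, gpc_var)
print(samples.mean(), samples.var())
```

The surrogate needs 32 model evaluations versus 200,000 for Monte Carlo here, which is the kind of speedup (though not the magnitude) the paper reports for its macromodels.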

  1. Stochastic Analysis of the Efficiency of a Wireless Power Transfer System Subject to Antenna Variability and Position Uncertainties

    Directory of Open Access Journals (Sweden)

    Marco Rossi

    2016-07-01

    The efficiency of a wireless power transfer (WPT) system in the radiative near-field is inevitably affected by the variability in the design parameters of the deployed antennas and by uncertainties in their mutual position. Therefore, we propose a stochastic analysis that combines the generalized polynomial chaos (gPC) theory with an efficient model for the interaction between devices in the radiative near-field. This framework enables us to investigate the impact of random effects on the power transfer efficiency (PTE) of a WPT system. More specifically, the WPT system under study consists of a transmitting horn antenna and a receiving textile antenna operating in the Industrial, Scientific and Medical (ISM) band at 2.45 GHz. First, we model the impact of the textile antenna’s variability on the WPT system. Next, we include the position uncertainties of the antennas in the analysis in order to quantify the overall variations in the PTE. The analysis is carried out by means of polynomial-chaos-based macromodels, whereas a Monte Carlo simulation validates the complete technique. It is shown that the proposed approach is very accurate, more flexible and more efficient than a straightforward Monte Carlo analysis, with demonstrated speedup factors up to 2500.

  2. Stochastic Analysis of the Efficiency of a Wireless Power Transfer System Subject to Antenna Variability and Position Uncertainties

    Science.gov (United States)

    Rossi, Marco; Stockman, Gert-Jan; Rogier, Hendrik; Vande Ginste, Dries

    2016-01-01

    The efficiency of a wireless power transfer (WPT) system in the radiative near-field is inevitably affected by the variability in the design parameters of the deployed antennas and by uncertainties in their mutual position. Therefore, we propose a stochastic analysis that combines the generalized polynomial chaos (gPC) theory with an efficient model for the interaction between devices in the radiative near-field. This framework enables us to investigate the impact of random effects on the power transfer efficiency (PTE) of a WPT system. More specifically, the WPT system under study consists of a transmitting horn antenna and a receiving textile antenna operating in the Industrial, Scientific and Medical (ISM) band at 2.45 GHz. First, we model the impact of the textile antenna’s variability on the WPT system. Next, we include the position uncertainties of the antennas in the analysis in order to quantify the overall variations in the PTE. The analysis is carried out by means of polynomial-chaos-based macromodels, whereas a Monte Carlo simulation validates the complete technique. It is shown that the proposed approach is very accurate, more flexible and more efficient than a straightforward Monte Carlo analysis, with demonstrated speedup factors up to 2500. PMID:27447632

  3. Simulating efficiently the evolution of DNA sequences.

    Science.gov (United States)

    Schöniger, M; von Haeseler, A

    1995-02-01

    Two menu-driven FORTRAN programs are described that simulate the evolution of DNA sequences in accordance with a user-specified model. This general stochastic model allows for an arbitrary stationary nucleotide composition and any transition-transversion bias during the process of base substitution. In addition, the user may define any hypothetical model tree according to which a family of sequences evolves. The programs suggest the computationally least expensive approach to generating nucleotide substitutions. Either reproducible or non-repeatable simulations can be performed, depending on the method of initializing the pseudo-random number generator. The corresponding options are offered by the interface menu.
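    A minimal sketch in the spirit of such simulators (not the FORTRAN programs described above): a Kimura-style substitution process in which each site mutates at a fixed rate and a transition is favoured over the two possible transversions by a bias factor kappa. The sequence length, rate, time and kappa below are illustrative choices.

```python
import random

BASES = "ACGT"
TRANSITION = {"A": "G", "G": "A", "C": "T", "T": "C"}

def evolve(seq, t, kappa, rate, rng):
    """Evolve a DNA sequence for time t under a Kimura-style model.

    Each site substitutes at total rate `rate`; a substitution is a
    transition with probability kappa/(kappa+2), otherwise one of the
    two possible transversions (chosen uniformly).
    """
    out = []
    for base in seq:
        # count substitution events at this site by summing
        # exponential waiting times (a Poisson process of rate `rate`)
        n, t_acc = 0, rng.expovariate(rate)
        while t_acc <= t:
            n += 1
            t_acc += rng.expovariate(rate)
        for _ in range(n):
            if rng.random() < kappa / (kappa + 2.0):
                base = TRANSITION[base]
            else:
                base = rng.choice([b for b in BASES
                                   if b != base and b != TRANSITION[base]])
        out.append(base)
    return "".join(out)

rng = random.Random(1)
ancestor = "".join(rng.choice(BASES) for _ in range(5000))
descendant = evolve(ancestor, t=0.1, kappa=4.0, rate=1.0, rng=rng)
diffs = sum(a != b for a, b in zip(ancestor, descendant))
print(diffs / len(ancestor))  # observed divergence, slightly below rate*t
```

Fixing the seed gives the "reproducible" mode mentioned in the abstract; seeding from the clock would give the non-repeatable mode.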

  4. Stochastic solution of population balance equations for reactor networks

    International Nuclear Information System (INIS)

    Menz, William J.; Akroyd, Jethro; Kraft, Markus

    2014-01-01

    This work presents a sequential modular approach to solving a generic network of reactors with a population balance model using a stochastic numerical method. Full coupling to the gas phase is achieved through operator splitting. The convergence of the stochastic particle algorithm in test networks is evaluated as a function of network size, recycle fraction and numerical parameters. These test cases are used to identify methods through which systematic and statistical error may be reduced, including the use of stochastic weighted algorithms. The optimal algorithm was subsequently used to solve a one-dimensional example of silicon nanoparticle synthesis using a multivariate particle model. This example demonstrated the power of stochastic methods in resolving particle structure by investigating the transient and spatial evolution of primary polydispersity, degree of sintering and TEM-style images. Highlights: • An algorithm is presented to solve reactor networks with a population balance model. • A stochastic method is used to solve the population balance equations. • The convergence and efficiency of the reported algorithms are evaluated. • The algorithm is applied to simulate silicon nanoparticle synthesis in a 1D reactor. • Particle structure is reported as a function of reactor length and time.

  5. Measuring energy efficiency under heterogeneous technologies using a latent class stochastic frontier approach: An application to Chinese energy economy

    International Nuclear Information System (INIS)

    Lin, Boqiang; Du, Kerui

    2014-01-01

    The importance of technology heterogeneity in estimating economy-wide energy efficiency has been emphasized by recent literature. Some studies use the metafrontier analysis approach to estimate energy efficiency. However, such studies need reliable prior information to divide the sample observations properly, which makes unbiased estimation of energy efficiency difficult. Moreover, separately estimating group-specific frontiers might lose some common information across different groups. To overcome these weaknesses, this paper introduces a latent class stochastic frontier approach to measure energy efficiency under heterogeneous technologies. An application of the proposed model to the Chinese energy economy is presented. Results show that the overall energy efficiency of China's provinces is not high, with an average score of 0.632 during the period from 1997 to 2010. - Highlights: • We introduce a latent class stochastic frontier approach to measure energy efficiency. • Ignoring technological heterogeneity would cause biased estimates of energy efficiency. • An application of the proposed model to the Chinese energy economy is presented. • There is still a long way for China to go in developing an energy-efficient regime.

  6. A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition

    International Nuclear Information System (INIS)

    Zheng Zheming; Stephens, Ryan M.; Braatz, Richard D.; Alkire, Richard C.; Petzold, Linda R.

    2008-01-01

    A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model that considers the site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation, which adaptively falls back to exact discrete stochastic simulation for the appropriate reactions whenever necessary. The HMKMC method is seen to be accurate and highly efficient.
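    The tau-leaping acceleration mentioned above can be sketched as follows (a basic, non-adaptive version applied to a simple reversible dimerization system, not the HMKMC implementation): per fixed step tau, each reaction fires a Poisson-distributed number of times based on its current propensity. Rate constants and populations are illustrative.

```python
import numpy as np

def tau_leap(x, stoich, propensities, tau, steps, rng):
    """Basic (non-adaptive) tau-leaping: per fixed step tau, fire a
    Poisson-distributed number of each reaction, then update the state."""
    for _ in range(steps):
        a = propensities(x)                  # current reaction propensities
        k = rng.poisson(a * tau)             # firings of each reaction
        x = np.maximum(x + stoich.T @ k, 0)  # guard against negative counts
    return x

# Example: reversible dimerization, 2A -> B (rate c1) and B -> 2A (rate c2).
c1, c2 = 0.001, 0.1
stoich = np.array([[-2, +1],    # 2A -> B
                   [+2, -1]])   # B -> 2A

def propensities(x):
    A, B = x
    return np.array([c1 * A * (A - 1) / 2.0, c2 * B])

rng = np.random.default_rng(7)
A_end, B_end = tau_leap(np.array([1000, 0]), stoich, propensities,
                        tau=0.05, steps=400, rng=rng)
print(A_end, B_end)  # fluctuates around the deterministic equilibrium
```

An exact SSA would simulate every individual firing (hundreds per tau here); the leap replaces them with two Poisson draws per step, which is where the speedup comes from, at the cost of a controllable discretization error.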

  7. Bigraphical Languages and their Simulation

    DEFF Research Database (Denmark)

    Højsgaard, Espen

    -trivial. A key problem is that of matching: deciding if and how a reaction rule applies to a bigraph. In this dissertation, we study bigraphs and their simulation for two types of practical formal languages: programming languages and languages for cell biology. First, we study programming languages and binding...... bigraphs, a variant of bigraphs with a facility for modeling the binders found in most programming languages. We construct and implement a provably correct term-based matching algorithm resulting in the BPL Tool, a first tool for binding bigraphs, which provides facilities for modeling, simulation...... of stochastic behavior which is useful in cell biology. We generalize an efficient and scalable stochastic simulation algorithm for the k-calculus to bigraphs. For this purpose, we develop and implement a number of theories for (stochastic) bigraphs: a formulation of the theory that is amenable...

  8. A stochastic model for the simulation of wind turbine blades in static stall

    DEFF Research Database (Denmark)

    Bertagnolio, Franck; Rasmussen, Flemming; Sørensen, Niels N.

    2010-01-01

    The aim of this work is to improve aeroelastic simulation codes by accounting for the unsteady aerodynamic forces that a blade experiences in static stall. A model based on a spectral representation of the aerodynamic lift force is defined. The drag and pitching moment are derived using...... a conditional simulation technique for stochastic processes. The input data for the model can be collected either from measurements or from numerical results from a Computational Fluid Dynamics code for airfoil sections at constant angles of attack. An analysis of such data is provided, which helps to determine...

  9. Stochastic-field cavitation model

    International Nuclear Information System (INIS)

    Dumond, J.; Magagnato, F.; Class, A.

    2013-01-01

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian “particles” or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  10. Stochastic-field cavitation model

    Science.gov (United States)

    Dumond, J.; Magagnato, F.; Class, A.

    2013-07-01

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  11. Stochastic-Strength-Based Damage Simulation of Ceramic Matrix Composite Laminates

    Science.gov (United States)

    Nemeth, Noel N.; Mital, Subodh K.; Murthy, Pappu L. N.; Bednarcyk, Brett A.; Pineda, Evan J.; Bhatt, Ramakrishna T.; Arnold, Steven M.

    2016-01-01

    The Finite Element Analysis-Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program was used to characterize and predict the progressive damage response of silicon-carbide-fiber-reinforced reaction-bonded silicon nitride matrix (SiC/RBSN) composite laminate tensile specimens. Studied were unidirectional laminates [0]_8, [10]_8, [45]_8, and [90]_8; cross-ply laminates [0_2/90_2]_s; angle-ply laminates [+45_2/-45_2]_s; double-edge-notched [0]_8 laminates; and central-hole laminates. Results correlated well with the experimental data. This work was performed as a validation and benchmarking exercise of the FEAMAC/CARES program. FEAMAC/CARES simulates stochastic-based, discrete-event progressive damage of ceramic matrix composite and polymer matrix composite material structures. It couples three software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC, and Abaqus is used to model the overall composite structure. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events that progress incrementally until ultimate structural failure.

  12. Stochastic optimization-based study of dimerization kinetics

    Indian Academy of Sciences (India)

    To this end, we study the dimerization kinetics of a protein as a model system. We follow the dimerization kinetics using a stochastic simulation algorithm and ... Keywords: optimization; dimerization kinetics; sensitivity analysis; stochastic simulation.

  13. Stochastic multiresonance in coupled excitable FHN neurons

    Science.gov (United States)

    Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua

    2018-04-01

    In this paper, the effects of noise on Watts-Strogatz small-world neuronal networks, which are stimulated by a subthreshold signal, have been investigated. With the numerical simulations, it is surprisingly found that there exist several optimal noise intensities at which the subthreshold signal can be detected efficiently. This indicates the occurrence of stochastic multiresonance in the studied neuronal networks. Moreover, it is revealed that the occurrence of stochastic multiresonance is closely related to the period of the subthreshold signal Te and the noise-induced mean period of the neuronal networks T0. In detail, we find that noise can induce the neuronal networks to generate stochastic resonance M times if Te is not very large and falls into the interval (M × T0, (M + 1) × T0), with M being a positive integer. In real neuronal systems, subthreshold signal detection is very meaningful. Thus, the results obtained in this paper could have important implications for detecting subthreshold signals and propagating neuronal information in neuronal systems.
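    Stochastic resonance itself, the building block of the multiresonance reported above, can be demonstrated with a much simpler system than an FHN network: a threshold detector driven by a subthreshold sine plus white noise. The sketch below (all parameters are illustrative, not taken from the paper) measures the output amplitude at the drive frequency and shows it peaking at an intermediate noise intensity.

```python
import numpy as np

rng = np.random.default_rng(11)

def response(noise_sd, T=400.0, dt=0.01, amp=0.3, period=5.0, theta=1.0):
    """Threshold detector driven by a subthreshold sine plus white noise;
    returns the output amplitude at the driving frequency."""
    t = np.arange(0, T, dt)
    signal = amp * np.sin(2 * np.pi * t / period)          # amp < theta
    spikes = signal + noise_sd * rng.standard_normal(t.size) > theta
    # Fourier amplitude of the spike train at the drive frequency
    return np.abs(np.sum(spikes * np.exp(-2j * np.pi * t / period))) / t.size

low, mid, high = response(0.2), response(0.8), response(5.0)
print(low, mid, high)  # intermediate noise gives the strongest response
```

With too little noise the signal never crosses the threshold; with too much, crossings lose their phase relation to the signal; in between, noise and signal cooperate, which is the single-resonance version of the effect the record describes.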

  14. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    Science.gov (United States)

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computation of flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied for the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
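    A minimal version of the random-walk dispersion model can be sketched as follows (an illustrative reconstruction with arbitrary parameters, not the paper's code): particles advect with the Poiseuille velocity profile, diffuse in all three directions, and reflect at the tube wall. The long-time axial spreading is then compared against the Taylor–Aris prediction D_eff = D + R²U²/(48D).

```python
import numpy as np

rng = np.random.default_rng(3)

R, U, D = 1.0, 1.0, 0.01        # tube radius, mean velocity, diffusivity
dt, steps = 0.05, 6000

# Start particles uniformly over the tube cross-section, at x = 0.
y = rng.uniform(-R, R, 6000); z = rng.uniform(-R, R, 6000)
keep = y**2 + z**2 <= R**2
y, z = y[keep], z[keep]
x = np.zeros(y.size)

sig = np.sqrt(2 * D * dt)       # random-walk step size per direction
var_t = {}
for step in range(1, steps + 1):
    x += 2 * U * (1 - (y**2 + z**2) / R**2) * dt   # Poiseuille advection
    x += sig * rng.standard_normal(x.size)          # axial diffusion
    y += sig * rng.standard_normal(x.size)          # radial diffusion
    z += sig * rng.standard_normal(x.size)
    r = np.hypot(y, z)
    hit = r > R                                     # reflect at the wall
    scale = (2 * R - r[hit]) / r[hit]
    y[hit] *= scale; z[hit] *= scale
    if step in (steps // 2, steps):
        var_t[step * dt] = x.var()

(t1, v1), (t2, v2) = sorted(var_t.items())
D_eff = (v2 - v1) / (2 * (t2 - t1))                 # slope of variance growth
D_taylor = D + R**2 * U**2 / (48 * D)               # Taylor–Aris prediction
print(D_eff, D_taylor)
```

Wrapping this forward model in an optimizer over D is, in outline, how a random-walk simulation coupled with a genetic algorithm can recover a diffusion coefficient from measured FIA responses.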

  15. Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composite

    Science.gov (United States)

    Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu

    2015-01-01

    Reported here is a coupling of two NASA-developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) and the MAC/GMC (Micromechanics Analysis Code/Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user-defined material). Here we describe the FEAMAC/CARES code, and an example problem (taken from the open literature) of a laminated CMC under off-axis loading is shown. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.

  16. Technical Efficiency of Thai Manufacturing SMEs: A Stochastic Frontier Analysis

    Directory of Open Access Journals (Sweden)

    Teerawat Charoenrat

    2013-03-01

    A major motivation of this study is to examine the factors that are the most important in contributing to the relatively poor efficiency performance of Thai manufacturing small and medium sized enterprises (SMEs). The results obtained will be significant in devising effective policies aimed at tackling this poor performance. This paper uses data on manufacturing SMEs in the North-eastern region of Thailand in 2007 as a case study, applying a stochastic frontier analysis (SFA) and a technical inefficiency effects model. The empirical results obtained indicate that the mean technical efficiency of all categories of manufacturing SMEs in the North-eastern region is 43%, implying that manufacturing SMEs have high levels of technical inefficiency in their production processes. Manufacturing SMEs in the North-eastern region are particularly labour-intensive. The empirical results of the technical inefficiency effects model suggest that skilled labour, the municipal area and ownership characteristics are important firm-specific factors affecting technical efficiency. The paper argues that the government should play a more substantial role in developing manufacturing SMEs in the North-eastern provinces through: providing training programs for employees and employers; encouraging a greater usage of capital and technology in the production process of SMEs; enhancing the efficiency of state-owned enterprises; encouraging a wide range of ownership forms; and improving information and communications infrastructure.

  17. Testing the new stochastic neutronic code ANET in simulating safety important parameters

    International Nuclear Information System (INIS)

    Xenofontos, T.; Delipei, G.-K.; Savva, P.; Varvayanni, M.; Maillard, J.; Silva, J.; Catsaros, N.

    2017-01-01

    Highlights: • ANET is a new stochastic neutronics code. • Criticality calculations in both subcritical and critical nuclear systems of conventional design were conducted. • Simulations of thermal, lower-epithermal and fast neutron fluence rates were performed. • Axial fission rate distributions in standard and MOX fuel pins were computed. - Abstract: ANET (Advanced Neutronics with Evolution and Thermal hydraulic feedback) is a Monte Carlo code under development for simulating both GEN II/III reactors and innovative nuclear reactor designs, based on CERN's high-energy physics code GEANT3.21. ANET is built by progressively extending the applicability of GEANT3.21, adding the simulation of particle transport and interactions at low energies, access to user-provided libraries and tracking algorithms for energies below 20 MeV, and the simulation of elastic and inelastic collisions, capture and fission. Testing applications performed throughout ANET's development have been used to verify the new code's capabilities. In this context, ANET's reliability in simulating certain reactor parameters important to safety is examined here. More specifically, the reactor criticality as well as the neutron fluence and fission rates are benchmarked and validated. The Portuguese Research Reactor (RPI), after its conversion to low enrichment in U-235, and the OECD/NEA VENUS-2 MOX international benchmark were considered appropriate for the present study, the former providing criticality and neutron flux data and the latter reaction rates. Concerning criticality benchmarking, the subcritical Training Nuclear Reactor of the Aristotle University of Thessaloniki (TNR-AUTh) was also analyzed. The obtained results are compared with experimental data from the critical infrastructures and with computations performed by two different, well-established stochastic neutronics codes, i.e. TRIPOLI-4.8 and MCNP5. Satisfactory agreement

  18. Optimising Shovel-Truck Fuel Consumption using Stochastic ...

    African Journals Online (AJOL)

    Optimising the fuel consumption and truck waiting time can result in significant fuel savings. The paper demonstrates that stochastic simulation is an effective tool for optimising the utilisation of fossil-based fuels in mining and related industries. Keywords: Stochastic, Simulation Modelling, Mining, Optimisation, Shovel-Truck ...
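    A stochastic shovel-truck simulation of the kind referred to above can be sketched as a closed queue: trucks cycle between a single shovel (the server) and the haul-dump-return loop, and fuel burned while queuing at the shovel is tallied. Everything here (the exponential time distributions, the rates, the idle-burn figure) is an illustrative assumption, not data from the paper.

```python
import heapq
import random

def simulate(n_trucks, t_end=10_000.0, load=3.0, cycle=20.0,
             idle_burn=0.3, seed=0):
    """Closed-queue shovel-truck sketch: one shovel (single server) and
    n_trucks cycling. Loading and haul-dump-return times are exponential
    (means `load` and `cycle`, minutes). Returns (loads hauled, litres of
    fuel burned idling in the queue at `idle_burn` L/min)."""
    rng = random.Random(seed)
    events = [(rng.expovariate(1 / cycle), i) for i in range(n_trucks)]
    heapq.heapify(events)                 # truck arrival times at the shovel
    shovel_free, wait_fuel, loads = 0.0, 0.0, 0
    while events:
        t, i = heapq.heappop(events)
        if t > t_end:
            break
        start = max(t, shovel_free)       # queue if the shovel is busy
        wait_fuel += (start - t) * idle_burn
        shovel_free = start + rng.expovariate(1 / load)
        loads += 1
        heapq.heappush(events, (shovel_free + rng.expovariate(1 / cycle), i))
    return loads, wait_fuel

for n in (4, 6, 8, 10):
    loads, fuel = simulate(n)
    print(n, loads, round(fuel, 1))  # more trucks: more loads, more idle fuel
```

Sweeping the fleet size like this exposes the trade-off the record mentions: throughput saturates at the shovel's capacity while idle fuel keeps climbing, so an intermediate fleet size minimizes fuel per tonne.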

  19. Stochastic diffusion models for substitutable technological innovations

    NARCIS (Netherlands)

    Wang, L.; Hu, B.; Yu, X.

    2004-01-01

    Based on an analysis of firms' stochastic adoption behaviour, this paper first points out the necessity of building more practical stochastic models. Stochastic evolutionary models are then built for a substitutable innovation diffusion system. Finally, through the computer simulation of the

  20. Stochastic weighted particle methods for population balance equations

    International Nuclear Information System (INIS)

    Patterson, Robert I.A.; Wagner, Wolfgang; Kraft, Markus

    2011-01-01

    Highlights: → Weight transfer functions for Monte Carlo simulation of coagulation. → Efficient support for single-particle growth processes. → Comparisons to analytic solutions and soot formation problems. → Better numerical accuracy for less common particles. - Abstract: A class of coagulation weight transfer functions is constructed, each member of which leads to a stochastic particle algorithm for the numerical treatment of population balance equations. These algorithms are based on systems of weighted computational particles and the weight transfer functions are constructed such that the number of computational particles does not change during coagulation events. The algorithms also facilitate the simulation of physical processes that change single particles, such as growth, or other surface reactions. Four members of the algorithm family have been numerically validated by comparison to analytic solutions to simple problems. Numerical experiments have been performed for complex laminar premixed flame systems in which members of the class of stochastic weighted particle methods were compared to each other and to a direct simulation algorithm. Two of the weighted algorithms have been shown to offer performance advantages over the direct simulation algorithm in situations where interest is focused on the larger particles in a system. The extent of this advantage depends on the particular system and on the quantities of interest.

  1. Application of users’ light-switch stochastic models to dynamic energy simulation

    DEFF Research Database (Denmark)

    Camisassi, V.; Fabi, V.; Andersen, Rune Korsholm

    2015-01-01

    The design of an innovative building should include an estimation of the building's overall energy flows. These are principally related to six main influencing factors (IEA-ECB Annex 53): climate, building envelope and equipment, operation and maintenance, occupant behaviour and indoor environment conditions ... deterministic inputs, due to the uncertain nature of human behaviour. In this paper, new stochastic models of users’ interaction with artificial lighting systems are developed and implemented in the energy simulation software IDA ICE. They were developed from field measurements in an office building in Prague ...

  2. Stochastic Modeling of Overtime Occupancy and Its Application in Building Energy Simulation and Calibration

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Kaiyu; Yan, Da; Hong, Tianzhen; Guo, Siyue

    2014-02-28

    Overtime is a common phenomenon around the world. Overtime drives both internal heat gains from occupants, lighting and plug-loads, and HVAC operation during overtime periods. Overtime leads to longer occupancy hours and extended operation of building services systems beyond normal working hours, thus overtime impacts total building energy use. Current literature lacks methods to model overtime occupancy because overtime is stochastic in nature and varies by individual occupants and by time. To address this gap in the literature, this study aims to develop a new stochastic model based on the statistical analysis of measured overtime occupancy data from an office building. A binomial distribution is used to represent the total number of occupants working overtime, while an exponential distribution is used to represent the duration of overtime periods. The overtime model is used to generate overtime occupancy schedules as an input to the energy model of a second office building. The measured and simulated cooling energy use during the overtime period is compared in order to validate the overtime model. A hybrid approach to energy model calibration is proposed and tested, which combines ASHRAE Guideline 14 for the calibration of the energy model during normal working hours, and a proposed KS test for the calibration of the energy model during overtime. The developed stochastic overtime model and the hybrid calibration approach can be used in building energy simulations to improve the accuracy of results, and better understand the characteristics of overtime in office buildings.
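    The two distributions named above compose directly into a schedule generator. A minimal sketch follows; the occupant count, overtime propensity and mean duration are hypothetical placeholders, not the paper's fitted values.

```python
import random

def overtime_schedule(n_occupants, p_overtime, mean_duration_h, rng):
    """Sample one evening: how many occupants stay late (binomial) and
    for how long each one stays (exponential), per the model above."""
    n_stay = sum(rng.random() < p_overtime for _ in range(n_occupants))
    return [rng.expovariate(1 / mean_duration_h) for _ in range(n_stay)]

rng = random.Random(2024)
# Hypothetical office: 40 occupants, 25% overtime propensity, 1.5 h mean stay.
days = [overtime_schedule(40, 0.25, 1.5, rng) for _ in range(1000)]

avg_count = sum(len(d) for d in days) / len(days)
all_durations = [h for d in days for h in d]
avg_duration = sum(all_durations) / len(all_durations)
print(avg_count, avg_duration)  # near 10 occupants and near 1.5 h
```

Feeding such sampled schedules into an energy model, instead of a fixed diversity profile, is what lets the simulation reproduce the variable evening loads the study calibrates against.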

  3. Measuring the Efficiency of a Hospital based on the Econometric Stochastic Frontier Analysis (SFA) Method.

    Science.gov (United States)

    Rezaei, Satar; Zandian, Hamed; Baniasadi, Akram; Moghadam, Telma Zahirian; Delavari, Somayeh; Delavari, Sajad

    2016-02-01

    Hospitals are the most expensive providers of health services in the world. Therefore, the evaluation of their performance can be used to reduce costs. The aim of this study was to determine the efficiency of the hospitals at the Kurdistan University of Medical Sciences using stochastic frontier analysis (SFA). This was a cross-sectional and retrospective study that assessed the performance of Kurdistan teaching hospitals (n = 12) between 2007 and 2013. The stochastic frontier analysis method was used to achieve this aim. The numbers of active beds, nurses, physicians, and other staff members were considered as input variables, while inpatient admissions were considered as the output. The data were analyzed using the Frontier 4.1 software. The mean technical efficiency of the hospitals we studied was 0.67. The results of the Cobb-Douglas production function showed that the maximum elasticity was related to the active beds and that the elasticity of nurses was negative. Also, returns to scale were increasing. The results of this study indicated that the performance of the hospitals was not appropriate in terms of technical efficiency. In addition, there was scope for enhancing the output of the hospitals, compared with the most efficient hospitals studied, by about 33%. It is suggested that the effects of various factors, such as the quality of health care and patients' satisfaction, be considered in future studies assessing hospital performance.
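As a rough illustration of frontier-style efficiency scoring, the sketch below fits a Cobb-Douglas production function by OLS and applies the simpler corrected-OLS (COLS) deterministic-frontier shift; a true SFA, as used in the study (via Frontier 4.1), would instead estimate a composed noise-plus-inefficiency error by maximum likelihood. All data and names here are hypothetical.

```python
import numpy as np

def cols_efficiency(inputs, output):
    """Corrected-OLS (COLS) deterministic-frontier sketch: fit the Cobb-Douglas
    form ln(y) = b0 + sum_k b_k ln(x_k) + e by least squares, shift the
    intercept so the frontier envelops the data, and read technical efficiency
    off the residuals. Stands in for (but is not) a stochastic frontier fit."""
    X = np.column_stack([np.ones(len(output))] + [np.log(x) for x in inputs])
    y = np.log(output)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    te = np.exp(resid - resid.max())   # technical efficiency in (0, 1]
    return beta, te
```

The most efficient unit scores exactly 1; the mean of `te` plays the role of the 0.67 mean technical efficiency reported above.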

  4. 2-stage stochastic Runge–Kutta for stochastic delay differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Rosli, Norhayati; Jusoh Awang, Rahimah [Faculty of Industrial Science and Technology, Universiti Malaysia Pahang, Lebuhraya Tun Razak, 26300, Gambang, Pahang (Malaysia); Bahar, Arifah; Yeak, S. H. [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor (Malaysia)

    2015-05-15

    This paper proposes a newly developed one-step derivative-free method, the 2-stage stochastic Runge-Kutta (SRK2) scheme, to approximate the solution of stochastic delay differential equations (SDDEs) with a constant time lag, r > 0. A general formulation of stochastic Runge-Kutta methods for SDDEs is introduced, and the Stratonovich Taylor series expansion for the numerical solution of SRK2 is presented. The local truncation error of SRK2 is measured by comparing the Stratonovich Taylor expansion of the exact solution with that of the computed solution. A numerical experiment is performed to confirm the validity of the method in simulating the strong solution of SDDEs.
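A derivative-free two-stage scheme of this general flavor can be sketched with a Heun-type predictor-corrector for a scalar Stratonovich SDDE with constant lag. This is a generic illustration only; it does not reproduce the paper's SRK2 coefficients or convergence analysis.

```python
import numpy as np

def heun_sdde(f, g, history, r, h, n_steps, rng):
    """Heun-type (2-stage) scheme for a scalar Stratonovich SDDE
        dX = f(X(t), X(t - r)) dt + g(X(t), X(t - r)) o dW,
    with constant lag r > 0. `history` holds samples at t = -r, -r+h, ..., 0.
    Illustrative sketch only, not the paper's SRK2 coefficients."""
    lag = int(round(r / h))
    x = list(history)                    # x[k] approximates X(-r + k h)
    for _ in range(n_steps):
        xn, xd = x[-1], x[-1 - lag]      # current and delayed states
        dw = rng.normal(0.0, np.sqrt(h))
        # stage 1: Euler predictor
        xp = xn + f(xn, xd) * h + g(xn, xd) * dw
        # stage 2: trapezoidal (Heun) corrector, delayed argument frozen
        x.append(xn + 0.5 * (f(xn, xd) + f(xp, xd)) * h
                    + 0.5 * (g(xn, xd) + g(xp, xd)) * dw)
    return np.array(x[lag:])
```

With the noise term switched off the scheme reduces to the deterministic Heun method, which is a quick sanity check on the implementation.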

  5. Simulating local measurements on a quantum many-body system with stochastic matrix product states

    DEFF Research Database (Denmark)

    Gammelmark, Søren; Mølmer, Klaus

    2010-01-01

    We demonstrate how to simulate both discrete and continuous stochastic evolutions of a quantum many-body system subject to measurements using matrix product states. A particular, but generally applicable, measurement model is analyzed and a simple representation in terms of matrix product operators...... is found. The technique is exemplified by numerical simulations of the antiferromagnetic Heisenberg spin-chain model subject to various instances of the measurement model. In particular, we focus on local measurements with small support and nonlocal measurements, which induce long-range correlations....

  6. Production and efficiency of large wildland fire suppression effort: A stochastic frontier analysis.

    Science.gov (United States)

    Katuwal, Hari; Calkin, David E; Hand, Michael S

    2016-01-15

    This study examines the production and efficiency of wildland fire suppression effort. We estimate the effectiveness of suppression resource inputs to produce controlled fire lines that contain large wildland fires using stochastic frontier analysis. Determinants of inefficiency are identified and the effects of these determinants on the daily production of controlled fire line are examined. Results indicate that the use of bulldozers and fire engines increase the production of controlled fire line, while firefighter crews do not tend to contribute to controlled fire line production. Production of controlled fire line is more efficient if it occurs along natural or built breaks, such as rivers and roads, and within areas previously burned by wildfires. However, results also indicate that productivity and efficiency of the controlled fire line are sensitive to weather, landscape and fire characteristics. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Efficiency simulation of long neutron counter

    International Nuclear Information System (INIS)

    Hu Qingyuan; Li Bojun; Zhang De; Guo Hongsheng; Wang Dong; Yang Gaozhao; Si Fenni; Liu Jian

    2008-01-01

    In order to achieve high efficiency and uniform sensitivity for neutrons with widely different energies, the efficiency of a long boron trifluoride proportional counter embedded in a polyethylene moderator was simulated with the MCNP code. The results show that detection efficiency increases with increasing moderator radius, and that the response curve at higher energies can be ameliorated by adjusting the thickness of the front moderator. We also calculated the relative efficiencies at different energies for a detector whose efficiencies had been calibrated on an accelerator. The simulated efficiency for D-D neutrons (2.4 MeV) is 75% of the efficiency for D-T neutrons (14.1 MeV), which approximately agrees with the experimental value of 61%. The validity of the simulation model is supported by the consistency between the calculated and experimental data. (authors)

  8. Stochastic simulation and decadal prediction of hydroclimate in the Western Himalayas

    Science.gov (United States)

    Robertson, A. W.; Chekroun, M. D.; Cook, E.; D'Arrigo, R.; Ghil, M.; Greene, A. M.; Holsclaw, T.; Kondrashov, D. A.; Lall, U.; Lu, M.; Smyth, P.

    2012-12-01

    Improved estimates of climate over the next 10 to 50 years are needed for long-term planning in water resource and flood management. However, the task of effectively incorporating the results of climate change research into decision-making faces a "double conflict of scales": the temporal scales of climate model projections are too long, while their usable spatial scales (global to planetary) are much larger than those needed for actual decision making (at the regional to local level). This work is designed to help tackle this "double conflict" in the context of water management over monsoonal Asia, based on dendroclimatic multi-century reconstructions of drought indices and river flows. We identify low-frequency modes of variability with time scales from interannual to interdecadal based on these series, and then generate future scenarios based on (a) empirical model decadal predictions, and (b) stochastic simulations generated with autoregressive models that reproduce the power spectrum of the data. Finally, we consider how such scenarios could be used to develop reservoir optimization models. Results will be presented based on multi-century Upper Indus river discharge reconstructions that exhibit a strong periodicity near 27 years, which is shown to yield some retrospective forecasting skill over the 1700-2000 period at a 15-yr lead time. Stochastic simulations of annual PDSI drought index values over the Upper Indus basin are constructed using Empirical Model Reduction; their power spectra are shown to be quite realistic, with spectral peaks near 5-8 years.
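The second kind of scenario generation mentioned above, stochastic simulation with an autoregressive model whose surrogates reproduce the record's power spectrum, can be illustrated with a minimal AR(2) Yule-Walker fit-and-simulate pair. This is a hypothetical sketch, not the authors' Empirical Model Reduction code.

```python
import numpy as np

def fit_ar2_yule_walker(x):
    """Fit an AR(2) model by Yule-Walker: solve for the coefficients that
    reproduce the lag-1 and lag-2 autocovariances of the record (and hence
    the broad shape of its power spectrum)."""
    x = np.asarray(x, float) - np.mean(x)
    r = [np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(3)]
    phi = np.linalg.solve([[r[0], r[1]], [r[1], r[0]]], [r[1], r[2]])
    sigma2 = r[0] - phi[0] * r[1] - phi[1] * r[2]   # innovation variance
    return phi, sigma2

def simulate_ar2(phi, sigma2, n, rng, burn=500):
    """Generate a stochastic surrogate series from the fitted AR(2)."""
    x = np.zeros(n + burn)
    e = rng.normal(0.0, np.sqrt(sigma2), n + burn)
    for t in range(2, n + burn):
        x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + e[t]
    return x[burn:]
```

An ensemble of such surrogates gives scenario inputs whose spectral peaks match the fitted record, in the spirit of the simulations described above.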

  9. Stochastic Analysis with Financial Applications

    CERN Document Server

    Kohatsu-Higa, Arturo; Sheu, Shuenn-Jyi

    2011-01-01

    Stochastic analysis has a variety of applications to biological systems as well as physical and engineering problems, and its applications to finance and insurance have bloomed exponentially in recent times. The goal of this book is to present a broad overview of the range of applications of stochastic analysis and some of its recent theoretical developments. This includes numerical simulation, error analysis, parameter estimation, as well as control and robustness properties for stochastic equations. This book also covers the areas of backward stochastic differential equations via the (non-li

  10. Stochastic analysis of complex reaction networks using binomial moment equations.

    Science.gov (United States)

    Barzel, Baruch; Biham, Ofer

    2012-09-01

    The stochastic analysis of complex reaction networks is a difficult problem because the number of microscopic states in such systems increases exponentially with the number of reactive species. Direct integration of the master equation is thus infeasible and is most often replaced by Monte Carlo simulations. While Monte Carlo simulations are a highly effective tool, equation-based formulations are more amenable to analytical treatment and may provide deeper insight into the dynamics of the network. Here, we present a highly efficient equation-based method for the analysis of stochastic reaction networks. The method is based on the recently introduced binomial moment equations [Barzel and Biham, Phys. Rev. Lett. 106, 150602 (2011)]. The binomial moments are linear combinations of the ordinary moments of the probability distribution function of the population sizes of the interacting species. They capture the essential combinatorics of the reaction processes reflecting their stoichiometric structure. This leads to a simple and transparent form of the equations, and allows a highly efficient and surprisingly simple truncation scheme. Unlike ordinary moment equations, in which the inclusion of high order moments is prohibitively complicated, the binomial moment equations can be easily constructed up to any desired order. The result is a set of equations that enables the stochastic analysis of complex reaction networks under a broad range of conditions. The number of equations is dramatically reduced from the exponential proliferation of the master equation to a polynomial (and often quadratic) dependence on the number of reactive species in the binomial moment equations. The aim of this paper is twofold: to present a complete derivation of the binomial moment equations; to demonstrate the applicability of the moment equations for a representative set of example networks, in which stochastic effects play an important role.

  11. Model tracking dual stochastic controller design under irregular internal noises

    International Nuclear Information System (INIS)

    Lee, Jong Bok; Heo, Hoon; Cho, Yun Hyun; Ji, Tae Young

    2006-01-01

    Although many methods for the control of irregular external noise have been introduced and implemented, it is still necessary to design more effective and efficient controllers that can reject a variety of noises. Accumulation of errors due to model tracking, internal noises (thermal noise, shot noise and 1/f noise) that arise from elements such as resistors, diodes and transistors in the circuit system, and numerical errors due to digital processing often destabilize the system and reduce system performance. A new stochastic controller is adopted to remove those noises while operating alongside a conventional controller. A design method for a model tracking dual controller is proposed to improve the stability of the system while removing external and internal noises. In this study, the design process of the model tracking dual stochastic controller is introduced; it improves system performance and guarantees robustness under irregular internal noises which can be created internally. The model tracking dual stochastic controller, utilizing the F-P-K stochastic control technique developed earlier, is implemented and its performance revealed via simulation.

  12. Stochastic dynamic analysis of marine risers considering Gaussian system uncertainties

    Science.gov (United States)

    Ni, Pinghe; Li, Jun; Hao, Hong; Xia, Yong

    2018-03-01

    This paper performs the stochastic dynamic response analysis of marine risers with material uncertainties, i.e. in the mass density and elastic modulus, by using the Stochastic Finite Element Method (SFEM) and a model reduction technique. These uncertainties are assumed to have Gaussian distributions. The random mass density and elastic modulus are represented by using the Karhunen-Loève (KL) expansion. The Polynomial Chaos (PC) expansion is adopted to represent the vibration response because the covariance of the output is unknown. Model reduction based on the Iterated Improved Reduced System (IIRS) technique is applied to eliminate the PC coefficients of the slave degrees of freedom and thereby reduce the dimension of the stochastic system. Monte Carlo Simulation (MCS) is conducted to obtain the reference response statistics. Two numerical examples are studied in this paper. The response statistics from the proposed approach are compared with those from MCS. It is noted that the computational time is significantly reduced while the accuracy is maintained. The results demonstrate the efficiency of the proposed approach for stochastic dynamic response analysis of marine risers.
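A minimal sketch of the Karhunen-Loève representation used above, here for a zero-mean Gaussian field with an assumed exponential covariance on a 1-D grid (the marine-riser model itself is not reproduced):

```python
import numpy as np

def kl_expansion(x, corr_len, n_terms, rng, n_samples=1):
    """Discrete Karhunen-Loeve expansion of a zero-mean Gaussian random field
    with exponential covariance C(s, t) = exp(-|s - t| / corr_len) on grid x:
    eigendecompose the covariance matrix, keep the n_terms largest modes, and
    form field ~= sum_k sqrt(lambda_k) xi_k v_k with xi_k ~ N(0, 1)."""
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    lam, V = np.linalg.eigh(C)
    lam, V = lam[::-1][:n_terms], V[:, ::-1][:, :n_terms]   # largest modes first
    xi = rng.normal(size=(n_terms, n_samples))              # uncorrelated KL coefficients
    return V @ (np.sqrt(np.maximum(lam, 0.0))[:, None] * xi)
```

Truncating `n_terms` well below the grid size is what makes the representation cheap: a few dominant modes capture most of the field's variance when the correlation length is not too small.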

  13. Design and validation of a dynamic discrete event stochastic simulation model of mastitis control in dairy herds.

    Science.gov (United States)

    Allore, H G; Schruben, L W; Erb, H N; Oltenacu, P A

    1998-03-01

    A dynamic stochastic simulation model for discrete events, SIMMAST, was developed to simulate the effect of mastitis on the composition of the bulk tank milk of dairy herds. Intramammary infections caused by Streptococcus agalactiae, Streptococcus spp. other than Strep. agalactiae, Staphylococcus aureus, and coagulase-negative staphylococci were modeled as were the milk, fat, and protein test day solutions for individual cows, which accounted for the fixed effects of days in milk, age at calving, season of calving, somatic cell count (SCC), and random effects of test day, cow yield differences from herdmates, and autocorrelated errors. Probabilities for the transitions among various states of udder health (uninfected or subclinically or clinically infected) were calculated to account for exposure, heifer infection, spontaneous recovery, lactation cure, infection or cure during the dry period, month of lactation, parity, within-herd yields, and the number of quarters with clinical intramammary infection in the previous and current lactations. The stochastic simulation model was constructed using estimates from the literature and also using data from 164 herds enrolled with Quality Milk Promotion Services that each had bulk tank SCC between 500,000 and 750,000/ml. Model parameters and outputs were validated against a separate data file of 69 herds from the Northeast Dairy Herd Improvement Association, each with a bulk tank SCC that was > or = 500,000/ml. Sensitivity analysis was performed on all input parameters for control herds. Using the validated stochastic simulation model, the control herds had a stable time average bulk tank SCC between 500,000 and 750,000/ml.

  14. Stochastic model for simulating Souris River Basin precipitation, evapotranspiration, and natural streamflow

    Science.gov (United States)

    Kolars, Kelsey A.; Vecchia, Aldo V.; Ryberg, Karen R.

    2016-02-24

    The Souris River Basin is a 61,000-square-kilometer basin in the Provinces of Saskatchewan and Manitoba and the State of North Dakota. In May and June of 2011, record-setting rains were seen in the headwater areas of the basin. Emergency spillways of major reservoirs were discharging at full or nearly full capacity, and extensive flooding was seen in numerous downstream communities. To determine the probability of future extreme floods and droughts, the U.S. Geological Survey, in cooperation with the North Dakota State Water Commission, developed a stochastic model for simulating Souris River Basin precipitation, evapotranspiration, and natural (unregulated) streamflow. Simulations from the model can be used in future studies to simulate regulated streamflow, to design levees and other structures, and to complete economic cost/benefit analyses. Long-term climatic variability was analyzed using tree-ring chronologies to hindcast precipitation to the early 1700s and compare recent wet and dry conditions to earlier extreme conditions. The extended precipitation record was consistent with findings from the Devils Lake and Red River of the North Basins (southeast of the Souris River Basin), supporting the idea that regional climatic patterns for many centuries have consisted of alternating wet and dry climate states. A stochastic climate simulation model for precipitation, temperature, and potential evapotranspiration for the Souris River Basin was developed using recorded meteorological data and extended precipitation records provided through tree-ring analysis. A significant climate transition was seen around 1970, with 1912-69 representing a dry climate state and 1970-2011 representing a wet climate state. Although there were some distinct subpatterns within the basin, the predominant differences between the two states were higher spring through early fall precipitation and higher spring potential evapotranspiration for the wet compared to the dry state. A water

  15. A Dynamic BI–Orthogonal Field Equation Approach to Efficient Bayesian Inversion

    Directory of Open Access Journals (Sweden)

    Tagade Piyush M.

    2017-06-01

    Full Text Available This paper proposes a novel, computationally efficient stochastic-spectral-projection-based approach to Bayesian inversion of a computer simulator with high-dimensional parametric and model structure uncertainty. The proposed method is based on the decomposition of the solution into its mean and a random field using a generic Karhunen-Loève expansion. The random field is represented as a convolution of separable Hilbert spaces in the stochastic and spatial dimensions that are spectrally represented using respective orthogonal bases. In particular, the present paper investigates generalized polynomial chaos bases for the stochastic dimension and eigenfunction bases for the spatial dimension. Dynamic orthogonality is used to derive closed-form equations for the time evolution of the mean, spatial and stochastic fields. The resultant system of equations consists of a partial differential equation (PDE) that defines the dynamic evolution of the mean, a set of PDEs to define the time evolution of the eigenfunction bases, and a set of ordinary differential equations (ODEs) that define the dynamics of the stochastic field. This system of dynamic evolution equations efficiently propagates the prior parametric uncertainty to the system response. The resulting bi-orthogonal expansion of the system response is used to reformulate the Bayesian inference for efficient exploration of the posterior distribution. The efficacy of the proposed method is investigated for calibration of a 2D transient diffusion simulator with an uncertain source location and diffusivity. The computational efficiency of the method is demonstrated against a Monte Carlo method and a generalized polynomial chaos approach.
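The generalized polynomial chaos basis mentioned for the stochastic dimension can be illustrated in a single Gaussian dimension: project a response function onto probabilists' Hermite polynomials by Gauss-Hermite quadrature. A hypothetical one-dimensional sketch, not the paper's multi-dimensional implementation:

```python
import math
import numpy as np

def gpc_hermite_coeffs(f, order, n_quad=64):
    """Project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials He_k
    (the gPC basis for a Gaussian dimension): c_k = E[f(xi) He_k(xi)] / k!,
    with the expectation computed by Gauss-Hermite quadrature."""
    # physicists' Gauss-Hermite nodes/weights, rescaled for the N(0,1) measure
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    xi = np.sqrt(2.0) * nodes
    w = weights / np.sqrt(np.pi)
    c = []
    for k in range(order + 1):
        He_k = np.polynomial.hermite_e.hermeval(xi, [0.0] * k + [1.0])
        c.append(np.sum(w * f(xi) * He_k) / math.factorial(k))
    return np.array(c)
```

For example, f(xi) = xi^2 decomposes exactly as He_0 + He_2, so the coefficient vector is (1, 0, 1, 0, ...): a quick check that the projection and normalization are consistent.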

  16. Experiments and stochastic simulations of lignite coal during pyrolysis and gasification

    International Nuclear Information System (INIS)

    Ahmed, I.I.; Gupta, A.K.

    2013-01-01

    Highlights: ► Lignite pyrolysis and gasification have been conducted in a semi-batch reactor. ► The objective is to understand the mechanism of syngas evolution during pyrolysis. ► Stochastic simulations of lignite pyrolysis were conducted using the Gillespie algorithm. ► A first-order, single-step mechanism failed to fit the cumulative yield of hydrogen. ► Hydrogen evolves via pyrolysis of gaseous hydrocarbons following bridge scission. -- Abstract: Lignite pyrolysis and gasification have been conducted in a semi-batch reactor at reactor temperatures of 800–950 °C in 50 °C intervals. CO2 has been used as the gasifying agent for the gasification experiments. The objective of this investigation is to understand the mechanism of syngas evolution during pyrolysis and to unravel the effect of CO2 on the pyrolysis mechanism. Stochastic simulations of lignite pyrolysis have been conducted using the Gillespie algorithm. Two reaction mechanisms have been used in the simulations: a first-order, single-step mechanism and the FLASHCHAIN mechanism. The first-order, single-step mechanism was successful in fitting the cumulative yields of CO2, CO, CH4 and other hydrocarbons (CnHm), but failed to fit the cumulative yield of hydrogen, which suggests a more complex mechanism for hydrogen evolution. The evolution of the CO2, CO, CH4, CnHm and H2 flow rates has been monitored. The only effect of CO2 on the pyrolysis mechanism, for the experiments described here, is the promotion of the reverse water-gas shift reaction. Methane evolution extended for a slightly longer time than that of other hydrocarbons, and hydrogen evolution extended for a slightly longer time than that of methane. This indicates the evolution of hydrogen via further pyrolysis of aliphatic hydrocarbons. It is also suggested that this step occurs in series, after the evolution of aliphatic hydrocarbons by bridge scission.
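The Gillespie (direct-method) simulation named above can be sketched for a first-order, single-step channel of the kind the study fits to the cumulative gas yields; the rate constant and initial population below are hypothetical:

```python
import numpy as np

def gillespie_first_order(n0, k, t_end, rng):
    """Gillespie direct-method simulation of the first-order channel
    A -> products: the propensity is a = k * N_A, so the waiting time to the
    next firing is exponential with mean 1/a."""
    t, n = 0.0, n0
    times, counts = [0.0], [n0]
    while n > 0 and t < t_end:
        a = k * n
        t += rng.exponential(1.0 / a)   # time to next reaction firing
        if t >= t_end:
            break
        n -= 1                          # one A molecule converts
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)
```

Averaged over many runs, the trajectory tracks the deterministic first-order decay n0·exp(-k·t), which is exactly the single-step fit discussed in the abstract.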

  17. Stochastic curtailment of questionnaires for three-level classification: Shortening the CES-D for assessing low, moderate, and high risk of depression

    NARCIS (Netherlands)

    Smits, N.; Finkelman, M.D.; Kelderman, H.

    2016-01-01

    In clinical assessment, efficient screeners are needed to ensure low respondent burden. In this article, Stochastic Curtailment (SC), a method for efficient computerized testing for classification into two classes for observable outcomes, was extended to three classes. In a post hoc simulation study

  18. Lot Sizing Based on Stochastic Demand and Service Level Constraint

    Directory of Open Access Journals (Sweden)

    hajar shirneshan

    2012-06-01

    Full Text Available Considering its applications, stochastic lot sizing is a significant subject in production planning. Moreover, the concept of service level is more applicable than shortage cost from a manager's viewpoint. In this paper, the stochastic multi-period, multi-item, capacitated lot sizing problem is investigated under a service level constraint. First, the single-item model with no capacity constraint is developed considering the service level; it is solved using a dynamic programming algorithm and the optimal solution is derived. The model is then generalized to the multi-item problem with a capacity constraint. The stochastic multi-period, multi-item, capacitated lot sizing problem is NP-hard, hence the model cannot be solved by exact optimization approaches. Therefore, the simulated annealing method has been applied for solving the problem. Finally, in order to evaluate the efficiency of the model, a low-level criterion has been used.
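A generic simulated-annealing skeleton of the kind applied to this NP-hard lot-sizing model looks as follows; the lot-sizing encoding itself (periods, items, capacity and service-level feasibility checks) is not reproduced here:

```python
import numpy as np

def simulated_annealing(cost, neighbor, x0, rng, t0=1.0, cooling=0.995, n_iter=2000):
    """Generic simulated-annealing skeleton: always accept improving neighbors,
    accept worse ones with probability exp(-delta / T), and cool T geometrically.
    For lot sizing, `neighbor` would perturb a production plan and `cost` would
    penalize infeasible capacity or service-level violations."""
    x, fx, T = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(n_iter):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < np.exp(-(fy - fx) / T):
            x, fx = y, fy               # move accepted
            if fx < fbest:
                best, fbest = x, fx     # track incumbent best
        T *= cooling
    return best, fbest
```

The geometric cooling schedule makes the search nearly greedy in late iterations, so early high-temperature moves do the global exploration.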

  19. Stochastic strong ground motion simulations for the intermediate-depth earthquakes of the south Aegean subduction zone

    Science.gov (United States)

    Kkallas, Harris; Papazachos, Konstantinos; Boore, David; Margaris, Vasilis

    2015-04-01

    We have employed the stochastic finite-fault modelling approach of Motazedian and Atkinson (2005), as described by Boore (2009), for the simulation of Fourier spectra of the intermediate-depth earthquakes of the south Aegean subduction zone. The stochastic finite-fault method is a practical tool for simulating the ground motions of future earthquakes, and requires region-specific source, path and site characterizations as input model parameters. For this reason we have used data from both acceleration-sensor and broadband velocity-sensor instruments for intermediate-depth earthquakes with magnitudes of M 4.5-6.7 that occurred in the south Aegean subduction zone. Source mechanisms for intermediate-depth events of the north Aegean subduction zone are either collected from published information or are constrained using the main faulting types from Kkallas et al. (2013). The attenuation parameters for the simulations were adopted from Skarladoudis et al. (2013) and are based on regression analysis of a response-spectra database. The site amplification functions for each soil class were adopted from Klimis et al. (1999), while the kappa values were constrained from the analysis of the EGELADOS network data from Ventouzi et al. (2013). The investigation of stress-drop values was based on simulations performed with the EXSIM code for several ranges of stress-drop values and on comparing the results with the available Fourier spectra of intermediate-depth earthquakes. Significant differences in strong-motion duration, which is determined from Husid plots (Husid, 1969), have been identified between the fore-arc and along-arc stations due to the effect of the low-velocity/low-Q mantle wedge on seismic wave propagation. In order to estimate appropriate values for the duration of P-waves, we have automatically picked P-S durations on the available seismograms. For the S-wave durations we have used the part of the seismograms starting from the S-arrivals and ending at the

  20. The effect of regulatory governance on efficiency of thermal power generation in India: A stochastic frontier analysis

    International Nuclear Information System (INIS)

    Ghosh, Ranjan; Kathuria, Vinish

    2016-01-01

    This paper investigates the impact of institutional quality – typified as regulatory governance – on the performance of thermal power plants in India. The Indian power sector was reformed in the early 1990s. However, reforms are only as effective as the regulators' commitment to ensuring that they are implemented. We hypothesize that the higher the quality of regulation in a federal Indian state, the higher the efficiency of its electric generation utilities. A translog stochastic frontier model is estimated using an index of state-level independent regulation as one of the determinants of inefficiency. The dataset comprises a panel of 77 coal-based thermal power plants during the reform period, covering over 70% of installed electricity generation capacity. The mean technical efficiency of 76.7% indicates that there is wide scope for efficiency improvement in the sector. Results are robust to various model specifications and show that state-level regulators have positively impacted plant performance. Technical efficiency is sensitive to both the unbundling of state utilities and regulatory experience. The policy implication is that further reforms which empower independent regulators will have far-reaching impacts on power sector performance. - Highlights: • The impact of regulatory governance on Indian generation efficiency is investigated. • Stochastic frontier analysis (SFA) on a panel dataset covering the pre- and post-reform eras. • An index of state-wise variation in regulation explains the inefficiency effects. • Results show improved but not very high technical efficiencies. • State-level regulation has positively impacted power plant performance.

  1. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Webster, Clayton G [ORNL; Zhang, Guannan [ORNL; Gunzburger, Max D [ORNL

    2012-10-01

    Accurate predictive simulations of complex real-world applications require numerical approximations that, first, oppose the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting, and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.

  2. Stochastic Optimization of Wind Turbine Power Factor Using Stochastic Model of Wind Power

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Siano, Pierluigi; Bak-Jensen, Birgitte

    2010-01-01

    This paper proposes a stochastic optimization algorithm that aims to minimize the expectation of the system power losses by controlling wind turbine (WT) power factors. The optimization is subject to probability constraints on bus voltage and line current requirements....... The optimization algorithm utilizes stochastic models of wind power generation (WPG) and load demand to take their stochastic variation into account. The stochastic model of WPG is developed on the basis of a limited autoregressive integrated moving average (LARIMA) model by introducing a cross-correlation...... structure to the LARIMA model. The proposed stochastic optimization is carried out on a 69-bus distribution system. Simulation results confirm that, under various combinations of WPG and load demand, the system power losses are considerably reduced with the optimal setting of WT power factor as compared...

  3. Stochastic analysis of residential micro combined heat and power system

    DEFF Research Database (Denmark)

    Karami, H.; Sanjari, M. J.; Gooi, H. B.

    2017-01-01

    In this paper the combined heat and power functionality of a fuel-cell in a residential hybrid energy system, including a battery, is studied. The demand uncertainties are modeled by investigating the stochastic load behavior by applying Monte Carlo simulation. The colonial competitive algorithm...... algorithm. The optimized scheduling of different energy resources is listed in an efficient look-up table for all time intervals. The effects of time of use and the battery efficiency and its size are investigated on the operating cost of the hybrid energy system. The results of this paper are expected...

  4. Stochastic simulation of destruction processes in self-irradiated materials

    Directory of Open Access Journals (Sweden)

    T. Patsahan

    2017-09-01

    Full Text Available Self-irradiation damage resulting from fission processes is a common phenomenon observed in nuclear fuel containing (NFC) materials. Numerous α-decays lead to local structure transformations in NFC materials. The damage appearing due to the impacts of heavy nuclear recoils in the subsurface layer can cause detachment of material particles. Such behaviour is similar to the sputtering processes observed during bombardment of a material surface by a flux of energetic particles. In the NFC material, however, the impacts are initiated from the bulk. In this work we propose a two-dimensional mesoscopic model to perform a stochastic simulation of the destruction processes occurring in the subsurface region of an NFC material. We describe the erosion of the material surface and the evolution of its roughness, and predict the detachment of material particles. Size distributions of the emitted particles are obtained in this study. The simulation results of the model are in qualitative agreement with the size histogram of particles produced from the material containing lava-like fuel formed during the Chernobyl nuclear power plant disaster.

  5. A heterogeneous stochastic FEM framework for elliptic PDEs

    International Nuclear Information System (INIS)

    Hou, Thomas Y.; Liu, Pengfei

    2015-01-01

    We introduce a new concept of sparsity for the stochastic elliptic operator −div(a(x,ω)∇(⋅)), which reflects the compactness of its inverse operator in the stochastic direction and allows for spatially heterogeneous stochastic structure. This new concept of sparsity motivates a heterogeneous stochastic finite element method (HSFEM) framework for linear elliptic equations, which discretizes the equations using the heterogeneous coupling of spatial basis with local stochastic basis to exploit the local stochastic structure of the solution space. We also provide a sampling method to construct the local stochastic basis for this framework using randomized range finding techniques. The resulting HSFEM involves two stages and suits the multi-query setting: in the offline stage, the local stochastic structure of the solution space is identified; in the online stage, the equation can be efficiently solved for multiple forcing functions. An online error estimation and correction procedure through Monte Carlo sampling is given. Numerical results for several problems with high-dimensional stochastic input are presented to demonstrate the efficiency of the HSFEM in the online stage.

  6. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.

  7. Modeling and simulating the adaptive electrical properties of stochastic polymeric 3D networks

    International Nuclear Information System (INIS)

    Sigala, R; Smerieri, A; Camorani, P; Schüz, A; Erokhin, V

    2013-01-01

    Memristors are passive two-terminal circuit elements that combine resistance and memory. Although in theory memristors are a very promising approach to fabricate hardware with adaptive properties, there are only very few implementations able to show their basic properties. We recently developed stochastic polymeric matrices with a functionality that evidences the formation of self-assembled three-dimensional (3D) networks of memristors. We demonstrated that those networks show the typical hysteretic behavior observed in the ‘one input-one output’ memristive configuration. Interestingly, using different protocols to electrically stimulate the networks, we also observed that their adaptive properties are similar to those present in the nervous system. Here, we model and simulate the electrical properties of these self-assembled polymeric networks of memristors, the topology of which is defined stochastically. First, we show that the model recreates the hysteretic behavior observed in the real experiments. Second, we demonstrate that the networks modeled indeed have a 3D instead of a planar functionality. Finally, we show that the adaptive properties of the networks depend on their connectivity pattern. Our model was able to replicate fundamental qualitative behavior of the real organic 3D memristor networks; yet, through the simulations, we also explored other interesting properties, such as the relation between connectivity patterns and adaptive properties. Our model and simulations represent an interesting tool to understand the very complex behavior of self-assembled memristor networks, which can finally help to predict and formulate hypotheses for future experiments. (paper)

  8. A non-linear and stochastic response surface method for Bayesian estimation of uncertainty in soil moisture simulation from a land surface model

    Directory of Open Access Journals (Sweden)

    F. Hossain

    2004-01-01

    Full Text Available This study presents a simple and efficient scheme for Bayesian estimation of uncertainty in soil moisture simulation by a Land Surface Model (LSM). The scheme is assessed within a Monte Carlo (MC) simulation framework based on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. A primary limitation of using the GLUE method is the prohibitive computational burden imposed by uniform random sampling of the model's parameter distributions. Sampling is improved in the proposed scheme by stochastic modeling of the parameters' response surface that recognizes the non-linear deterministic behavior between soil moisture and land surface parameters. Uncertainty in soil moisture simulation (model output) is approximated through a Hermite polynomial chaos expansion of normal random variables that represent the model's parameter (model input) uncertainty. The unknown coefficients of the polynomial are calculated using a limited number of model simulation runs. The calibrated polynomial is then used as a fast-running proxy to the slower-running LSM to predict the degree of representativeness of a randomly sampled model parameter set. An evaluation of the scheme's efficiency in sampling is made through comparison with the fully random MC sampling (the norm for GLUE) and the nearest-neighborhood sampling technique. The scheme was able to reduce the computational burden of random MC sampling for GLUE by 10%-70%. The scheme was also found to be about 10% more efficient than the nearest-neighborhood sampling method in predicting a sampled parameter set's degree of representativeness. The GLUE based on the proposed sampling scheme did not alter the essential features of the uncertainty structure in soil moisture simulation. The scheme can potentially make GLUE uncertainty estimation for any LSM more efficient as it does not impose any additional structural or distributional assumptions.
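    The core surrogate step described above can be sketched in one dimension: expand a model output in probabilists' Hermite polynomials of a standard normal input, and fit the unknown coefficients by least squares from a small number of model runs. The `slow_model` function, degree, and sample sizes below are all illustrative stand-ins, not the LSM of the study.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def slow_model(xi):
    """Hypothetical stand-in for an expensive land surface model run."""
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

rng = np.random.default_rng(2)
xi_train = rng.standard_normal(50)        # a limited number of model runs
y_train = slow_model(xi_train)

degree = 4
V = hermevander(xi_train, degree)         # probabilists' Hermite basis He_0..He_4
coef, *_ = np.linalg.lstsq(V, y_train, rcond=None)   # unknown chaos coefficients

# The calibrated polynomial now serves as the fast-running proxy.
xi_test = rng.standard_normal(1000)
proxy = hermevander(xi_test, degree) @ coef
rmse = np.sqrt(np.mean((proxy - slow_model(xi_test)) ** 2))
```

Evaluating the proxy costs one small matrix-vector product per sample, which is what makes screening thousands of candidate parameter sets affordable.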

  9. Meta-stochastic simulation of biochemical models for systems and synthetic biology.

    Science.gov (United States)

    Sanassy, Daven; Widera, Paweł; Krasnogor, Natalio

    2015-01-16

    Stochastic simulation algorithms (SSAs) are used to trace realistic trajectories of biochemical systems at low species concentrations. As the complexity of modeled biosystems increases, it is important to select the best performing SSA. Numerous improvements to SSAs have been introduced but they each only tend to apply to a certain class of models. This makes it difficult for a systems or synthetic biologist to decide which algorithm to employ when confronted with a new model that requires simulation. In this paper, we demonstrate that it is possible to determine which algorithm is best suited to simulate a particular model and that this can be predicted a priori to algorithm execution. We present a Web based tool ssapredict that allows scientists to upload a biochemical model and obtain a prediction of the best performing SSA. Furthermore, ssapredict gives the user the option to download our high performance simulator ngss preconfigured to perform the simulation of the queried biochemical model with the predicted fastest algorithm as the simulation engine. The ssapredict Web application is available at http://ssapredict.ico2s.org. It is free software and its source code is distributed under the terms of the GNU Affero General Public License.
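    The baseline among the SSA variants such tools choose between is Gillespie's direct method, which can be sketched on a toy birth-death model (the model and rate constants below are illustrative; this is not the ngss implementation):

```python
import math
import random

def gillespie_direct(x, rates, stoich, t_end, seed=0):
    """Gillespie's direct method: draw an exponential waiting time from the
    total propensity, then pick the firing reaction proportionally to it."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, tuple(x))]
    while t < t_end:
        props = [r(x) for r in rates]             # current reaction propensities
        a0 = sum(props)
        if a0 == 0:
            break                                  # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0    # exponential waiting time
        u, mu, acc = rng.random() * a0, 0, props[0]
        while acc < u:                             # select reaction index mu
            mu += 1
            acc += props[mu]
        x = [xi + si for xi, si in zip(x, stoich[mu])]
        traj.append((t, tuple(x)))
    return traj

# Toy birth-death model: 0 -> A at rate k1, A -> 0 at rate k2*A.
k1, k2 = 10.0, 0.1
traj = gillespie_direct(x=[0],
                        rates=[lambda x: k1, lambda x: k2 * x[0]],
                        stoich=[(1,), (-1,)],
                        t_end=200.0)
# The copy number should fluctuate around the stationary mean k1/k2 = 100.
tail = [state[0] for _, state in traj[len(traj) // 2:]]
mean_tail = sum(tail) / len(tail)
```

The performance differences that ssapredict models come from how variants reorganize the propensity update and reaction selection steps, not from changing this underlying statistics.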

  10. H∞ state estimation of stochastic memristor-based neural networks with time-varying delays.

    Science.gov (United States)

    Bao, Haibo; Cao, Jinde; Kurths, Jürgen; Alsaedi, Ahmed; Ahmad, Bashir

    2018-03-01

    This paper addresses the problem of H ∞ state estimation for a class of stochastic memristor-based neural networks with time-varying delays. Under the framework of Filippov solution, the stochastic memristor-based neural networks are transformed into systems with interval parameters. The present paper is the first to investigate the H ∞ state estimation problem for continuous-time Itô-type stochastic memristor-based neural networks. By means of Lyapunov functionals and stochastic analysis techniques, sufficient conditions are derived to ensure that the estimation error system is asymptotically stable in the mean square with a prescribed H ∞ performance. An explicit expression of the state estimator gain is given in terms of linear matrix inequalities (LMIs). Compared with other results, our results reduce control gain and control cost effectively. Finally, numerical simulations are provided to demonstrate the efficiency of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Efficiency Loss of Mixed Equilibrium Associated with Altruistic Users and Logit-based Stochastic Users in Transportation Network

    Directory of Open Access Journals (Sweden)

    Xiao-Jun Yu

    2014-02-01

    Full Text Available The efficiency loss of mixed equilibrium associated with two categories of users is investigated in this paper. The first category of users are altruistic users (AU), who share the same altruism coefficient and try to minimize their own perceived cost, assumed to be a linear combination of a selfish component and an altruistic component. The second category of users are Logit-based stochastic users (LSU), who choose their route according to the Logit-based stochastic user equilibrium (SUE) principle. A variational inequality (VI) model is used to formulate the mixed route choice behaviours associated with AU and LSU. The efficiency loss caused by the two categories of users is analytically derived and its relations to some network parameters are discussed. Numerical tests validate our analytical results, which include the results in the existing literature as special cases.

  12. Efficient Galerkin solution of stochastic fractional differential equations using second kind Chebyshev wavelets

    Directory of Open Access Journals (Sweden)

    Fakhrodin Mohammadi

    2017-10-01

    Full Text Available Stochastic fractional differential equations (SFDEs) have been used for modeling many physical problems in the fields of turbulence, heterogeneous flows and materials, viscoelasticity and electromagnetic theory. In this paper, an efficient wavelet Galerkin method based on the second kind Chebyshev wavelets is proposed for the approximate solution of SFDEs. In this approach, operational matrices of the second kind Chebyshev wavelets are used to reduce SFDEs to a linear system of algebraic equations that can be solved easily. Convergence and error analysis of the proposed method are considered. Some numerical examples are performed to confirm the applicability and efficiency of the proposed method.

  13. An Efficient Forward-Reverse EM Algorithm for Statistical Inference in Stochastic Reaction Networks

    KAUST Repository

    Bayer, Christian

    2016-01-06

    In this work [1], we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce an efficient two-phase algorithm in which the first phase is deterministic and intended to provide a starting point for the second phase, the Monte Carlo EM algorithm.

  14. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

    The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over...

  15. Seismic stochastic inversion identify river channel sand body

    Science.gov (United States)

    He, Z.

    2015-12-01

    The technology of seismic inversion is regarded as one of the most important parts of geophysics. By combining seismic inversion with the theory of stochastic simulation, the concept of seismic stochastic inversion is proposed. Seismic stochastic inversion can play a significant role in identifying river channel sand bodies. Accurate sand body description is a crucial requirement for oilfield development and stimulation during the middle and later periods, and rational well spacing density is an essential condition for efficient production. Based on the geological knowledge of a certain oilfield, the river channel sand bodies in the work area are identified using seismic stochastic inversion. In this paper, firstly, single river channel bodies are subdivided from the composite river channel body. Secondly, the distribution of the channel bodies is ascertained in order to determine the direction of the rivers. Moreover, the superposition relationships among the sand bodies are analyzed, especially among the inter-well sand bodies. Last but not least, by comparing inversion results obtained after first removing wells and then progressively infilling them, the well spacing density that yields the optimal inversion result is determined. This serves as effective guidance for oilfield stimulation.

  16. Stochastic processes and quantum theory

    International Nuclear Information System (INIS)

    Klauder, J.R.

    1975-01-01

    The author analyses a variety of stochastic processes, namely real-time diffusion phenomena, which are analogues of imaginary-time quantum theory and covariant imaginary-time quantum field theory. He elaborates some standard properties involving probability measures and stochastic variables and considers a simple class of examples. Finally he develops the fact that certain stochastic theories actually exhibit divergences that simulate those of covariant quantum field theory and presents examples of both renormalizable and unrenormalizable behavior. (V.J.C.)

  17. A stochastic model for simulation of the economic consequences of bovine virus diarrhoea virus infection in a dairy herd

    DEFF Research Database (Denmark)

    Sørensen, J.T.; Enevoldsen, Carsten; Houe, H.

    1995-01-01

    A dynamic, stochastic model simulating the technical and economic consequences of bovine virus diarrhoea virus (BVDV) infections for a dairy cattle herd for use on a personal computer was developed. The production and state changes of the herd were simulated by state changes of the individual cows...... and heifers. All discrete events at the cow level were triggered stochastically. Each cow and heifer was characterized by state variables such as stage of lactation, parity, oestrous status, decision for culling, milk production potential, and immune status for BVDV. The model was controlled by 170 decision...... variables describing biologic and management variables including 21 decision variables describing the effect of BVDV infection on the production of the individual animal. Two markedly different scenarios were simulated to demonstrate the behaviour of the developed model and the potentials of the applied...

  18. Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach

    Science.gov (United States)

    Maxwell, R.M.; Welty, C.; Harvey, R.W.

    2007-01-01

    Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of hydraulic conductivity (InK) field to achieve best fit of a simulated average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model and data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured

  19. Ep for efficient stochastic control with obstacles

    NARCIS (Netherlands)

    Mensink, T.; Verbeek, J.; Kappen, H.J.

    2010-01-01

    Abstract. We address the problem of continuous stochastic optimal control in the presence of hard obstacles. Due to the non-smooth character of the obstacles, the traditional approach using dynamic programming in combination with function approximation tends to fail. We consider a recently

  20. Magnetic Tunnel Junction Based Long-Term Short-Term Stochastic Synapse for a Spiking Neural Network with On-Chip STDP Learning

    Science.gov (United States)

    Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik

    2016-07-01

    Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, general purpose computing platforms and custom hardware architectures implemented using standard CMOS technology have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device to system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.
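    The abstract idea of a stochastic binary synapse with spike-timing-dependent switching can be sketched independently of the MTJ device physics. The exponential STDP window, time constant, and switching probability below are illustrative assumptions, not the device model of the paper.

```python
import math
import random

def stochastic_stdp(weight, dt, tau=20.0, p_max=0.5, rng=random):
    """Probabilistic binary synapse update: the probability of switching
    decays exponentially with |t_post - t_pre| (illustrative STDP window,
    not the MTJ switching model)."""
    p = p_max * math.exp(-abs(dt) / tau)   # switching probability
    if rng.random() < p:
        return 1 if dt > 0 else 0          # potentiate causal, depress anti-causal
    return weight                          # otherwise leave the bit unchanged

rng = random.Random(42)
# A causal pre-before-post pairing at dt = +5 ms should flip a depressed
# synapse with probability about p_max * exp(-5/20), i.e. roughly 0.39.
flips = sum(stochastic_stdp(0, dt=5.0, rng=rng) for _ in range(1000))
rate = flips / 1000
```

Averaged over many pairings, a population of such binary synapses behaves like a graded synapse whose mean weight follows the STDP window, which is the "mean-rate" equivalence the hardware exploits.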

  1. Assessing the potential value for an automated dairy cattle body condition scoring system through stochastic simulation

    NARCIS (Netherlands)

    Bewley, J.M.; Boehlje, M.D.; Gray, A.W.; Hogeveen, H.; Kenyon, S.J.; Eicher, S.D.; Schutz, M.M.

    2010-01-01

    Purpose – The purpose of this paper is to develop a dynamic, stochastic, mechanistic simulation model of a dairy business to evaluate the cost and benefit streams coinciding with technology investments. The model was constructed to embody the biological and economical complexities of a dairy farm

  2. Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation

    Science.gov (United States)

    Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.

    2017-06-01

    Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages online learning advances to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.

  3. Solving Langevin equation with the stochastic algebraically correlated noise

    International Nuclear Information System (INIS)

    Ploszajczak, M.; Srokowski, T.

    1996-01-01

    Long-time tails in the velocity and force autocorrelation functions have been found recently in molecular dynamics simulations of peripheral collisions of ions. Simulating those slowly decaying correlations in stochastic transport theory requires the development of new methods of generating a stochastic force with arbitrarily long correlation times. The Markovian process and the multidimensional Kangaroo process, which permit describing various algebraically correlated stochastic processes, are proposed. (author)
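    The Kangaroo-process construction itself is involved, but the role of a correlated stochastic force in a Langevin equation can be sketched with the simpler case of exponentially correlated (Ornstein-Uhlenbeck) noise; algebraically correlated noise replaces only the force-generation step. All parameters below are illustrative.

```python
import numpy as np

def langevin_ou(gamma=1.0, tau=2.0, D=1.0, dt=0.01, steps=200_000, seed=0):
    """Euler-Maruyama integration of a Langevin equation dv/dt = -gamma*v + eta,
    driven by exponentially correlated (Ornstein-Uhlenbeck) noise eta."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(steps)      # pre-generated Gaussian increments
    v = np.zeros(steps)
    eta, sqdt = 0.0, np.sqrt(dt)
    for k in range(1, steps):
        # OU noise with correlation time tau and unit stationary variance
        eta += (-eta / tau) * dt + np.sqrt(2 * D / tau) * sqdt * noise[k]
        v[k] = v[k - 1] + (-gamma * v[k - 1] + eta) * dt
    return v

v = langevin_ou()
# Stationary variance of v is tau / (gamma * (1 + gamma * tau)) = 2/3 here.
var_v = float(np.mean(v[10_000:] ** 2))
```

Replacing the OU update with an algebraically correlated generator changes the decay of the velocity autocorrelation from exponential to a power law, which is precisely the long-time-tail behaviour the abstract targets.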

  4. Market efficiency of oil spot and futures: A mean-variance and stochastic dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Lean, Hooi Hooi [Economics Program, School of Social Sciences, Universiti Sains Malaysia (Malaysia); McAleer, Michael [Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, and, Tinbergen Institute (Netherlands); Wong, Wing-Keung, E-mail: awong@hkbu.edu.h [Department of Economics, Hong Kong Baptist University (Hong Kong)

    2010-09-15

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This infers that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent to investing spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises for different crises, and also to portfolio diversification.

  5. Market efficiency of oil spot and futures. A mean-variance and stochastic dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Lean, Hooi Hooi [Economics Program, School of Social Sciences, Universiti Sains Malaysia (Malaysia); McAleer, Michael [Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam (Netherlands); Wong, Wing-Keung [Department of Economics, Hong Kong Baptist University (China); Tinbergen Institute (Netherlands)

    2010-09-15

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This infers that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent to investing spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises for different crises, and also to portfolio diversification. (author)

  6. Market efficiency of oil spot and futures: A mean-variance and stochastic dominance approach

    International Nuclear Information System (INIS)

    Lean, Hooi Hooi; McAleer, Michael; Wong, Wing-Keung

    2010-01-01

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This infers that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent to investing spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises for different crises, and also to portfolio diversification.

  7. Stochastic Recursive Algorithms for Optimization Simultaneous Perturbation Methods

    CERN Document Server

    Bhatnagar, S; Prashanth, L A

    2013-01-01

    Stochastic Recursive Algorithms for Optimization presents algorithms for constrained and unconstrained optimization and for reinforcement learning. Efficient perturbation approaches form a thread unifying all the algorithms considered. Simultaneous perturbation stochastic approximation and smooth fractional estimators for gradient- and Hessian-based methods are presented. These algorithms: • are easily implemented; • do not require an explicit system model; and • work with real or simulated data. Chapters on their application in service systems, vehicular traffic control and communications networks illustrate this point. The book is self-contained with necessary mathematical results placed in an appendix. The text provides easy-to-use, off-the-shelf algorithms that are given detailed mathematical treatment so the material presented will be of significant interest to practitioners, academic researchers and graduate students alike. The breadth of applications makes the book appropriate for reader from sim...
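    The simultaneous perturbation idea the book is built around can be sketched in a few lines: SPSA estimates a full gradient from only two function evaluations per iteration, whatever the dimension. The gain constants and the noisy quadratic oracle below are illustrative choices, not taken from the book.

```python
import numpy as np

def spsa_minimize(f, theta, iters=2000, a=0.1, c=0.1, seed=0):
    """Simultaneous perturbation stochastic approximation with the standard
    gain-decay exponents; two evaluations of f per iteration."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                    # step-size gain
        ck = c / k ** 0.101                    # perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher directions
        # since delta entries are +/-1, dividing by delta equals multiplying by it
        g = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck) * delta
        theta = theta - ak * g
    return theta

# Noisy quadratic oracle with minimum at `target` (illustrative).
rng = np.random.default_rng(3)
target = np.array([1.0, -2.0, 0.5])
noisy_f = lambda th: float(np.sum((th - target) ** 2) + 0.01 * rng.standard_normal())
theta = spsa_minimize(noisy_f, np.zeros(3))
err = float(np.linalg.norm(theta - target))
```

The two-evaluation gradient estimate is what makes the method attractive when no explicit system model exists and each evaluation is a simulation run.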

  8. Product Costing in FMT: Comparing Deterministic and Stochastic Models Using Computer-Based Simulation for an Actual Case Study

    DEFF Research Database (Denmark)

    Nielsen, Steen

    2000-01-01

    This paper expands the traditional product costing technique by including a stochastic form in a complex production process for product costing. The stochastic phenomenon in flexible manufacturing technologies is seen as an important phenomenon that companies try to decrease or eliminate. DFM has...... been used for evaluating the appropriateness of the firm's production capability. In this paper a simulation model is developed to analyze the relevant cost behaviour with respect to DFM and to develop a more streamlined process in the layout of the manufacturing process....

  9. Stochastic simulation and robust design optimization of integrated photonic filters

    Directory of Open Access Journals (Sweden)

    Weng Tsui-Wei

    2016-07-01

    Full Text Available Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%–35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.

  10. A Newton-Based Extremum Seeking MPPT Method for Photovoltaic Systems with Stochastic Perturbations

    Directory of Open Access Journals (Sweden)

    Heng Li

    2014-01-01

    Full Text Available Microcontroller-based maximum power point tracking (MPPT) has been the most popular MPPT approach in photovoltaic systems due to its high flexibility and efficiency in different photovoltaic systems. It is well known that PV systems typically operate under a range of uncertain environmental parameters and disturbances, which implies that MPPT controllers generally suffer from some unknown stochastic perturbations. To address this issue, a novel Newton-based stochastic extremum seeking MPPT method is proposed. Treating stochastic perturbations as excitation signals, the proposed MPPT controller is naturally tolerant of stochastic perturbations. Different from the conventional gradient-based extremum seeking MPPT algorithm, the convergence rate of the proposed controller is fully user-assignable rather than determined by the unknown power map. The stability and convergence of the proposed controller are rigorously proved. We further discuss the effects of partial shading and PV module ageing on the proposed controller. Numerical simulations and experiments are conducted to show the effectiveness of the proposed MPPT algorithm.
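    The classic gradient-type extremum seeking loop that the paper's Newton-based stochastic scheme refines can be sketched as follows: inject a sinusoidal dither, demodulate the measured power against it to estimate the local slope, and climb. The concave "power map" and all gains below are illustrative, not a PV model.

```python
import math

def extremum_seek(power, theta0, steps=20_000, dt=1e-3,
                  amp=0.1, freq=50.0, gain=10.0):
    """Perturbation-based extremum seeking (gradient type, sinusoidal dither);
    the Newton variant additionally estimates the curvature."""
    theta = theta0
    for k in range(steps):
        s = math.sin(2 * math.pi * freq * k * dt)
        y = power(theta + amp * s)        # perturbed measurement
        theta += gain * dt * y * s        # demodulation + integration
    return theta

# Illustrative concave stand-in for a PV power-voltage curve, peak at v = 0.7.
power = lambda v: 1.0 - 4.0 * (v - 0.7) ** 2
v = extremum_seek(power, theta0=0.2)
```

The averaged dynamics climb the slope at a rate proportional to the unknown curvature of the power map, which is exactly the dependence the Newton-based scheme removes by making the convergence rate user-assignable.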

  11. Stochastic optimization of loading pattern for PWR

    International Nuclear Information System (INIS)

    Smuc, T.; Pevec, D.

    1994-01-01

    The application of stochastic optimization methods to in-core fuel management problems is constrained by the need to evaluate a large number of proposed solutions (loading patterns) if a high-quality final solution is wanted. Proposed loading patterns have to be evaluated by a core neutronics simulator, which can impose unrealistic computer time requirements. A new loading pattern optimization code, Monte Carlo Loading Pattern Search, has been developed by coupling the simulated annealing optimization algorithm with a fast one-and-a-half-dimensional core depletion simulator. The structure of the optimization method provides more efficient performance and allows the user to employ previous experience in the search process, thus reducing the search space size. Hereinafter, we discuss the characteristics of the method and illustrate them on the results obtained by solving the PWR reload problem. (authors). 7 refs., 1 tab., 1 fig
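    The simulated annealing loop at the heart of such a search can be sketched against a toy surrogate objective; the "burnup" ordering problem below is an illustrative stand-in, since a real evaluation would call the core depletion simulator.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.995, steps=5000, seed=0):
    """Core simulated-annealing loop: accept worse candidates with
    probability exp(-dE/T) and cool the temperature geometrically."""
    rng = random.Random(seed)
    x, e = x0, cost(x0)
    best, best_e = x, e
    T = t0
    for _ in range(steps):
        cand = neighbor(x, rng)
        de = cost(cand) - e
        if de < 0 or rng.random() < math.exp(-de / T):
            x, e = cand, e + de               # accept the candidate
            if e < best_e:
                best, best_e = x, e           # track the best solution seen
        T *= alpha                            # geometric cooling schedule
    return best, best_e

# Toy ordering problem standing in for a loading-pattern search: place
# "assemblies" so adjacent burnup values are as close as possible.
burnup = [5, 1, 9, 3, 7, 2, 8, 4, 6, 0]
cost = lambda p: sum(abs(burnup[p[i]] - burnup[p[i + 1]]) for i in range(len(p) - 1))

def swap_two(p, rng):
    q = list(p)
    i, j = rng.randrange(len(q)), rng.randrange(len(q))
    q[i], q[j] = q[j], q[i]                   # swap two positions
    return q

best, best_e = simulated_annealing(cost, swap_two, list(range(10)))
# The minimum possible cost is 9 (assemblies arranged in burnup order).
```

Each accepted candidate would correspond to one expensive simulator run, which is why coupling the annealer to a fast one-and-a-half-dimensional depletion model matters.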

  12. Stochastic simulations of normal aging and Werner's syndrome.

    KAUST Repository

    Qi, Qi

    2014-04-26

    Human cells typically consist of 23 pairs of chromosomes. Telomeres are repetitive sequences of DNA located at the ends of chromosomes. During cell replication, a number of basepairs are lost from the end of the chromosome and this shortening restricts the number of divisions that a cell can complete before it becomes senescent, or non-replicative. In this paper, we use Monte Carlo simulations to form a stochastic model of telomere shortening to investigate how telomere shortening affects normal aging. Using this model, we study various hypotheses for the way in which shortening occurs by comparing their impact on aging at the chromosome and cell levels. We consider different types of length-dependent loss and replication probabilities to describe these processes. After analyzing a simple model for a population of independent chromosomes, we simulate a population of cells in which each cell has 46 chromosomes and the shortest telomere governs the replicative potential of the cell. We generalize these simulations to Werner's syndrome, a condition in which large sections of DNA are removed during cell division and, amongst other conditions, results in rapid aging. Since the mechanisms governing the loss of additional basepairs are not known, we use our model to simulate a variety of possible forms for the rate at which additional telomeres are lost per replication and several expressions for how the probability of cell division depends on telomere length. As well as the evolution of the mean telomere length, we consider the standard deviation and the shape of the distribution. We compare our results with a variety of data from the literature, covering both experimental data and previous models. We find good agreement for the evolution of telomere length when plotted against population doubling.
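    The shortest-telomere rule described above is simple to sketch: every division removes a random number of basepairs from each telomere, and the cell stops dividing once any telomere crosses a threshold. All lengths and loss rates below are illustrative, not the fitted values of the study.

```python
import random

def divisions_until_senescence(n_chromosomes=46, telomere0=10_000,
                               base_loss=100, jitter=20, threshold=2_000,
                               seed=0):
    """Monte Carlo telomere shortening: each division removes base_loss plus
    a random jitter of basepairs from every telomere; the cell becomes
    senescent when its shortest telomere crosses the threshold.
    (Illustrative parameters, not fitted to data.)"""
    rng = random.Random(seed)
    telomeres = [telomere0] * n_chromosomes
    divisions = 0
    while min(telomeres) > threshold:
        telomeres = [t - base_loss - rng.randrange(jitter) for t in telomeres]
        divisions += 1
    return divisions

divisions = divisions_until_senescence()
```

With these numbers the replicative limit lands in the neighborhood of 70 divisions; a Werner's-syndrome variant would add a second, larger stochastic deletion per division, dramatically lowering that count.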

  13. Stochastic synaptic plasticity with memristor crossbar arrays

    KAUST Repository

    Naous, Rawan

    2016-11-01

    Memristive devices have been shown to exhibit slow and stochastic resistive switching behavior under low-voltage, low-current operating conditions. Here we explore such mechanisms to emulate stochastic plasticity in memristor crossbar synapse arrays. Interfaced with integrate-and-fire spiking neurons, the memristive synapse arrays are capable of implementing stochastic forms of spike-timing dependent plasticity which parallel mean-rate models of stochastic learning with binary synapses. We present theory and experiments with spike-based stochastic learning in memristor crossbar arrays, including simplified modeling as well as detailed physical simulation of memristor stochastic resistive switching characteristics due to voltage and current induced filament formation and collapse. © 2016 IEEE.
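
    The stochastic plasticity described above can be caricatured at the single-synapse level: a binary weight that switches probabilistically on causal or anti-causal spike pairs, mimicking stochastic memristor switching. The probabilities, timing window, and function names below are illustrative assumptions, not the paper's model.

```python
import random

def stochastic_binary_stdp(pre_spikes, post_spikes, w=0, p_pot=0.1,
                           p_dep=0.05, window=20.0, rng=None):
    """Toy stochastic STDP rule for one binary synapse.

    A pre-before-post spike pair within `window` (ms) potentiates the
    binary weight with probability `p_pot`; a post-before-pre pair
    depresses it with probability `p_dep`.  Averaged over many pairs this
    parallels mean-rate models of stochastic learning with binary synapses.
    """
    rng = rng or random.Random(1)
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if 0 < dt <= window and rng.random() < p_pot:
                w = 1                      # stochastic potentiation (SET)
            elif -window <= dt < 0 and rng.random() < p_dep:
                w = 0                      # stochastic depression (RESET)
    return w
```

    In a crossbar array this rule would be applied in parallel to every row/column intersection, with switching probabilities set by the device's voltage- and current-dependent filament dynamics.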

  14. Stochastic synaptic plasticity with memristor crossbar arrays

    KAUST Repository

    Naous, Rawan; Al-Shedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled N.

    2016-01-01

    Memristive devices have been shown to exhibit slow and stochastic resistive switching behavior under low-voltage, low-current operating conditions. Here we explore such mechanisms to emulate stochastic plasticity in memristor crossbar synapse arrays. Interfaced with integrate-and-fire spiking neurons, the memristive synapse arrays are capable of implementing stochastic forms of spike-timing dependent plasticity which parallel mean-rate models of stochastic learning with binary synapses. We present theory and experiments with spike-based stochastic learning in memristor crossbar arrays, including simplified modeling as well as detailed physical simulation of memristor stochastic resistive switching characteristics due to voltage and current induced filament formation and collapse. © 2016 IEEE.

  15. Modeling and simulation of a controlled steam generator in the context of dynamic reliability using a Stochastic Hybrid Automaton

    International Nuclear Information System (INIS)

    Babykina, Génia; Brînzei, Nicolae; Aubry, Jean-François; Deleuze, Gilles

    2016-01-01

    The paper proposes a modeling framework to support Monte Carlo simulations of the behavior of a complex industrial system. The aim is to analyze the system dependability in the presence of random events, described by any type of probability distributions. Continuous dynamic evolutions of physical parameters are taken into account by a system of differential equations. Dynamic reliability is chosen as theoretical framework. Based on finite state automata theory, the formal model is built by parallel composition of elementary sub-models using a bottom-up approach. Considerations of a stochastic nature lead to a model called the Stochastic Hybrid Automaton. The Scilab/Scicos open source environment is used for implementation. The case study is carried out on an example of a steam generator of a nuclear power plant. The behavior of the system is studied by exploring its trajectories. Possible system trajectories are analyzed both empirically, using the results of Monte Carlo simulations, and analytically, using the formal system model. The obtained results are shown to be relevant. The Stochastic Hybrid Automaton appears to be a suitable tool to address the dynamic reliability problem and to model real systems of high complexity; the bottom-up design provides precision and coherency of the system model. - Highlights: • A part of a nuclear power plant is modeled in the context of dynamic reliability. • Stochastic Hybrid Automaton is used as an input model for Monte Carlo simulations. • The model is formally built using a bottom-up approach. • The behavior of the system is analyzed empirically and analytically. • A formally built SHA is shown to be a suitable tool to approach dynamic reliability.

  16. Searching for Stable SinCn Clusters: Combination of Stochastic Potential Surface Search and Pseudopotential Plane-Wave Car-Parinello Simulated Annealing Simulations

    Directory of Open Access Journals (Sweden)

    Larry W. Burggraf

    2013-07-01

    To find low-energy SinCn structures out of hundreds to thousands of isomers we have developed a general method to search for stable isomeric structures that combines Stochastic Potential Surface Search and Pseudopotential Plane-Wave Density Functional Theory Car-Parinello Molecular Dynamics simulated annealing (PSPW-CPMD-SA). We enhanced the Sunders stochastic search method to generate random cluster structures used as seed structures for PSPW-CPMD-SA simulations. This method ensures that each SA simulation samples a different potential surface region to find the regional minimum structure. By iterating this automated, parallel process on a high-performance computer we located hundreds to more than a thousand stable isomers for each SinCn cluster. Among these, five to ten of the lowest-energy isomers were further optimized using the B3LYP/cc-pVTZ method. We applied this method to SinCn (n = 4–12) clusters and found the lowest-energy structures, most not previously reported. By analyzing the bonding patterns of the low-energy structures of each SinCn cluster, we observed that carbon segregations tend to form condensed conjugated rings, while Si connects to unsaturated bonds at the periphery of the carbon segregation as single atoms or clusters when n is small; when n is large, a silicon network spans over the carbon segregation region.

  17. Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix and Polymer Matrix Composite Structures

    Science.gov (United States)

    Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan J.; Walton, Owen J.; Arnold, Steven M.

    2016-01-01

    Stochastic-based, discrete-event progressive damage simulations of ceramic-matrix composite and polymer matrix composite material structures have been enabled through the development of a unique multiscale modeling tool. This effort involves coupling three independently developed software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/ Life), and (3) the Abaqus finite element analysis (FEA) program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC. Abaqus is used at the global scale to model the overall composite structure. An Abaqus user-defined material (UMAT) interface, referred to here as "FEAMAC/CARES," was developed that enables MAC/GMC and CARES/Life to operate seamlessly with the Abaqus FEA code. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events, which incrementally progress and lead to ultimate structural failure. This report describes the FEAMAC/CARES methodology and discusses examples that illustrate the performance of the tool. A comprehensive example problem, simulating the progressive damage of laminated ceramic matrix composites under various off-axis loading conditions and including a double notched tensile specimen geometry, is described in a separate report.

  18. Transport in Stochastic Media

    International Nuclear Information System (INIS)

    Haran, O.; Shvarts, D.; Thieberger, R.

    1998-01-01

    Classical transport of neutral particles in a binary, scattering, stochastic medium is discussed. It is assumed that the cross-sections of the constituent materials and their volume fractions are known. The inner structure of the medium is stochastic, but statistical knowledge about the lump sizes, shapes and arrangement exists. The transmission through the composite medium depends on the specific heterogeneous realization of the medium. The current research focuses on the averaged transmission through an ensemble of realizations, from which an effective cross-section for the medium can be derived. The problem of one-dimensional transport in stochastic media has been studied extensively [1]. In the one-dimensional description of the problem, particles are transported along a line populated with alternating material segments of random lengths. The current work discusses transport in two-dimensional stochastic media. The phenomenon that is unique to the multi-dimensional description of the problem is obstacle bypassing. Obstacle bypassing tends to reduce the opacity of the medium, thereby reducing its effective cross-section. The importance of this phenomenon depends on the manner in which the obstacles are arranged in the medium. Results of transport simulations in multi-dimensional stochastic media are presented. Effective cross-sections derived from the simulations are compared against those obtained for the one-dimensional problem, and against those obtained from effective multi-dimensional models, which are partially based on a Markovian assumption.
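
    The one-dimensional baseline the abstract refers to can be sketched as a Monte Carlo average over realizations of alternating material segments. The sketch below uses a purely absorbing binary medium with exponentially distributed segment lengths; all cross-sections and mean chord lengths are illustrative assumptions.

```python
import math
import random

def transmission_1d(sigma=(1.0, 0.1), mean_len=(0.5, 1.5), slab=5.0,
                    n_particles=2000, rng=None):
    """Ensemble-averaged transmission through a 1-D binary stochastic slab.

    Materials 0 and 1 alternate along the slab with exponentially
    distributed (Markovian) segment lengths.  Each particle history sees a
    fresh realization; for a purely absorbing medium the transmission of a
    realization is exp(-optical depth), and the ensemble average defines
    an effective cross-section.  Parameters are illustrative.
    """
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n_particles):
        x, mat, tau = 0.0, 0, 0.0
        while x < slab:
            seg = min(rng.expovariate(1.0 / mean_len[mat]), slab - x)
            tau += sigma[mat] * seg      # accumulate optical depth
            x += seg
            mat = 1 - mat                # alternate material
        total += math.exp(-tau)
    return total / n_particles
```

    The multi-dimensional effect discussed in the abstract (obstacle bypassing) has no analogue in this 1-D picture, which is why the 1-D effective cross-section overestimates the opacity of 2-D media.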

  19. Fast stochastic simulation of biochemical reaction systems by alternative formulations of the chemical Langevin equation

    KAUST Repository

    Mélykúti, Bence

    2010-01-01

    The Chemical Langevin Equation (CLE), which is a stochastic differential equation driven by a multidimensional Wiener process, acts as a bridge between the discrete stochastic simulation algorithm and the deterministic reaction rate equation when simulating (bio)chemical kinetics. The CLE model is valid in the regime where molecular populations are abundant enough to assume their concentrations change continuously, but stochastic fluctuations still play a major role. The contribution of this work is that we observe and explore that the CLE is not a single equation, but a parametric family of equations, all of which give the same finite-dimensional distribution of the variables. On the theoretical side, we prove that as many Wiener processes are sufficient to formulate the CLE as there are independent variables in the equation, which is just the rank of the stoichiometric matrix. On the practical side, we show that in the case where there are m1 pairs of reversible reactions and m2 irreversible reactions there is another, simple formulation of the CLE with only m1 + m2 Wiener processes, whereas the standard approach uses 2m1 + m2. We demonstrate that there are considerable computational savings when using this latter formulation. Such transformations of the CLE do not cause a loss of accuracy and are therefore distinct from model reduction techniques. We illustrate our findings by considering alternative formulations of the CLE for a human ether-a-go-go-related gene ion channel model and the Goldbeter-Koshland switch. © 2010 American Institute of Physics.
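
    The reduction from 2m1 + m2 to m1 + m2 Wiener processes can be illustrated on the smallest possible example, a reversible isomerization A <-> B (m1 = 1, m2 = 0), integrated by Euler-Maruyama. Combining the pair's two noise terms into one Gaussian increment of variance (a1 + a2)dt leaves the distribution unchanged. Rates, step size, and molecule counts below are illustrative assumptions.

```python
import math
import random

def cle_isomerization(x0=1000, n_total=2000, k1=1.0, k2=1.0,
                      dt=1e-3, steps=1000, combined=True, rng=None):
    """Euler-Maruyama integration of the CLE for A <-> B.

    x is the copy number of A; n_total - x is B.  With combined=True the
    reversible pair is driven by a single Wiener increment of variance
    (a1 + a2) * dt instead of two independent increments, the reduction
    discussed in the abstract.  Parameters are illustrative.
    """
    rng = rng or random.Random(0)
    x = float(x0)
    for _ in range(steps):
        a1 = k1 * x                  # propensity of A -> B (removes one A)
        a2 = k2 * (n_total - x)      # propensity of B -> A (adds one A)
        drift = (a2 - a1) * dt
        if combined:
            noise = math.sqrt((a1 + a2) * dt) * rng.gauss(0.0, 1.0)
        else:
            noise = (math.sqrt(a2 * dt) * rng.gauss(0.0, 1.0)
                     - math.sqrt(a1 * dt) * rng.gauss(0.0, 1.0))
        # clamp to the physical range to avoid negative propensities
        x = min(max(x + drift + noise, 0.0), float(n_total))
    return x
```

    Both branches sample the same finite-dimensional distribution; the combined form simply needs half as many Gaussian draws for the reversible pair, which is the source of the computational savings reported.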

  20. A SAS-based solution to evaluate study design efficiency of phase I pediatric oncology trials via discrete event simulation.

    Science.gov (United States)

    Barrett, Jeffrey S; Jayaraman, Bhuvana; Patel, Dimple; Skolnik, Jeffrey M

    2008-06-01

    Previous exploration of oncology study design efficiency has focused on Markov processes alone (probability-based events) without consideration for time dependencies. Barriers to study completion include time delays associated with patient accrual, inevaluability (IE), time to dose limiting toxicities (DLT) and administrative and review time. Discrete event simulation (DES) can incorporate probability-based assignment of DLT and IE frequency, correlated with cohort in the case of DLT, with time-based events defined by stochastic relationships. A SAS-based solution to examine study efficiency metrics and evaluate design modifications that would improve study efficiency is presented. Virtual patients are simulated with attributes defined from prior distributions of relevant patient characteristics. Study population datasets are read into SAS macros which select patients and enroll them into a study based on the specific design criteria if the study is open to enrollment. Waiting times, arrival times and time to study events are also sampled from prior distributions; post-processing of study simulations is provided within the decision macros and compared across designs in a separate post-processing algorithm. This solution is examined via comparison of the standard 3+3 decision rule relative to the "rolling 6" design, a newly proposed enrollment strategy for the phase I pediatric oncology setting.

  1. A delay fractioning approach to global synchronization of delayed complex networks with stochastic disturbances

    International Nuclear Information System (INIS)

    Wang Yao; Wang Zidong; Liang Jinling

    2008-01-01

    In this Letter, the synchronization problem is investigated for a class of stochastic complex networks with time delays. By utilizing a new Lyapunov functional form based on the idea of 'delay fractioning', we employ the stochastic analysis techniques and the properties of Kronecker product to establish delay-dependent synchronization criteria that guarantee the globally asymptotically mean-square synchronization of the addressed delayed networks with stochastic disturbances. These sufficient conditions, which are formulated in terms of linear matrix inequalities (LMIs), can be solved efficiently by the LMI toolbox in Matlab. The main results are proved to be much less conservative and the conservatism could be reduced further as the number of delay fractioning gets bigger. A simulation example is exploited to demonstrate the advantage and applicability of the proposed result

  2. The Detection of Subsynchronous Oscillation in HVDC Based on the Stochastic Subspace Identification Method

    Directory of Open Access Journals (Sweden)

    Chen Shi

    2014-01-01

    Subsynchronous oscillation (SSO) is usually caused by series compensation, power system stabilizers (PSS), high-voltage direct current (HVDC) transmission and other power electronic equipment, and can affect the safe operation of the generator shafting and even the whole system. It is therefore very important to identify the modal parameters of SSO so that effective control strategies can be taken. Since the identification accuracy of traditional methods is not high enough, the stochastic subspace identification (SSI) method is proposed to improve the identification accuracy of subsynchronous oscillation modes. The stochastic subspace identification method was compared with two other methods on the IEEE subsynchronous oscillation benchmark model and the Xiang-Shang HVDC system model; the simulation results show that the stochastic subspace identification method has the advantages of high identification precision, high operational efficiency and strong anti-noise ability.

  3. Quasi-continuous stochastic simulation framework for flood modelling

    Science.gov (United States)

    Moustakis, Yiannis; Kossieris, Panagiotis; Tsoukalas, Ioannis; Efstratiadis, Andreas

    2017-04-01

    Typically, flood modelling in the context of everyday engineering practices is addressed through event-based deterministic tools, e.g., the well-known SCS-CN method. A major shortcoming of such approaches is the ignorance of uncertainty, which is associated with the variability of soil moisture conditions and the variability of rainfall during the storm event. In event-based modelling, the sole expression of uncertainty is the return period of the design storm, which is assumed to represent the acceptable risk of all output quantities (flood volume, peak discharge, etc.). On the other hand, the varying antecedent soil moisture conditions across the basin are represented by means of scenarios (e.g., the three AMC types by SCS), while the temporal distribution of rainfall is represented through standard deterministic patterns (e.g., the alternating blocks method). In order to address these major inconsistencies, simultaneously preserving the simplicity and parsimony of the SCS-CN method, we have developed a quasi-continuous stochastic simulation approach, comprising the following steps: (1) generation of synthetic daily rainfall time series; (2) update of potential maximum soil moisture retention, on the basis of accumulated five-day rainfall; (3) estimation of daily runoff through the SCS-CN formula, using as inputs the daily rainfall and the updated value of soil moisture retention; (4) selection of extreme events and application of the standard SCS-CN procedure for each specific event, on the basis of synthetic rainfall. This scheme requires the use of two stochastic modelling components, namely the CastaliaR model, for the generation of synthetic daily data, and the HyetosMinute model, for the disaggregation of daily rainfall to finer temporal scales. Outcomes of this approach are a large number of synthetic flood events, allowing for expressing the design variables in statistical terms and thus properly evaluating the flood risk.
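
    Steps (1)-(3) of the scheme can be sketched in a few lines. The wet-day rainfall generator and the linear adjustment of the curve number to five-day antecedent rainfall are simple illustrative stand-ins for the CastaliaR/HyetosMinute components and the SCS AMC procedure; only the runoff formula itself is the standard SCS-CN relation.

```python
import random

def scs_cn_runoff(p_mm, cn):
    """Standard SCS-CN daily runoff (all depths in mm)."""
    s = 25400.0 / cn - 254.0          # potential maximum retention
    ia = 0.2 * s                      # initial abstraction
    return (p_mm - ia) ** 2 / (p_mm - ia + s) if p_mm > ia else 0.0

def quasi_continuous(days=365, cn_avg=75.0, rng=None):
    """Quasi-continuous simulation sketch: synthetic daily rainfall,
    CN updated from accumulated five-day antecedent rainfall, daily
    SCS-CN runoff.  The rainfall model and the CN adjustment are
    illustrative assumptions, not the paper's calibrated components."""
    rng = rng or random.Random(0)
    rain, runoff = [], []
    for _ in range(days):
        # wet day with prob. 0.3; wet-day depth ~ exponential, mean 8 mm
        p = rng.expovariate(1.0 / 8.0) if rng.random() < 0.3 else 0.0
        rain.append(p)
        amc5 = sum(rain[-6:-1])       # five-day antecedent rainfall
        cn = min(95.0, max(55.0, cn_avg + 0.2 * (amc5 - 20.0)))
        runoff.append(scs_cn_runoff(p, cn))
    return rain, runoff
```

    Step (4) would then select the largest synthetic events and rerun the standard event-based SCS-CN procedure on each, yielding a sample of flood events from which design quantiles can be estimated.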

  4. Approximation and inference methods for stochastic biochemical kinetics—a tutorial review

    International Nuclear Information System (INIS)

    Schnoerr, David; Grima, Ramon; Sanguinetti, Guido

    2017-01-01

    Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state-of-the-art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics. (topical review)

  5. Measuring Efficiency of Health Systems of the Middle East and North Africa (MENA) Region Using Stochastic Frontier Analysis.

    Science.gov (United States)

    Hamidi, Samer; Akinci, Fevzi

    2016-06-01

    The main purpose of this study is to measure the technical efficiency of twenty health systems in the Middle East and North Africa (MENA) region to inform evidence-based health policy decisions. In addition, the effects of alternative stochastic frontier model specification on the empirical results are examined. We conducted a stochastic frontier analysis to estimate the country-level technical efficiencies using secondary panel data for 20 MENA countries for the period of 1995-2012 from the World Bank database. We also tested the effect of alternative frontier model specification using three random-effects approaches: a time-invariant model where efficiency effects are assumed to be static with regard to time, and a time-varying efficiency model where efficiency effects have temporal variation, and one model to account for heterogeneity. The average estimated technical inefficiency of health systems in the MENA region was 6.9 % with a range of 5.7-7.9 % across the three models. Among the top performers, Lebanon, Qatar, and Morocco are ranked consistently high according to the three different inefficiency model specifications. On the opposite side, Sudan, Yemen and Djibouti ranked among the worst performers. On average, the two most technically efficient countries were Qatar and Lebanon. We found that the estimated technical efficiency scores vary substantially across alternative parametric models. Based on the findings reported in this study, most MENA countries appear to be operating, on average, with a reasonably high degree of technical efficiency compared with other countries in the region. However, there is evidence to suggest that there are considerable efficiency gains yet to be made by some MENA countries. Additional empirical research is needed to inform future health policies aimed at improving both the efficiency and sustainability of the health systems in the MENA region.

  6. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2005-01-01

    Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.

  7. Displacement rate and temperature equivalence in stochastic cluster dynamics simulations of irradiated pure α-Fe

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, Aaron [Sandia National Laboratories, Albuquerque, 87185 NM (United States); George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, 30332 GA (United States); Muntifering, Brittany [Sandia National Laboratories, Albuquerque, 87185 NM (United States); Northwestern University, Chicago, 60208 IL (United States); Dingreville, Rémi; Hattar, Khalid [Sandia National Laboratories, Albuquerque, 87185 NM (United States); Capolungo, Laurent, E-mail: laurent@lanl.gov [George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, 30332 GA (United States); Material Science and Technology Division, MST-8, Los Alamos National Laboratory, Los Alamos, 87545 NM (United States)

    2016-11-15

    Charged particle irradiation is a frequently used experimental tool to study damage accumulation in metals expected during neutron irradiation. Understanding the correspondence between displacement rate and temperature during such studies is one of several factors that must be taken into account in order to design experiments that produce equivalent damage accumulation to neutron damage conditions. In this study, spatially resolved stochastic cluster dynamics (SRSCD) is used to simulate damage evolution in α-Fe and find displacement rate/temperature pairs under ‘target’ and ‘proxy’ conditions for which the local distribution of vacancies and vacancy clusters is the same as a function of displacement damage. The SRSCD methodology is chosen for this study due to its computational efficiency and ability to simulate damage accumulation in spatially inhomogeneous materials such as thin films. Results are presented for Frenkel pair irradiation and displacement cascade damage in thin films and bulk α-Fe. Holding all other material and irradiation conditions constant, temperature adjustments are shown to successfully make up for changes in displacement rate such that defect concentrations and cluster sizes remain relatively constant. The methodology presented in this study allows for a first-order prediction of the temperature at which ion irradiation experiments (‘proxy’ conditions) should take place in order to approximate neutron irradiation (‘target’ conditions).

  8. FluTE, a publicly available stochastic influenza epidemic simulation model.

    Directory of Open Access Journals (Sweden)

    Dennis L Chao

    2010-01-01

    Mathematical and computer models of epidemics have contributed to our understanding of the spread of infectious disease and the measures needed to contain or mitigate them. To help prepare for future influenza seasonal epidemics or pandemics, we developed a new stochastic model of the spread of influenza across a large population. Individuals in this model have realistic social contact networks, and transmission and infections are based on the current state of knowledge of the natural history of influenza. The model has been calibrated so that outcomes are consistent with the 1957/1958 Asian A(H2N2) and 2009 pandemic A(H1N1) influenza viruses. We present examples of how this model can be used to study the dynamics of influenza epidemics in the United States and simulate how to mitigate or delay them using pharmaceutical interventions and social distancing measures. Computer simulation models play an essential role in informing public policy and evaluating pandemic preparedness plans. We have made the source code of this model publicly available to encourage its use and further development.

  9. FluTE, a publicly available stochastic influenza epidemic simulation model.

    Science.gov (United States)

    Chao, Dennis L; Halloran, M Elizabeth; Obenchain, Valerie J; Longini, Ira M

    2010-01-29

    Mathematical and computer models of epidemics have contributed to our understanding of the spread of infectious disease and the measures needed to contain or mitigate them. To help prepare for future influenza seasonal epidemics or pandemics, we developed a new stochastic model of the spread of influenza across a large population. Individuals in this model have realistic social contact networks, and transmission and infections are based on the current state of knowledge of the natural history of influenza. The model has been calibrated so that outcomes are consistent with the 1957/1958 Asian A(H2N2) and 2009 pandemic A(H1N1) influenza viruses. We present examples of how this model can be used to study the dynamics of influenza epidemics in the United States and simulate how to mitigate or delay them using pharmaceutical interventions and social distancing measures. Computer simulation models play an essential role in informing public policy and evaluating pandemic preparedness plans. We have made the source code of this model publicly available to encourage its use and further development.
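
    For contrast with an agent-based model like FluTE, the minimal stochastic epidemic model is a homogeneous-mixing chain-binomial SIR process, which already exhibits random take-off and final-size variability. Everything here (parameters, homogeneous mixing, discrete days) is an illustrative simplification, not FluTE's contact-network model.

```python
import random

def chain_binomial_sir(n=1000, i0=5, beta=0.3, gamma=0.1,
                       days=200, rng=None):
    """Discrete-time chain-binomial stochastic SIR epidemic.

    Each day every susceptible is independently infected with probability
    1 - (1 - beta/n)**i, and each infective recovers with probability
    gamma.  Parameters are illustrative (R0 = beta/gamma = 3 here).
    """
    rng = rng or random.Random(42)
    s, i, r = n - i0, i0, 0
    for _ in range(days):
        p_inf = 1.0 - (1.0 - beta / n) ** i        # per-susceptible risk
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r
```

    Models like FluTE refine every term of this loop: mixing happens on realistic household/school/workplace contact networks, and interventions (vaccination, antivirals, social distancing) modify the transmission probabilities over time.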

  10. Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2014-01-01

    The hybrid Monte Carlo algorithm (HMCA) is applied for Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimations of the RSV model

  11. Bacmeta: simulator for genomic evolution in bacterial metapopulations.

    Science.gov (United States)

    Sipola, Aleksi; Marttinen, Pekka; Corander, Jukka

    2018-02-20

    The advent of genomic data from densely sampled bacterial populations has created a need for flexible simulators by which models and hypotheses can be efficiently investigated in the light of empirical observations. Bacmeta provides fast stochastic simulation of neutral evolution within a large collection of interconnected bacterial populations with completely adjustable connectivity network. Stochastic events of mutations, recombinations, insertions/deletions, migrations and microepidemics can be simulated in discrete non-overlapping generations with a Wright-Fisher model that operates on explicit sequence data of any desired genome length. Each model component, including locus, bacterial strain, population, and ultimately the whole metapopulation, is efficiently simulated using C++ objects, and detailed metadata from each level can be acquired. The software can be executed in a cluster environment using simple textual input files, enabling, e.g., large-scale simulations and likelihood-free inference. Bacmeta is implemented with C++ for Linux, Mac and Windows. It is available at https://bitbucket.org/aleksisipola/bacmeta under the BSD 3-clause license. aleksi.sipola@helsinki.fi, jukka.corander@medisin.uio.no. Supplementary data are available at Bioinformatics online.
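
    The core of such a simulator, a Wright-Fisher model with non-overlapping generations acting on explicit sequences, fits in a few lines. The sketch below covers a single population with per-site mutation only; Bacmeta adds recombination, indels, migration between populations, and microepidemics on top of this. All parameter values are illustrative.

```python
import random

def wright_fisher(pop_size=100, seq_len=40, mu=2e-3,
                  generations=100, rng=None):
    """Neutral single-population Wright-Fisher simulation on explicit
    sequences (no recombination or migration; parameters illustrative).

    Each generation, every offspring copies a uniformly chosen parent
    (non-overlapping generations), then each site mutates independently
    with probability mu to one of the three other nucleotides.
    """
    rng = rng or random.Random(0)
    alphabet = "ACGT"
    pop = [["A"] * seq_len for _ in range(pop_size)]
    for _ in range(generations):
        # resampling with replacement = genetic drift
        pop = [list(rng.choice(pop)) for _ in range(pop_size)]
        for seq in pop:
            for site in range(seq_len):
                if rng.random() < mu:
                    seq[site] = rng.choice(alphabet.replace(seq[site], ""))
    return ["".join(s) for s in pop]

pop = wright_fisher()
```

    A metapopulation version would hold several such populations and, each generation, move a random subset of strains between them according to the connectivity network.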

  12. Global impulsive exponential synchronization of stochastic perturbed chaotic delayed neural networks

    International Nuclear Information System (INIS)

    Hua-Guang, Zhang; Tie-Dong, Ma; Jie, Fu; Shao-Cheng, Tong

    2009-01-01

    In this paper, the global impulsive exponential synchronization problem of a class of chaotic delayed neural networks (DNNs) with stochastic perturbation is studied. Based on the Lyapunov stability theory, stochastic analysis approach and an efficient impulsive delay differential inequality, some new exponential synchronization criteria expressed in the form of the linear matrix inequality (LMI) are derived. The designed impulsive controller not only can globally exponentially stabilize the error dynamics in mean square, but also can control the exponential synchronization rate. Furthermore, to estimate the stable region of the synchronization error dynamics, a novel optimization control algorithm is proposed, which can deal with the minimum problem with two nonlinear terms coexisting in LMIs effectively. Simulation results finally demonstrate the effectiveness of the proposed method

  13. Rare event simulation for stochastic fixed point equations related to the smoothing transform

    DEFF Research Database (Denmark)

    Collamore, Jeffrey F.; Vidyashankar, Anand N.; Xu, Jie

    2013-01-01

    In several applications arising in computer science, cascade theory, and other applied areas, it is of interest to evaluate the tail probabilities of non-homogeneous stochastic fixed point equations. Recently, techniques have been developed for the related linear recursions, yielding tail estimates… and importance sampling methods for these recursions. However, such methods do not routinely generalize to non-homogeneous recursions. Drawing on techniques from the weighted branching process literature, we present a consistent, strongly efficient importance sampling algorithm for estimating the tail…

  14. Evaluating Economic Alternatives for Wood Energy Supply Based on Stochastic Simulation

    Directory of Open Access Journals (Sweden)

    Ulises Flores Hernández

    2018-04-01

    Productive forests, as a major source of biomass, represent an important prerequisite for the development of a bio-economy. In this respect, assessments of biomass availability, efficiency of forest management, forest operations, and economic feasibility are essential. This is certainly the case for Mexico, a country with an increasing energy demand and a considerable potential for sustainable forest utilization. Hence, this paper focuses on analyzing economic alternatives for the Mexican bioenergy supply based on the costs and revenues of utilizing woody biomass residues. With a regional spatial approach, the harvesting and transportation costs of utilizing selected biomass residues were stochastically calculated using Monte Carlo simulations. For one alternative, a sensitivity analysis based on net future analysis was conducted, varying the price and cost parameters as percentages of the most probable estimate. Based on the results for the northern region, a 10% reduction of the transportation cost would reduce the overall supply cost, resulting in a total revenue of 13.69 USD/m3 and 0.75 USD/m3 for harvesting residues and non-extracted stand residues, respectively. For the central-south region, it is estimated that a contribution of 16.53 USD/m3 from 2013 and a total revenue of 33.00 USD/m3 in 2030 from sawmill residues will improve the value chain. The given approach and outputs provide a basis for decision-making regarding forest utilization for energy generation based on economic indicators.
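
    The Monte Carlo cost-revenue logic and the transportation-cost sensitivity can be sketched generically. The distributions and all numerical values below are illustrative assumptions for demonstration; they are not the paper's regional data.

```python
import random

def simulate_revenue(price=20.0, harvest_mean=8.0, harvest_sd=1.5,
                     transport_reduction=0.0, n=10000, rng=None):
    """Monte Carlo estimate of mean net revenue per m^3 of wood residues.

    revenue = price - (stochastic harvesting cost + stochastic transport
    cost), with an optional fractional reduction of the transport cost.
    Distributions and values are illustrative assumptions.
    """
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n):
        harvest = rng.gauss(harvest_mean, harvest_sd)          # USD/m^3
        transport = rng.triangular(2.0, 6.0, 4.0)              # USD/m^3
        transport *= (1.0 - transport_reduction)
        total += price - harvest - transport
    return total / n

base = simulate_revenue()
cheaper = simulate_revenue(transport_reduction=0.10)  # 10% lower transport
```

    Running both scenarios on the same random seed makes the comparison pathwise: every sampled cost realization is reduced, so the estimated mean revenue under the 10% transport-cost reduction is strictly higher than the baseline.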

  15. Symbolic Computing in Probabilistic and Stochastic Analysis

    Directory of Open Access Journals (Sweden)

    Kamiński Marcin

    2015-12-01

    The main aim is to present recent developments in applications of symbolic computing in probabilistic and stochastic analysis, and this is done using the example of the well-known MAPLE system. The key theoretical methods discussed are (i) analytical derivations, (ii) the classical Monte-Carlo simulation approach, (iii) the stochastic perturbation technique, as well as (iv) some semi-analytical approaches. It is demonstrated in particular how to engage the basic symbolic tools implemented in any system to derive the basic equations for the stochastic perturbation technique and how to make an efficient implementation of the semi-analytical methods using automatic differentiation and integration provided by the computer algebra program itself. The second important illustration is the probabilistic extension of the finite element and finite difference methods coded in MAPLE, showing how to solve boundary value problems with random parameters in the environment of symbolic computing. The response function method belongs to the third group, where interference of classical deterministic software with the non-linear fitting numerical techniques available in various symbolic environments is displayed. We recover in this context the probabilistic structural response in engineering systems and show how to solve partial differential equations including Gaussian randomness in their coefficients.
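
The stochastic perturbation technique named in this record rests on the second-order Taylor estimate E[f(X)] ≈ f(μ) + ½ f''(μ) σ² for X ~ N(μ, σ²). The sketch below checks that estimate against Monte Carlo for f = exp, where the exact mean exp(μ + σ²/2) is known; all parameter values are illustrative, and the second derivative is obtained numerically rather than symbolically as in the MAPLE-based paper.

```python
import math, random

def perturbation_mean(f, mu, sigma, h=1e-4):
    """Second-order stochastic perturbation estimate of E[f(X)], X ~ N(mu, sigma^2):
    E[f] ~ f(mu) + 0.5 * f''(mu) * sigma^2, with f'' from central differences."""
    f2 = (f(mu + h) - 2.0 * f(mu) + f(mu - h)) / h**2
    return f(mu) + 0.5 * f2 * sigma**2

def monte_carlo_mean(f, mu, sigma, n=200_000, seed=7):
    """Plain Monte Carlo estimate of the same expectation, for comparison."""
    rng = random.Random(seed)
    return sum(f(rng.gauss(mu, sigma)) for _ in range(n)) / n

mu, sigma = 0.5, 0.1
exact = math.exp(mu + sigma**2 / 2)   # closed form for f = exp
print(perturbation_mean(math.exp, mu, sigma),
      monte_carlo_mean(math.exp, mu, sigma), exact)
```

For small σ the cheap perturbation estimate is already as close to the exact value as a 200,000-sample Monte Carlo run, which is the technique's selling point.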

  16. Importance Sampling for Stochastic Timed Automata

    DEFF Research Database (Denmark)

    Jegourel, Cyrille; Larsen, Kim Guldstrand; Legay, Axel

    2016-01-01

    We present an importance sampling framework that combines symbolic analysis and simulation to estimate the probability of rare reachability properties in stochastic timed automata. By means of symbolic exploration, our framework first identifies states that cannot reach the goal. A state-wise cha...
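
The core idea of importance sampling for rare events, a change of measure that drives samples into the rare set and reweights them by the likelihood ratio, can be shown on a toy problem. The sketch below estimates a Gaussian tail probability, not a timed-automaton reachability property; the proposal N(a, 1) and sample sizes are illustrative choices.

```python
import math, random

def is_tail_estimate(a, n=50_000, seed=42):
    """Importance-sampling estimate of P(Z > a) for Z ~ N(0, 1).  Samples come
    from the proposal N(a, 1) (a mean shift into the rare region); each sample
    is reweighted by the likelihood ratio phi(z)/phi(z - a) = exp(-a*z + a*a/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(a, 1.0)             # proposal centred on the rare set
        if z > a:
            total += math.exp(-a * z + a * a / 2.0)
    return total / n

a = 5.0
exact = 0.5 * math.erfc(a / math.sqrt(2.0))   # true tail, about 2.87e-07
print(is_tail_estimate(a), exact)
```

A naive Monte Carlo run of the same size would almost surely see zero hits at a = 5, while the reweighted estimator recovers the probability to within a few percent.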

  17. Planning under uncertainty: solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research; Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft]

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
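
The structure of a two-stage stochastic program with recourse, choose now, then react to the realized scenario, can be shown on the smallest possible example. The sketch below is a newsvendor problem solved by sample average approximation and brute-force search, a deliberately tiny stand-in for the decomposition-plus-sampling machinery of the record above; the price, cost, and demand distribution are assumed for illustration.

```python
import random

def saa_newsvendor(cost=1.0, price=2.0, n_scenarios=20_000, seed=3):
    """Sample average approximation of a tiny two-stage stochastic program
    (a newsvendor: stage 1 picks the order q, stage 2 is the sales recourse).
    Demand is uniform on [0, 100]; the known optimum is the critical fractile
    q* = 100 * (price - cost) / price = 50."""
    rng = random.Random(seed)
    demands = [rng.uniform(0.0, 100.0) for _ in range(n_scenarios)]

    def expected_profit(q):
        # average stage-2 profit over the sampled scenarios
        return sum(price * min(q, d) - cost * q for d in demands) / n_scenarios

    return max(range(101), key=expected_profit)   # brute force over integer q

print(saa_newsvendor())  # close to the critical fractile 50
```

Real stochastic LPs replace the brute-force search with Benders-style decomposition and, as in the record above, importance sampling to keep the scenario count manageable.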

  18. Modelling and simulating decision processes of linked lives: An approach based on concurrent processes and stochastic race.

    Science.gov (United States)

    Warnke, Tom; Reinhardt, Oliver; Klabunde, Anna; Willekens, Frans; Uhrmacher, Adelinde M

    2017-10-01

    Individuals' decision processes play a central role in understanding modern migration phenomena and other demographic processes. Their integration into agent-based computational demography depends largely on suitable support by a modelling language. We are developing the Modelling Language for Linked Lives (ML3) to describe the diverse decision processes of linked lives succinctly in continuous time. The context of individuals is modelled by networks the individual is part of, such as family ties and other social networks. Central concepts, such as behaviour conditional on agent attributes, age-dependent behaviour, and stochastic waiting times, are tightly integrated in the language. Thereby, alternative decisions are modelled by concurrent processes that compete by stochastic race. Using a migration model, we demonstrate how this allows for compact description of complex decisions, here based on the Theory of Planned Behaviour. We describe the challenges for the simulation algorithm posed by stochastic race between multiple concurrent complex decisions.
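
The stochastic-race semantics described in this record, concurrent alternatives each draw a random waiting time and the earliest one fires, can be sketched directly. The rate values below are hypothetical, not taken from the ML3 migration model; the winner frequencies follow the classic result that alternative i wins with probability rate_i / sum(rates).

```python
import random

def stochastic_race(rates, rng):
    """Each competing process draws an exponential waiting time from its rate;
    the process with the smallest time wins and determines the transition."""
    times = {name: rng.expovariate(rate) for name, rate in rates.items()}
    winner = min(times, key=times.get)
    return winner, times[winner]

rng = random.Random(0)
rates = {"migrate": 0.3, "stay": 0.7}   # illustrative decision rates, per year
wins = sum(stochastic_race(rates, rng)[0] == "migrate" for _ in range(100_000))
print(wins / 100_000)                   # close to 0.3 / (0.3 + 0.7) = 0.3
```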

  19. Managerial performance and cost efficiency of Japanese local public hospitals: a latent class stochastic frontier model.

    Science.gov (United States)

    Besstremyannaya, Galina

    2011-09-01

    The paper explores the link between managerial performance and cost efficiency of 617 Japanese general local public hospitals in 1999-2007. Treating managerial performance as unobservable heterogeneity, the paper employs a panel data stochastic cost frontier model with latent classes. Financial parameters associated with better managerial performance are found to be positively significant in explaining the probability of belonging to the more efficient latent class. The analysis of latent class membership was consistent with the conjecture that unobservable technological heterogeneity reflected in the existence of the latent classes is related to managerial performance. The findings may support the cause for raising efficiency of Japanese local public hospitals by enhancing the quality of management. Copyright © 2011 John Wiley & Sons, Ltd.

  20. On the use of reverse Brownian motion to accelerate hybrid simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bakarji, Joseph; Tartakovsky, Daniel M., E-mail: tartakovsky@stanford.edu

    2017-04-01

    Multiscale and multiphysics simulations are two rapidly developing fields of scientific computing. Efficient coupling of continuum (deterministic or stochastic) constitutive solvers with their discrete (stochastic, particle-based) counterparts is a common challenge in both kinds of simulations. We focus on interfacial, tightly coupled simulations of diffusion that combine continuum and particle-based solvers. The latter employs the reverse Brownian motion (rBm), a Monte Carlo approach that allows one to enforce inhomogeneous Dirichlet, Neumann, or Robin boundary conditions and is trivially parallelizable. We discuss numerical approaches for improving the accuracy of rBm in the presence of inhomogeneous Neumann boundary conditions and alternative strategies for coupling the rBm solver with its continuum counterpart. Numerical experiments are used to investigate the convergence, stability, and computational efficiency of the proposed hybrid algorithm.
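
The particle-based half of such a hybrid can be illustrated with the simplest walk-based diffusion solver: the value of a harmonic function at an interior point equals the expected boundary value at the exit point of a random walk started there. The lattice walk below is a simplified stand-in for the reverse-Brownian-motion solver of the paper, with an assumed unit-square domain and Dirichlet data u = x.

```python
import random

def walk_estimate(x0, y0, n_grid=20, n_walks=4000, seed=9):
    """Estimate u(x0, y0) for Laplace's equation on the unit square with
    Dirichlet data u = x on the boundary, via lattice random walks: u at an
    interior node equals the expected boundary value at the walk's exit point."""
    rng = random.Random(seed)
    h = 1.0 / n_grid
    total = 0.0
    for _ in range(n_walks):
        i, j = round(x0 / h), round(y0 / h)
        while 0 < i < n_grid and 0 < j < n_grid:
            di, dj = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            i, j = i + di, j + dj
        total += i * h          # boundary value u = x at the exit node
    return total / n_walks

print(walk_estimate(0.5, 0.5))  # close to 0.5, since u(x, y) = x is harmonic
```

Because each walk is independent, the method is trivially parallelizable, which is the property the record highlights.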

  1. Quantization of dynamical systems and stochastic control theory

    International Nuclear Information System (INIS)

    Guerra, F.; Morato, L.M.

    1982-09-01

    In the general framework of stochastic control theory we introduce a suitable form of stochastic action associated to the controlled process. A variational principle then yields all the main features of Nelson's stochastic mechanics. In particular we derive the expression of the current velocity field as the gradient of the phase action. Moreover, the stochastic corrections to the Hamilton-Jacobi equation are in agreement with the quantum mechanical form of the Madelung fluid (equivalent to the Schroedinger equation). Therefore stochastic control theory can provide a very simple model simulating quantum mechanical behavior.

  2. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    Science.gov (United States)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  3. Metaheuristic simulation optimisation for the stochastic multi-retailer supply chain

    Science.gov (United States)

    Omar, Marina; Mustaffa, Noorfa Haszlinna H.; Othman, Siti Norsyahida

    2013-04-01

    Supply Chain Management (SCM) is an important activity in all producing facilities and in many organizations, enabling vendors, manufacturers and suppliers to interact gainfully and to plan the flow of goods and services optimally. Simulation optimization is now widely used in research on finding the best solution for decision-making processes in SCM, which generally face considerable complexity arising from large sources of uncertainty and various decision factors. Metaheuristic methods are the most popular simulation optimization approach; however, very few studies have applied them to optimizing simulation models of supply chains. Thus, this paper evaluates the performance of a metaheuristic method for stochastic supply chains in determining the best flexible inventory replenishment parameters that minimize the total operating cost. The simulation optimization model is based on the Bees Algorithm (BA), which has been widely applied in engineering applications such as training neural networks for pattern recognition. BA is a recent member of the metaheuristics family that models the natural food-foraging behaviour of honey bees. Honey bees use several mechanisms, such as the waggle dance, to optimally locate food sources and to search for new ones, which makes them a good candidate for developing new algorithms for solving optimization problems. The model considers an outbound centralised distribution system consisting of one supplier and three identical retailers, assumed to be independent and identically distributed with unlimited supply capacity at the supplier.
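
The Bees Algorithm mentioned in this record alternates random scouting with intensified neighbourhood search around the best sites found so far. The sketch below is a minimal version on a toy quadratic objective, not the paper's supply-chain simulation; all parameter values (scout count, elite count, shrink rate) are illustrative.

```python
import random

def bees_minimize(f, bounds, n_scouts=20, n_best=5, n_recruits=10,
                  iters=60, seed=4):
    """Minimal Bees Algorithm sketch: scouts search randomly, the best sites
    each get recruits that search a shrinking neighbourhood around them."""
    rng = random.Random(seed)
    rand_point = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    sites = [rand_point() for _ in range(n_scouts)]
    for it in range(iters):
        sites.sort(key=f)
        ngh = 0.92 ** it                # neighbourhood radius shrinks over time
        new_sites = []
        for site in sites[:n_best]:     # local (recruit) search around elites
            nbrs = [[x + rng.uniform(-ngh, ngh) for x in site]
                    for _ in range(n_recruits)]
            new_sites.append(min(nbrs + [site], key=f))
        # remaining bees keep scouting at random
        new_sites += [rand_point() for _ in range(n_scouts - n_best)]
        sites = new_sites
    return min(sites, key=f)

best = bees_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2,
                     [(-10, 10), (-10, 10)])
print(best)  # near the minimiser (3, -1)
```

In the simulation-optimization setting of the record, f would be a stochastic supply-chain simulation returning total operating cost rather than an analytic function.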

  4. Project Evaluation and Cash Flow Forecasting by Stochastic Simulation

    Directory of Open Access Journals (Sweden)

    Odd A. Asbjørnsen

    1983-10-01

    The net present value of a discounted cash flow is used to evaluate projects. It is shown that the Laplace transform of the cash flow time function is particularly useful when the cash flow profiles may be approximately described by ordinary linear differential equations in time. However, real cash flows are stochastic variables due to the stochastic nature of the disturbances during production.
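
The Laplace-transform observation can be made concrete: the NPV of a continuous cash flow c(t) at discount rate r is exactly its Laplace transform evaluated at s = r, NPV = ∫ c(t) e^{-rt} dt. The sketch below checks this for an exponentially declining profile (an ODE-type cash flow), whose transform C/(r + k) is closed-form; the numbers are illustrative.

```python
import math

def npv_numeric(cashflow, r, t_max=60.0, dt=0.01):
    """Net present value as the Laplace transform of the cash flow c(t)
    evaluated at the discount rate r: NPV = integral of c(t)*e^{-rt} dt
    (trapezoid rule, truncated at t_max)."""
    steps = int(t_max / dt)
    total = 0.0
    for n in range(steps):
        t0, t1 = n * dt, (n + 1) * dt
        total += 0.5 * dt * (cashflow(t0) * math.exp(-r * t0)
                             + cashflow(t1) * math.exp(-r * t1))
    return total

# c(t) = C*e^{-kt} solves a first-order linear ODE; its transform at s = r
# is C / (r + k) in closed form.
C, k, r = 100.0, 0.2, 0.1
print(npv_numeric(lambda t: C * math.exp(-k * t), r), C / (r + k))
```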

  5. FEAMAC/CARES Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composites

    Science.gov (United States)

    Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Bhatt, Ramakrishna

    2016-01-01

    Reported here is a coupling of two NASA-developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user-defined material). Here we describe the FEAMAC/CARES code and show an example problem (taken from the open literature) of a laminated CMC under off-axis loading. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.

  6. Efficient Multilevel and Multi-index Sampling Methods in Stochastic Differential Equations

    KAUST Repository

    Haji-Ali, Abdul Lateef

    2016-05-22

    of this thesis is the novel Multi-index Monte Carlo (MIMC) method, which is an extension of MLMC to high-dimensional problems with significant computational savings. Under reasonable assumptions on the weak and variance convergence, which are related to the mixed regularity of the underlying problem and the discretization method, the order of the computational complexity of MIMC is, at worst up to a logarithmic factor, independent of the dimensionality of the underlying parametric equation. We also apply the same multi-index methodology to another sampling method, namely the Stochastic Collocation method. Hence, the novel Multi-index Stochastic Collocation method is proposed and is shown to be more efficient in problems with sufficient mixed regularity than our novel MIMC method and other standard methods. Finally, MIMC is applied to approximate quantities of interest of stochastic particle systems in the mean-field limit when the number of particles tends to infinity. To approximate these quantities of interest up to an error tolerance, TOL, MIMC has a computational complexity of O(TOL^-2 log(TOL)^2). This complexity is achieved by building a hierarchy based on two discretization parameters: the number of time steps in a Milstein scheme and the number of particles in the particle system. Moreover, we use a partitioning estimator to increase the correlation between two stochastic particle systems with different sizes. In comparison, the optimal computational complexity of MLMC in this case is O(TOL^-3) and the computational complexity of Monte Carlo is O(TOL^-4).
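
The MLMC idea that MIMC extends can be shown with the smallest possible hierarchy: two levels of an Euler discretization of geometric Brownian motion, coupled through shared Brownian increments so that the fine-minus-coarse correction has small variance and needs few samples. All parameter values below are illustrative.

```python
import math, random

def euler_pair(rng, s0, r, sigma, T, n_fine):
    """One geometric-Brownian-motion path discretized by Euler-Maruyama on a
    fine grid and, from the *same* Brownian increments, on a grid with half as
    many steps; the coupling keeps the fine-coarse difference small."""
    dt = T / n_fine
    s_f = s_c = s0
    dw_sum = 0.0
    for i in range(n_fine):
        dw = rng.gauss(0.0, math.sqrt(dt))
        s_f += r * s_f * dt + sigma * s_f * dw
        dw_sum += dw
        if i % 2 == 1:                # one coarse step per two fine steps
            s_c += r * s_c * (2 * dt) + sigma * s_c * dw_sum
            dw_sum = 0.0
    return s_f, s_c

def mlmc_two_level(n0=20_000, n1=5_000, seed=11):
    """Two-level MLMC estimate of E[S_T]: a cheap coarse-level mean plus the
    mean of the coupled fine-minus-coarse corrections, which needs far fewer
    samples because the corrections have small variance."""
    rng = random.Random(seed)
    s0, r, sigma, T = 1.0, 0.05, 0.2, 1.0
    level0 = sum(euler_pair(rng, s0, r, sigma, T, 8)[0] for _ in range(n0)) / n0
    diffs = [euler_pair(rng, s0, r, sigma, T, 16) for _ in range(n1)]
    correction = sum(f - c for f, c in diffs) / n1
    return level0 + correction

print(mlmc_two_level(), math.exp(0.05))  # estimate vs exact E[S_T] = e^{rT}
```

MIMC generalizes this telescoping sum to a multi-dimensional index over several discretization parameters at once (e.g. time steps and particle counts in the thesis).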

  7. The Relative Efficiencies of Research Universities of Science and Technology in China: Based on the Data Envelopment Analysis and Stochastic Frontier Analysis

    Science.gov (United States)

    Chuanyi, Wang; Xiaohong, Lv; Shikui, Zhao

    2016-01-01

    This paper applies data envelopment analysis (DEA) and stochastic frontier analysis (SFA) to explore the relative efficiency of China's research universities of science and technology. According to the finding, when talent training is the only output, the efficiency of research universities of science and technology is far lower than that of…

  8. Estimating cost efficiency of Turkish commercial banks under unobserved heterogeneity with stochastic frontier models

    Directory of Open Access Journals (Sweden)

    Hakan Gunes

    2016-12-01

    This study aims to investigate the cost efficiency of Turkish commercial banks over the restructuring period of the Turkish banking system, which coincides with the 2008 global financial crisis and the 2010 European sovereign debt crisis. To this end, within the stochastic frontier framework, we employ the true fixed effects model, where the unobserved bank heterogeneity is integrated in the inefficiency distribution at a mean level. To select the cost function with the most appropriate inefficiency correlates, we first adopt a search algorithm and then utilize the model averaging approach to verify that our results are not exposed to model selection bias. Overall, our empirical results reveal that cost efficiencies of Turkish banks have improved over time, with the effects of the 2008 and 2010 crises remaining rather limited. Furthermore, not only the cost efficiency scores but also impacts of the crises on those scores appear to vary with regard to bank size and ownership structure, in accordance with much of the existing literature.

  9. Stochastic Rotation Dynamics simulations of wetting multi-phase flows

    Science.gov (United States)

    Hiller, Thomas; Sanchez de La Lama, Marta; Brinkmann, Martin

    2016-06-01

    Multi-color Stochastic Rotation Dynamics (SRDmc) has been introduced by Inoue et al. [1,2] as a particle based simulation method to study the flow of emulsion droplets in non-wetting microchannels. In this work, we extend the multi-color method to also account for different wetting conditions. This is achieved by assigning the color information not only to fluid particles but also to virtual wall particles that are required to enforce proper no-slip boundary conditions. To extend the scope of the original SRDmc algorithm to e.g. immiscible two-phase flow with viscosity contrast we implement an angular momentum conserving scheme (SRD+mc). We perform extensive benchmark simulations to show that a mono-phase SRDmc fluid exhibits bulk properties identical to a standard SRD fluid and that SRDmc fluids are applicable to a wide range of immiscible two-phase flows. To quantify the adhesion of a SRD+mc fluid in contact to the walls we measure the apparent contact angle from sessile droplets in mechanical equilibrium. For a further verification of our wettability implementation we compare the dewetting of a liquid film from a wetting stripe to experimental and numerical studies of interfacial morphologies on chemically structured surfaces.

  10. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    International Nuclear Information System (INIS)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-01-01

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries
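
The stochastic side of such a hybrid is typically an exact SSA (Gillespie) simulation, which can be contrasted with its mean-field limit on the simplest reaction network. The sketch below simulates a birth-death process and compares its long-run average with the deterministic steady state k/g; the rates are illustrative, and no spatial coupling is attempted here.

```python
import random

def gillespie_birth_death(k=10.0, g=0.1, t_end=200.0, seed=5):
    """Exact stochastic simulation (Gillespie SSA) of production/degradation
    0 --k--> X, X --g--> 0; the mean-field ODE gives the steady state x* = k/g."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    samples = []
    while t < t_end:
        a_birth, a_death = k, g * x
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)          # time to next reaction
        if rng.random() < a_birth / a_total:   # choose which reaction fires
            x += 1
        else:
            x -= 1
        if t > t_end / 2:                      # discard transient, keep tail
            samples.append(x)
    return sum(samples) / len(samples)

print(gillespie_birth_death(), 10.0 / 0.1)  # stochastic mean vs mean-field x* = 100
```

A hybrid scheme like the one in the record would run this kind of simulation only where copy numbers are small, handing the rest of the domain to a discretised PDE.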

  11. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    Energy Technology Data Exchange (ETDEWEB)

    Spill, Fabian, E-mail: fspill@bu.edu [Department of Biomedical Engineering, Boston University, 44 Cummington Street, Boston, MA 02215 (United States); Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Guerrero, Pilar [Department of Mathematics, University College London, Gower Street, London WC1E 6BT (United Kingdom); Alarcon, Tomas [Centre de Recerca Matematica, Campus de Bellaterra, Edifici C, 08193 Bellaterra (Barcelona) (Spain); Departament de Matemàtiques, Universitat Atonòma de Barcelona, 08193 Bellaterra (Barcelona) (Spain); Maini, Philip K. [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Byrne, Helen [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Computational Biology Group, Department of Computer Science, University of Oxford, Oxford OX1 3QD (United Kingdom)

    2015-10-15

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.

  12. Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale

    Energy Technology Data Exchange (ETDEWEB)

    Zabaras, Nicolas J. [Cornell Univ., Ithaca, NY (United States)

    2016-11-08

    Predictive Modeling of multiscale and Multiphysics systems requires accurate data driven characterization of the input uncertainties, and understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low dimensional input models, and surrogate low complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.

  13. Stochastic volatility and stochastic leverage

    DEFF Research Database (Denmark)

    Veraart, Almut; Veraart, Luitgard A. M.

    This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic treatment of stochastic leverage and propose to model the stochastic leverage effect explicitly, e.g. by means of a linear transformation of a Jacobi process. Such models are both analytically tractable and allow for a direct economic interpretation. In particular, we propose two new stochastic volatility models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk neutral world by focusing on implied volatilities generated by option prices derived from our new...

  14. Stochastic dynamic modeling of regular and slow earthquakes

    Science.gov (United States)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even though we discretize the model space with small grids, heterogeneity at smaller scales than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales, we need to consider stochastic interactions between slip and stress in dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such an external force with fluctuation can also be considered as a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations in the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbations, we adopt Gaussian deviations in the formulation of the slip-stress kernel, external force, and friction. By increasing the amplitude of perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interaction at S-wave velocity is analogous to the kinetic theory of gases: thermal...

  15. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  16. A stochastic six-degree-of-freedom flight simulator for passively controlled high power rockets

    OpenAIRE

    Box, Simon; Bishop, Christopher M.; Hunt, Hugh

    2011-01-01

    This paper presents a method for simulating the flight of a passively controlled rocket in six degrees of freedom, and the descent under parachute in three degrees of freedom. Also presented is a method for modelling the uncertainty in both the rocket dynamics and the atmospheric conditions using stochastic parameters and the Monte-Carlo method. Within this, we present a method for quantifying the uncertainty in the atmospheric conditions using historical atmospheric data. The core si...

  17. American option pricing with stochastic volatility processes

    Directory of Open Access Journals (Sweden)

    Ping LI

    2017-12-01

    To treat the option pricing problem more completely, option pricing under the Heston stochastic volatility model is considered. The optimal exercise boundary of the American option and the conditions for its early exercise are analyzed and discussed. Since there is no analytical American option pricing formula, the stochastic partial differential equation satisfied by American options with Heston stochastic volatility is transformed, through spatial discretization, into a corresponding system of differential equations, and a high-order compact finite difference method is then used to obtain numerical solutions for the option price. Numerical experiments are carried out to verify the theoretical results and the simulation. The two optimal exercise boundaries obtained under constant volatility and under stochastic volatility are compared, and the results show that the optimal exercise boundary also exhibits stochastic volatility. Under the chosen parameter setting, the behavior and nature of the volatility are analyzed, the volatility curve is simulated, the results of the high-order compact difference method are compared, and the numerical option solution is obtained, thereby verifying the method. The results provide a reference for solving option pricing problems under stochastic volatility, such as multi-asset option pricing and barrier option pricing.
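
The key numerical ingredient, marching a finite-difference grid backwards in time while enforcing the early-exercise (free-boundary) condition at every step, can be shown in a stripped-down setting. The sketch below prices an American put under *constant* volatility with a plain explicit scheme, a simplified stand-in for the paper's high-order compact scheme for the Heston model; all parameter values are illustrative.

```python
import math

def american_put_fd(s0=10.0, K=10.0, r=0.05, sigma=0.3, T=1.0,
                    s_max=30.0, m=100, n=2000):
    """Explicit finite-difference pricing of an American put under constant
    volatility.  Early exercise is enforced at every time step by taking the
    maximum of the continuation value and the payoff."""
    ds, dt = s_max / m, T / n          # dt chosen small enough for stability
    payoff = [max(K - i * ds, 0.0) for i in range(m + 1)]
    v = payoff[:]                                    # value at maturity
    for _ in range(n):                               # march backwards in time
        new = v[:]
        for i in range(1, m):
            s = i * ds
            delta = (v[i + 1] - v[i - 1]) / (2 * ds)
            gamma = (v[i + 1] - 2 * v[i] + v[i - 1]) / ds**2
            cont = v[i] + dt * (0.5 * sigma**2 * s**2 * gamma
                                + r * s * delta - r * v[i])
            new[i] = max(cont, payoff[i])            # free-boundary condition
        new[0], new[m] = K, 0.0                      # boundary values
        v = new
    return v[round(s0 / ds)]

print(american_put_fd())  # roughly 1.0 for these parameters
```

The Heston version adds a second (variance) grid dimension and the compact stencil, but the backward march with the max-against-payoff step is the same idea.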

  18. Stochastic optimization: beyond mathematical programming

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Stochastic optimization, including bio-inspired algorithms, is gaining momentum in areas where more classical optimization algorithms fail to deliver satisfactory results, or simply cannot be directly applied. This presentation will introduce baseline stochastic optimization algorithms, and illustrate their efficiency in different domains, from continuous non-convex problems to combinatorial optimization problems, and to problems for which a non-parametric formulation can help explore unforeseen possible solution spaces.

  19. Simulation-optimization framework for multi-site multi-season hybrid stochastic streamflow modeling

    Science.gov (United States)

    Srivastav, Roshan; Srinivasan, K.; Sudheer, K. P.

    2016-11-01

    A simulation-optimization (S-O) framework is developed for the hybrid stochastic modeling of multi-site multi-season streamflows. The multi-objective optimization model formulated is the driver and the multi-site, multi-season hybrid matched block bootstrap model (MHMABB) is the simulation engine within this framework. The multi-site multi-season simulation model is the extension of the existing single-site multi-season simulation model. A robust and efficient evolutionary search based technique, namely, non-dominated sorting based genetic algorithm (NSGA - II) is employed as the solution technique for the multi-objective optimization within the S-O framework. The objective functions employed are related to the preservation of the multi-site critical deficit run sum and the constraints introduced are concerned with the hybrid model parameter space, and the preservation of certain statistics (such as inter-annual dependence and/or skewness of aggregated annual flows). The efficacy of the proposed S-O framework is brought out through a case example from the Colorado River basin. The proposed multi-site multi-season model AMHMABB (whose parameters are obtained from the proposed S-O framework) preserves the temporal as well as the spatial statistics of the historical flows. Also, the other multi-site deficit run characteristics namely, the number of runs, the maximum run length, the mean run sum and the mean run length are well preserved by the AMHMABB model. Overall, the proposed AMHMABB model is able to show better streamflow modeling performance when compared with the simulation based SMHMABB model, plausibly due to the significant role played by: (i) the objective functions related to the preservation of multi-site critical deficit run sum; (ii) the huge hybrid model parameter space available for the evolutionary search and (iii) the constraint on the preservation of the inter-annual dependence. Split-sample validation results indicate that the AMHMABB model is
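
The bootstrap engine inside such a framework can be sketched in its simplest form: resampling whole years of a multi-season record preserves the within-year seasonal dependence exactly. The code below is a bare-bones stand-in for the matched block bootstrap (MHMABB) of the record, using synthetic data with assumed seasonal means.

```python
import random

def block_bootstrap_years(flows, n_reps=500, seed=8):
    """Block bootstrap for multi-season streamflow: resample whole years
    (rows) with replacement, so within-year seasonal dependence is kept intact."""
    rng = random.Random(seed)
    n_years = len(flows)
    return [[flows[rng.randrange(n_years)] for _ in range(n_years)]
            for _ in range(n_reps)]

# Synthetic record: 30 years x 4 seasons, wet season first (illustrative data).
rng = random.Random(1)
hist = [[rng.gauss(m, 0.1 * m) for m in (100.0, 60.0, 30.0, 50.0)]
        for _ in range(30)]
reps = block_bootstrap_years(hist)

season_mean = lambda data, s: sum(year[s] for year in data) / len(data)
boot_mean = sum(season_mean(rep, 0) for rep in reps) / len(reps)
print(season_mean(hist, 0), boot_mean)  # bootstrap preserves the seasonal mean
```

The S-O framework of the record wraps a richer, matched variant of this resampler in a multi-objective search (NSGA-II) over its parameters.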

  20. STOCHASTIC SIMULATION FOR BUFFELGRASS (Cenchrus ciliaris L.) PASTURES IN MARIN, N. L., MEXICO

    Directory of Open Access Journals (Sweden)

    José Romualdo Martínez-López

    2014-04-01

    Full Text Available A stochastic simulation model was constructed to determine the response of net primary production of buffelgrass (Cenchrus ciliaris L.) and its dry matter intake by cattle in Marín, NL, México. Buffelgrass is very important for the extensive livestock industry in arid and semiarid areas of northeastern Mexico. The objective of this experiment was to evaluate the behavior of the model by comparing its results with those reported in the literature. The model simulates the monthly production of dry matter of green grass, as well as its conversion to senescent and dry grass and eventually to mulch, depending on precipitation and temperature. The model also simulates consumption of green and dry grass by cattle. The stocking rate used in the model simulation was 2 hectares per animal unit. Annual production ranged from 4.5 to 10.2 t of dry matter per hectare with annual rainfall of 300 to 704 mm, respectively. The total annual intake required per animal unit was estimated at 3.6 t. Simulated net primary production coincides with reports in the literature, so the model was evaluated successfully.
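
    The monthly pool structure described, rainfall-driven growth of green grass, transfer to dry grass and mulch, and removal by cattle at 2 ha per animal unit, can be sketched as below. All rates (rain-use efficiency, senescence and decay fractions, intake) are illustrative assumptions, not the paper's calibrated values.

```python
import random

def simulate_pasture(monthly_rain_mm, seed=0):
    """Toy monthly buffelgrass dry-matter pool model (illustrative parameters).

    Green growth is proportional to rainfall with multiplicative noise;
    fixed fractions move green -> dry -> mulch each month; cattle stocked
    at 2 ha per animal unit remove a fixed monthly demand, green first.
    """
    rng = random.Random(seed)
    green = dry = mulch = 0.0            # kg DM per hectare
    rue = 12.0                           # kg DM/ha per mm rain (assumed)
    demand = 150.0                       # kg DM/ha/month intake (assumed)
    for rain in monthly_rain_mm:
        green += rue * rain * rng.uniform(0.8, 1.2)   # stochastic growth
        senesced = 0.2 * green                        # green -> dry
        green -= senesced
        dry += senesced
        decayed = 0.1 * dry                           # dry -> mulch
        dry -= decayed
        mulch += decayed
        eaten_green = min(green, demand)              # cattle prefer green
        green -= eaten_green
        dry -= min(dry, demand - eaten_green)         # rest taken from dry
    return green, dry, mulch
```

    With growth scaling linearly in rainfall, annual production in this sketch rises roughly in proportion to annual rain, in the same spirit as the 4.5-10.2 t/ha range over 300-704 mm reported above.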

  1. Lifetime distribution in thermal fatigue - a stochastic geometry approach

    International Nuclear Information System (INIS)

    Kullig, E.; Michel, B.

    1996-02-01

    The present report describes an interpretation approach for crack patterns generated on the smooth surface of austenitic specimens under thermal fatigue loading. A framework for the fracture-mechanics characterization of equibiaxially loaded branched surface cracks is developed, which also accounts for crack interaction effects. Advanced methods for the statistical evaluation of crack patterns using suitable characteristic quantities are developed. An efficient simulation procedure allows the impact of different variables of the stochastic crack growth model on the generated crack patterns to be identified. (orig.) [de

  2. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth

  3. Searching for stable Si(n)C(n) clusters: combination of stochastic potential surface search and pseudopotential plane-wave Car-Parrinello simulated annealing simulations.

    Science.gov (United States)

    Duan, Xiaofeng F; Burggraf, Larry W; Huang, Lingyu

    2013-07-22

    To find low-energy Si(n)C(n) structures out of hundreds to thousands of isomers, we have developed a general method to search for stable isomeric structures that combines a stochastic potential surface search with pseudopotential plane-wave density functional theory Car-Parrinello molecular dynamics simulated annealing (PSPW-CPMD-SA). We enhanced the Saunders stochastic search method to generate random cluster structures used as seed structures for PSPW-CPMD-SA simulations. This method ensures that each SA simulation samples a different potential surface region to find the regional minimum structure. By iterating this automated, parallel process on a high-performance computer we located hundreds to more than a thousand stable isomers for each Si(n)C(n) cluster. Among these, five to ten of the lowest-energy isomers were further optimized using the B3LYP/cc-pVTZ method. We applied this method to Si(n)C(n) (n = 4-12) clusters and found the lowest-energy structures, most not previously reported. By analyzing the bonding patterns of the low-energy structures of each Si(n)C(n) cluster, we observed that carbon segregations tend to form condensed conjugated rings, while silicon connects to unsaturated bonds at the periphery of the carbon segregation, as single atoms or clusters when n is small and as a silicon network spanning the carbon segregation region when n is large.

  4. The Separatrix Algorithm for synthesis and analysis of stochastic simulations with applications in disease modeling.

    Directory of Open Access Journals (Sweden)

    Daniel J Klein

    Full Text Available Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision-making process in the context of a goal-oriented objective (e.g., eradicate polio by [Formula: see text]), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which "success" is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g., 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria.
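
    The estimation step at the heart of this approach, kernel regression over binary success/failure outcomes to locate an iso-probability contour, can be sketched as follows. The synthetic success curve, bandwidth, and uniform sampling are illustrative stand-ins; the actual algorithm adds density correction and adaptive experiment design.

```python
import math
import random

def kernel_success_prob(trials, x, bandwidth=0.1):
    """Nadaraya-Watson estimate of P(success | parameter x) from
    Bernoulli trials given as (parameter, outcome) pairs, outcome in {0,1}."""
    num = den = 0.0
    for xi, yi in trials:
        w = math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
        num += w * yi
        den += w
    return num / den if den > 0.0 else 0.5

# Synthetic model: success probability rises smoothly with the parameter.
rng = random.Random(42)
true_p = lambda x: 1.0 / (1.0 + math.exp(-12.0 * (x - 0.5)))
trials = [(x, 1 if rng.random() < true_p(x) else 0)
          for x in (rng.random() for _ in range(2000))]

# Locate the parameter value expected to succeed with ~95% probability.
grid = [i / 100.0 for i in range(101)]
separatrix = min(grid, key=lambda x: abs(kernel_success_prob(trials, x) - 0.95))
```

    Each simulated trial is one Bernoulli draw, mirroring the "each run is a Bernoulli trial" framing above; the smoothed regression pools nearby trials so far fewer replicates per parameter point are needed.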

  5. Solving complex maintenance planning optimization problems using stochastic simulation and multi-criteria fuzzy decision making

    International Nuclear Information System (INIS)

    Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei; Biteus, Jonas

    2014-01-01

    One of the most important factors in the operations of many corporations today is maximizing profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities by a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.

  6. Solving complex maintenance planning optimization problems using stochastic simulation and multi-criteria fuzzy decision making

    Energy Technology Data Exchange (ETDEWEB)

    Tahvili, Sahar [Mälardalen University (Sweden); Österberg, Jonas; Silvestrov, Sergei [Division of Applied Mathematics, Mälardalen University (Sweden); Biteus, Jonas [Scania CV (Sweden)

    2014-12-10

    One of the most important factors in the operations of many corporations today is maximizing profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities by a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.
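
    A minimal sketch of the simulation side of such a framework: Monte Carlo evaluation of the long-run cost of an age-replacement policy (PM at a fixed age versus CM at failure), searched over a small set of candidate PM intervals. The lifetimes, costs, and Weibull wear-out assumption are illustrative; the framework above couples a richer discrete-event simulator to genetic algorithms and fuzzy multi-criteria ranking.

```python
import random

def policy_cost(pm_interval, scale=100.0, shape=3.0, cm_cost=50.0,
                pm_cost=10.0, horizon=100_000.0, seed=1):
    """Long-run cost rate of an age-replacement policy: replace at
    failure (corrective, expensive) or at age pm_interval (preventive,
    cheap), with Weibull wear-out lifetimes (shape > 1)."""
    rng = random.Random(seed)
    t = cost = 0.0
    while t < horizon:
        life = rng.weibullvariate(scale, shape)
        if life < pm_interval:      # failure before the planned PM
            t += life
            cost += cm_cost
        else:                       # component survives to the PM age
            t += pm_interval
            cost += pm_cost
    return cost / t

# Grid search over candidate PM intervals; a GA would search this space
# for richer, multi-component policies.
candidates = [20.0, 50.0, 100.0, 1e9]   # 1e9 ~ run-to-failure (CM only)
best_interval = min(candidates, key=policy_cost)
```

    With wear-out lifetimes the cost rate is U-shaped in the PM interval: too-frequent PM wastes preventive cost, too-rare PM pays for expensive failures.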

  7. Final Report: Improved Site Characterization And Storage Prediction Through Stochastic Inversion Of Time-Lapse Geophysical And Geochemical Data

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez, A; Mcnab, W; Hao, Y; White, D; Johnson, J

    2011-04-14

    During the last months of this project, our project activities concentrated on four areas: (1) performing a stochastic inversion of pattern 16 seismic data to deduce reservoir bulk/shear moduli and density (the need for this inversion was not anticipated in the original scope of work); (2) performing a stochastic inversion of pattern 16 seismic data to deduce reservoir porosity and permeability; (3) completing the software needed to perform geochemical inversions; and (4) using the software to perform stochastic inversion of aqueous chemistry data to deduce mineral volume fractions. This report builds on work described in previously submitted progress reports (Ramirez et al., 2009, 2010, 2011, which fulfilled the requirements of deliverables D1-D4) and fulfills deliverable D5: Field-based single-pattern simulations work product. The main challenge with our stochastic inversion approach is its large computational expense, even for single reservoir patterns. We dedicated a significant level of effort to improving computational efficiency, but inversions involving multiple patterns were still intractable by the project's end. As a result, we were unable to fulfill deliverable D6: Field-based multi-pattern simulations work product.

  8. Introduction to stochastic analysis integrals and differential equations

    CERN Document Server

    Mackevicius, Vigirdas

    2013-01-01

    This is an introduction to stochastic integration and stochastic differential equations written in an understandable way for a wide audience, from students of mathematics to practitioners in biology, chemistry, physics, and finance. The presentation is based on naïve stochastic integration rather than on abstract theories of measure and stochastic processes. The proofs are rather simple for practitioners and, at the same time, rather rigorous for mathematicians. Detailed application examples in the natural sciences and finance are presented. Much attention is paid to simulation diffusion pro

  9. An efficient forward–reverse expectation-maximization algorithm for statistical inference in stochastic reaction networks

    KAUST Repository

    Bayer, Christian

    2016-02-20

    © 2016 Taylor & Francis Group, LLC. ABSTRACT: In this work, we present an extension of the forward–reverse representation introduced by Bayer and Schoenmakers (Annals of Applied Probability, 24(5):1994–2032, 2014) to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, that is, SRNs conditional on their values in the extremes of given time intervals. We then employ this SRN bridge-generation technique to the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equations approximation; then, during phase II, we apply the Monte Carlo version of the expectation-maximization algorithm to the phase I output. By selecting a set of overdispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.

  10. Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes

    Science.gov (United States)

    Williams Colin P.

    1999-01-01

    Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long-term behavior of such processes is only tractable for very simple types of stochastic processes, such as Markovian processes. However, in real-world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(square root of N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.

  11. Simulations of DSB Yields and Radiation-induced Chromosomal Aberrations in Human Cells Based on the Stochastic Track Structure Induced by HZE Particles

    Science.gov (United States)

    Ponomarev, Artem; Plante, Ianik; George, Kerry; Wu, Honglu

    2014-01-01

    The formation of double-strand breaks (DSBs) and chromosomal aberrations (CAs) is of great importance in radiation research and, specifically, in space applications. We present a new particle track and DNA damage model in which the particle's stochastic track structure is combined with the random walk (RW) structure of chromosomes in a cell nucleus. The motivation for this effort stems from the fact that the model with the RW chromosomes, NASARTI (NASA radiation track image), previously relied on amorphous track structure, while the stochastic track structure model RITRACKS (Relativistic Ion Tracks) was focused on more microscopic targets than the entire genome. We have combined chromosomes simulated by RWs with stochastic track structure, which uses nanoscopic dose calculations performed with the Monte Carlo simulation by RITRACKS in a voxelized space. The new simulations produce the number of DSBs as a function of dose and particle fluence for high-energy particles, including iron, carbon and protons, using voxels of 20 nm dimension. The combined model also calculates yields of radiation-induced CAs and unrejoined chromosome breaks in normal and repair-deficient cells. The combined computational model is calibrated using the relative frequencies and distributions of chromosomal aberrations reported in the literature. The model considers fractionated deposition of energy to approximate dose rates of the space flight environment. It also predicts the yields and sizes of translocations, dicentrics, rings, and more complex-type aberrations formed in the G0/G1 cell cycle phase during the first cell division after irradiation. We found that the main advantage of the combined model is the ability to simulate small doses: 0.05-0.5 Gy. At such low doses, the stochastic track structure proved to be indispensable, as the action of individual delta rays becomes more important.

  12. Realistic and efficient 2D crack simulation

    Science.gov (United States)

    Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek

    2010-04-01

    Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive and efficient runtime 2D crack/fracture simulation system that applies the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides an efficient neighbor-retrieval mechanism used for mesh element splitting and merging, with the minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors including impact angle, impact energy, and material properties are taken into account to produce the criteria for crack initialization, propagation, and termination, leading to realistic fractal-like rubble/fragment formation. The aforementioned parameters are used as variables of probabilistic models of crack/shard formation, making the proposed solution highly adaptive: machine learning mechanisms learn the optimal values for the variables/parameters from prior benchmark data generated by off-line physics-based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted on various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system has the capability to realistically and efficiently simulate 2D crack phenomena (such as window shattering and shard generation) with diverse potential in military and civil M&S applications such as training and mission planning.

  13. A stochastic method for computing hadronic matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrou, Constantia [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus). Computational-based Science and Technology Research Center; Dinter, Simon; Drach, Vincent [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Jansen, Karl [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hadjiyiannakou, Kyriakos [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Collaboration: European Twisted Mass Collaboration

    2013-02-15

    We present a stochastic method for the calculation of baryon three-point functions that is more versatile than the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio, suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.

  14. A stochastic frontier analysis of technical efficiency of fish cage culture in Peninsular Malaysia.

    Science.gov (United States)

    Islam, Gazi Md Nurul; Tai, Shzee Yew; Kusairi, Mohd Noh

    2016-01-01

    Cage culture plays an important role in achieving higher output and generating more export earnings in Malaysia. However, the costs of fingerlings, feed and labour have increased substantially for cage culture in the coastal areas of Peninsular Malaysia. This paper uses farm-level data gathered from Manjung, Perak and Kota Tinggi, Johor to investigate the technical efficiency of brackish-water fish cage culture using the stochastic frontier approach. Technical efficiency was estimated, and specifically the factors affecting technical inefficiency of the fish cage culture system in Malaysia were investigated. On average, 37 percent of the sampled fish cage farms are technically efficient. The results suggest that very high degrees of technical inefficiency exist among the cage culturists, implying that great potential exists to increase fish production through improved efficiency in cage culture management in Peninsular Malaysia. The results indicate that farmers obtained grouper fingerlings from neighboring countries due to the scarcity of fingerlings from wild sources. Feeding grouper (Epinephelus fuscoguttatus) requires relatively higher costs than seabass (Lates calcarifer) production in cage farms in the study areas. Initiatives to undertake extension programmes at the farm level are needed to help cage culturists utilize their resources more efficiently in order to substantially enhance their fish production.

  15. Stochastic self-propagating star formation in three-dimensional disk galaxy simulations

    International Nuclear Information System (INIS)

    Statler, T.; Comins, N.; Smith, B.F.

    1983-01-01

    Stochastic self-propagating star formation (SSPSF) is a process of forming new stars through the compression of the interstellar medium by supernova shock waves. Coupling this activity with galactic differential rotation produces spiral structure in two-dimensional disk galaxy simulations. In this paper the first results of a three-dimensional SSPSF simulation of disk galaxies are reported. Our model generates less impressive spirals than do the two-dimensional simulations. Although some spirals do appear in equilibrium, more frequently we observe spirals as non-equilibrium states of the models: as the spiral arms evolve, they widen until the spiral structure is no longer discernible. The two free parameters that we vary in this study are the probability of star formation due to a recent nearby explosion, and the relaxation time for the interstellar medium to return to a condition of maximum star formation after it has been cleared out by an explosion and subsequent star formation. We find that equilibrium spiral structure forms over a much smaller range of these parameters in our three-dimensional SSPSF models than in similar two-dimensional models. We discuss possible reasons for these results as well as improvements to the model that are being explored.
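
    The SSPSF mechanism itself is easy to sketch as a cellular automaton: a cell forms stars with some probability if a neighbour formed stars on the previous step, then stays refractory until its gas recovers. The 1-D ring below illustrates the two free parameters mentioned above (the stimulated-formation probability and the relaxation time); it is a schematic of the mechanism, not the authors' 3-D disk model, and all parameter values are invented.

```python
import random

def sspsf(n_cells=200, steps=300, p_stim=0.18, t_relax=5,
          p_spont=0.001, seed=7):
    """1-D ring SSPSF automaton. Returns star-formation activity per step.

    active[i] is True if cell i formed stars this step; refractory[i]
    counts the remaining steps until the cell's gas has recovered."""
    rng = random.Random(seed)
    active = [False] * n_cells
    refractory = [0] * n_cells
    history = []
    for _ in range(steps):
        new_active = [False] * n_cells
        for i in range(n_cells):
            if refractory[i] > 0:          # gas not yet replenished
                refractory[i] -= 1
                continue
            neighbour_fired = active[i - 1] or active[(i + 1) % n_cells]
            p = p_stim if neighbour_fired else p_spont
            if rng.random() < p:           # stimulated or spontaneous burst
                new_active[i] = True
                refractory[i] = t_relax
        active = new_active
        history.append(sum(active))
    return history

activity = sspsf()
```

    Sweeping p_stim and t_relax in such a toy shows the qualitative behaviour described above: sustained propagating activity exists only in a limited region of the two-parameter space.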

  16. A conditional stochastic weather generator for seasonal to multi-decadal simulations

    Science.gov (United States)

    Verdin, Andrew; Rajagopalan, Balaji; Kleiber, William; Podestá, Guillermo; Bert, Federico

    2018-01-01

    We present the application of a parametric stochastic weather generator within a nonstationary context, enabling simulations of weather sequences conditioned on interannual and multi-decadal trends. The generalized linear model framework of the weather generator allows any number of covariates to be included, such as large-scale climate indices, local climate information, and seasonal precipitation and temperature, among others. Here we focus on the Salado A basin of the Argentine Pampas as a case study, but the methodology is portable to any region. We include domain-averaged (i.e., areal) seasonal total precipitation and mean maximum and minimum temperatures as covariates for conditional simulation. Areal covariates are motivated by a principal component analysis that indicates the seasonal spatial average is the dominant mode of variability across the domain. We find this modification to be effective in capturing the nonstationarity prevalent in interseasonal precipitation and temperature data. We further illustrate the ability of this weather generator to act as a spatiotemporal downscaler of seasonal forecasts and multi-decadal projections, both of which are generally of coarse resolution.
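
    The generalized-linear-model idea can be sketched minimally: a single standardized seasonal covariate shifts both a logistic rain-occurrence model and a gamma rain-amounts model. All coefficients below are invented for illustration and are not those fitted for the Salado A basin.

```python
import math
import random

def generate_season(seasonal_covariate, n_days=90, seed=3):
    """Daily precipitation for one season, conditioned on a standardised
    seasonal-total covariate via a GLM-style weather generator.

    Occurrence: logit(p_wet)   = b0 + b1 * covariate
    Amounts:    log(mean_amnt) = a0 + a1 * covariate, gamma-distributed.
    All coefficients are illustrative assumptions."""
    rng = random.Random(seed)
    b0, b1 = -0.8, 0.6                   # occurrence coefficients (assumed)
    a0, a1 = 1.6, 0.3                    # amount coefficients (assumed)
    shape = 0.9                          # gamma shape (assumed)
    p_wet = 1.0 / (1.0 + math.exp(-(b0 + b1 * seasonal_covariate)))
    mean_amount = math.exp(a0 + a1 * seasonal_covariate)
    days = []
    for _ in range(n_days):
        if rng.random() < p_wet:         # wet day: draw a gamma amount
            days.append(rng.gammavariate(shape, mean_amount / shape))
        else:
            days.append(0.0)
    return days

wet_season = generate_season(+1.5)       # above-normal seasonal total
dry_season = generate_season(-1.5)       # below-normal seasonal total
```

    Conditioning on a forecast or projected seasonal covariate in this way is what lets the generator act as a stochastic downscaler of coarse-resolution products.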

  17. A method for stochastic constrained optimization using derivative-free surrogate pattern search and collocation

    International Nuclear Information System (INIS)

    Sankaran, Sethuraman; Audet, Charles; Marsden, Alison L.

    2010-01-01

    Recent advances in coupling novel optimization methods to large-scale computing problems have opened the door to tackling a diverse set of physically realistic engineering design problems. A large computational overhead is associated with computing the cost function for most practical problems involving complex physical phenomena. Such problems are also plagued by uncertainties in a diverse set of parameters. We present a novel stochastic derivative-free optimization approach for tackling such problems. Our method extends the previously developed surrogate management framework (SMF) to allow for uncertainties in both simulation parameters and design variables. The stochastic collocation scheme is employed for the stochastic variables, whereas Kriging-based surrogate functions are employed for the cost function. This approach is tested on four numerical optimization problems and is shown to yield a significant improvement in efficiency over traditional Monte Carlo schemes. Problems with multiple probabilistic constraints are also discussed.

  18. A simulation-based interval two-stage stochastic model for agricultural nonpoint source pollution control through land retirement

    International Nuclear Information System (INIS)

    Luo, B.; Li, J.B.; Huang, G.H.; Li, H.L.

    2006-01-01

    This study presents a simulation-based interval two-stage stochastic programming (SITSP) model for agricultural nonpoint source (NPS) pollution control through land retirement under uncertain conditions. The modeling framework was established by the development of an interval two-stage stochastic program, with its random parameters provided by statistical analysis of the simulation outcomes of a distributed water quality approach. The developed model can deal with the tradeoff between agricultural revenue and 'off-site' water quality concerns under random effluent discharge for a land retirement scheme, by minimizing the expected value of the long-term total economic and environmental cost. In addition, the uncertainties, presented as interval numbers in the agriculture-water system, can be effectively quantified with interval programming. By subdividing the whole agricultural watershed into different zones, the most pollution-sensitive cropland can be identified and an optimal land retirement scheme can be obtained through the modeling approach. The developed method was applied to the Swift Current Creek watershed in Canada for soil erosion control through land retirement. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate the sediment information for this case study. Obtained results indicate that the total economic and environmental cost of the entire agriculture-water system can be limited within an interval value for the optimal land retirement schemes. Meanwhile, best and worst land retirement schemes were obtained for the study watershed under various uncertainties.

  19. Stochastic reaction-diffusion algorithms for macromolecular crowding

    Science.gov (United States)

    Sturrock, Marc

    2016-06-01

    Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.
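
    The effect of crowding obstacles on tracer transport can be illustrated with a minimal lattice sketch: immobile blocked sites reject hops, depressing the mean squared displacement, qualitatively the effect reported above. This toy uses a plain 2-D random walk rather than the SSA or Spatiocyte solvers compared in the paper, and all sizes are arbitrary.

```python
import random

def msd_on_lattice(obstacle_fraction, n_walkers=300, n_steps=200,
                   size=25, seed=11):
    """Mean squared displacement of tracers hopping on a 2-D periodic
    lattice where a fraction of sites is blocked by immobile crowders.
    A hop into a blocked site is rejected (the walker stays put)."""
    rng = random.Random(seed)
    blocked = {(x, y) for x in range(size) for y in range(size)
               if rng.random() < obstacle_fraction}
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    total = 0.0
    for _ in range(n_walkers):
        while True:                          # start on a free site
            pos = (rng.randrange(size), rng.randrange(size))
            if pos not in blocked:
                break
        px, py = pos
        x = y = 0                            # unwrapped displacement
        for _ in range(n_steps):
            dx, dy = rng.choice(moves)
            nx, ny = (px + dx) % size, (py + dy) % size
            if (nx, ny) not in blocked:      # accepted hop
                px, py = nx, ny
                x += dx
                y += dy
        total += x * x + y * y
    return total / n_walkers

free = msd_on_lattice(0.0)       # no crowding
crowded = msd_on_lattice(0.4)    # 40% of sites blocked (assumed)
```

    The lattice geometry matters here just as in the paper: the connectivity of the free sites (cubic vs hexagonal close packed in the study, square in this toy) controls how strongly a given obstacle density suppresses diffusion.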

  20. Geometric integrators for stochastic rigid body dynamics

    KAUST Repository

    Tretyakov, Mikhail

    2016-01-05

    Geometric integrators play an important role in simulating dynamical systems on long time intervals with high accuracy. We will illustrate geometric integration ideas within the stochastic context, mostly on examples of stochastic thermostats for rigid body dynamics. The talk will be mainly based on joint recent work with Ruslan Davidchak and Tom Ouldridge.

  1. Geometric integrators for stochastic rigid body dynamics

    KAUST Repository

    Tretyakov, Mikhail

    2016-01-01

    Geometric integrators play an important role in simulating dynamical systems on long time intervals with high accuracy. We will illustrate geometric integration ideas within the stochastic context, mostly on examples of stochastic thermostats for rigid body dynamics. The talk will be mainly based on joint recent work with Ruslan Davidchak and Tom Ouldridge.

  2. Verification of HYDRASTAR - A code for stochastic continuum simulation of groundwater flow

    International Nuclear Information System (INIS)

    Norman, S.

    1991-07-01

    HYDRASTAR is a code developed at Starprog AB for use in the SKB 91 performance assessment project with the following principal functions: - Reads the actual conductivity measurements from a file created from the GEOTAB database. - Regularizes the measurements to a user-chosen calculation scale. - Generates three-dimensional unconditional realizations of the conductivity field by using a supplied model of the conductivity field as a stochastic function. - Conditions the simulated conductivity field on the actual regularized measurements. - Reads the boundary conditions from a regional deterministic NAMMU computation. - Calculates the hydraulic head field, Darcy velocity field, stream lines and water travel times by solving the stationary hydrology equation and the streamline equation obtained with the velocities calculated from Darcy's law. - Generates visualizations of the realizations if desired. - Calculates statistics such as semivariograms and expectation values of the output fields by repeating the above procedure in Monte Carlo-type iterations. When computer codes are used for safety assessment purposes, their validation and verification are important. This report therefore describes work performed with the goal of verifying parts of HYDRASTAR. The verification described in this report uses comparisons with two other solutions of related examples: A. Comparison with a so-called perturbation solution of the stochastic stationary hydrology equation. This is an analytical approximation of the stochastic stationary hydrology equation valid in the case of small variability of the unconditional random conductivity field. B. Comparison with the (Hydrocoin, 1988) case 2. This is a classical example of a hydrology problem with a deterministic conductivity field. The principal feature of the problem is the presence of narrow fracture zones with high conductivity. The compared outputs are the hydraulic head field and a number of stream lines originating from a

  3. Effect of monthly areal rainfall uncertainty on streamflow simulation

    Science.gov (United States)

    Ndiritu, J. G.; Mkhize, N.

    2017-08-01

    Areal rainfall is mostly obtained from point rainfall measurements that are sparsely located, and several studies have shown that this results in large areal rainfall uncertainties at the daily time step. However, water resources assessment is often carried out at a monthly time step, and streamflow simulation is usually an essential component of this assessment. This study set out to quantify monthly areal rainfall uncertainties and assess their effect on streamflow simulation. This was achieved by: (i) quantifying areal rainfall uncertainties and using these to generate stochastic monthly areal rainfalls, and (ii) finding out how the quality of monthly streamflow simulation and streamflow variability change if stochastic areal rainfalls are used instead of historic areal rainfalls. Tests on monthly rainfall uncertainty were carried out using data from two South African catchments, while streamflow simulation was confined to one of them. A non-parametric model that had been applied at a daily time step was used for stochastic areal rainfall generation, and the Pitman catchment model calibrated using the SCE-UA optimizer was used for streamflow simulation. 100 randomly-initialised calibration-validation runs using 100 stochastic areal rainfalls were compared with 100 runs obtained using the single historic areal rainfall series. By using 4 rain gauges alternately to obtain areal rainfall, the resulting differences in areal rainfall averaged 20% of the mean monthly areal rainfall, and rainfall uncertainty was therefore highly significant. Pitman model simulations obtained coefficients of efficiency averaging 0.66 and 0.64 in calibration and validation using historic rainfalls, while the respective values using stochastic areal rainfalls were 0.59 and 0.57. Average bias was less than 5% in all cases. The streamflow ranges using historic rainfalls averaged 29% of the mean naturalised flow in calibration and validation and the respective average ranges using stochastic

  4. Improved estimation of hydraulic conductivity by combining stochastically simulated hydrofacies with geophysical data.

    Science.gov (United States)

    Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao

    2016-03-01

    Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. A Markov chain model with a transition probability matrix was adopted to reconstruct the structure of the hydrofacies and derive spatial deposit information. Geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastically simulated hydrofacies model reflects the sedimentary features, with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling.
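The two formulas named above combine naturally in a short sketch. The following is a minimal illustration, not the authors' code: porosity is recovered from resistivity via Archie's law, F = a * phi^(-m), and fed into the Kozeny-Carman equation. The cementation exponent `m`, tortuosity factor `a`, resistivity values and grain size are all illustrative assumptions.

```python
import math

def porosity_from_archie(rho_bulk, rho_water, a=1.0, m=2.0):
    """Invert Archie's law F = a * phi**(-m), with formation factor
    F = rho_bulk / rho_water, to recover porosity phi."""
    formation_factor = rho_bulk / rho_water
    return (a / formation_factor) ** (1.0 / m)

def kozeny_carman(d_grain, phi, rho=1000.0, g=9.81, mu=1.0e-3):
    """Kozeny-Carman hydraulic conductivity K [m/s] for grain diameter
    d_grain [m] and porosity phi (water at about 20 degC by default)."""
    return (rho * g / mu) * (d_grain ** 2 / 180.0) * phi ** 3 / (1.0 - phi) ** 2

# hypothetical resistivities and grain size for one cell of a model
phi = porosity_from_archie(rho_bulk=40.0, rho_water=10.0)
K = kozeny_carman(d_grain=2e-4, phi=phi)
```

Repeating this cell by cell over a stochastically simulated hydrofacies grid yields a conductivity field in the spirit of the study.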

  5. Study on the mechanism and efficiency of simulated annealing using an LP optimization benchmark problem - 113

    International Nuclear Information System (INIS)

    Qianqian, Li; Xiaofeng, Jiang; Shaohong, Zhang

    2010-01-01

    Simulated Annealing Algorithm (SAA) is a popular method for solving combinatorial optimization problems such as loading pattern optimization. The main purpose of this paper is to understand the underlying search mechanism of SAA and to study its efficiency. In this study, a general SAA that employs random pair exchange of fuel assemblies to search for the optimum fuel Loading Pattern (LP) is applied to an exhaustively searched LP optimization benchmark problem. All the possible LPs of the benchmark problem have been enumerated and evaluated via the very fast and accurate Hybrid Harmonics and Linear Perturbation (HHLP) method, such that the mechanism of SAA for LP optimization can be explicitly analyzed and its search efficiency evaluated. The generic core geometry itself dictates that only a small number of LPs can be generated by performing random single pair exchanges and that these LPs are necessarily mostly similar to the initial LP. This phase-space effect turns out to be the basic mechanism in SAA that explains its efficiency and good local search ability. A measure of search efficiency is introduced which shows that the stochastic nature of SAA greatly influences the variability of its search efficiency. It is also found that using the fuel assembly k-infinity distribution as a technique to filter the LPs can significantly enhance the SAA search efficiency. (authors)
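The random pair-exchange search described above can be sketched generically. In the sketch below, a toy permutation objective stands in for the (far more expensive) core physics evaluation; the temperature schedule and all parameters are illustrative, not the paper's settings.

```python
import math, random

def simulated_annealing(cost, pattern, t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Generic SAA: random pair-exchange moves on a permutation, with
    Metropolis acceptance and geometric cooling."""
    rng = random.Random(seed)
    cur = list(pattern)
    best, best_c = list(cur), cost(cur)
    cur_c, t = best_c, t0
    for _ in range(steps):
        i, j = rng.sample(range(len(cur)), 2)
        cand = list(cur)
        cand[i], cand[j] = cand[j], cand[i]      # random single pair exchange
        c = cost(cand)
        if c < cur_c or rng.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = list(cand), c
        t *= cooling
    return best, best_c

# toy stand-in for the expensive core evaluation: sorted order is optimal
cost = lambda p: sum(abs(v - i) for i, v in enumerate(p))
best, best_c = simulated_annealing(cost, [7, 3, 5, 0, 6, 2, 4, 1])
```

Because each move swaps only one pair, every candidate is close to the current pattern, which is exactly the locality the paper identifies as the source of SAA's good local search.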

  6. STOCHASTIC GRADIENT METHODS FOR UNCONSTRAINED OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Nataša Krejić

    2014-12-01

    Full Text Available This paper presents an overview of gradient-based methods for minimization of noisy functions. It is assumed that the objective function is either given with error terms of stochastic nature or given as a mathematical expectation. Such problems arise in the context of simulation-based optimization. The focus of this presentation is on the gradient-based Stochastic Approximation and Sample Average Approximation methods. The concept of stochastic gradient approximation of the true gradient can also be successfully extended to deterministic problems. Methods of this kind are presented for data fitting and machine learning problems.
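A minimal Robbins-Monro-style sketch of the Stochastic Approximation idea surveyed above, assuming a noisy gradient oracle and the classic diminishing step size (illustrative only, not taken from the paper):

```python
import random

def sgd(grad_noisy, x0, steps=2000, seed=1):
    """Stochastic Approximation / SGD:
    x_{k+1} = x_k - a_k * g(x_k), with diminishing steps a_k = 1/(k+1)."""
    rng = random.Random(seed)
    x = x0
    for k in range(steps):
        x -= (1.0 / (k + 1)) * grad_noisy(x, rng)
    return x

# minimize f(x) = (x - 3)^2 from noisy, unbiased gradient observations
grad = lambda x, rng: 2.0 * (x - 3.0) + rng.gauss(0.0, 0.5)
x_star = sgd(grad, x0=0.0)
```

The diminishing step size averages out the zero-mean gradient noise, so the iterates settle near the true minimizer x = 3.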

  7. Cost and technical efficiency of physician practices: a stochastic frontier approach using panel data.

    Science.gov (United States)

    Heimeshoff, Mareike; Schreyögg, Jonas; Kwietniewski, Lukas

    2014-06-01

    This is the first study to use stochastic frontier analysis to estimate both the technical and cost efficiency of physician practices. The analysis is based on panel data from 3,126 physician practices for the years 2006 through 2008. We specified the technical and cost frontiers as translog functions, using the one-step approach of Battese and Coelli to detect factors that influence the efficiency of general practitioners and specialists. Variables that had not been analyzed previously in this context (e.g., the degree of practice specialization) and a range of control variables such as patients' case-mix were included in the estimation. Our results suggest that it is important to investigate both technical and cost efficiency, as results may depend on the type of efficiency analyzed. For example, the technical efficiency of group practices was significantly higher than that of solo practices, whereas the results for cost efficiency differed. This may be due to indivisibilities in expensive technical equipment, which can lead to different types of health care services being provided by different practice types (i.e., with group practices using more expensive inputs, leading to higher costs per case despite these practices being technically more efficient). Other practice characteristics such as participation in disease management programs show the same impact on both cost and technical efficiency: participation in disease management programs led to an increase in both technical and cost efficiency, and may also have had positive effects on the quality of care. Future studies should take quality-related issues into account.
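The composed-error idea behind stochastic frontier analysis can be illustrated on simulated data. The sketch below uses a simple linear frontier and a method-of-moments recovery of the inefficiency scale from the (negative) third moment of the OLS residuals; this is a deliberately simplified illustration, not the translog/Battese-Coelli panel specification used in the study, and all numbers are synthetic.

```python
import math, random

def ols(xs, ys):
    """Simple-regression OLS intercept and slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
         sum((x - mx) ** 2 for x in xs)
    return my - b1 * mx, b1

rng = random.Random(42)
n = 4000
xs = [rng.uniform(0.0, 4.0) for _ in range(n)]
# composed error: symmetric noise v minus one-sided inefficiency u >= 0
ys = [1.0 + 0.5 * x + rng.gauss(0.0, 0.1) - abs(rng.gauss(0.0, 0.3)) for x in xs]
b0, b1 = ols(xs, ys)
res = [y - b0 - b1 * x for x, y in zip(xs, ys)]
m3 = sum(r ** 3 for r in res) / n
# half-normal u implies E[(eps - E eps)^3] = sqrt(2/pi) * (1 - 4/pi) * sigma_u^3 < 0
sigma_u = (m3 / (math.sqrt(2 / math.pi) * (1 - 4 / math.pi))) ** (1.0 / 3.0)
```

The left skew of the residuals is the fingerprint of one-sided inefficiency; here the moment formula recovers the true scale sigma_u = 0.3 up to sampling error.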

  8. Simulations of DSB Yields and Radiation-induced Chromosomal Aberrations in Human Cells Based on the Stochastic Track Structure Induced by HZE Particles

    Science.gov (United States)

    Ponomarev, Artem; Plante, Ianik; George, Kerry; Wu, Honglu

    2014-01-01

    The formation of double-strand breaks (DSBs) and chromosomal aberrations (CAs) is of great importance in radiation research and, specifically, in space applications. We present a new particle track and DNA damage model, in which the stochastic track structure of the particle is combined with the random walk (RW) structure of chromosomes in a cell nucleus. The motivation for this effort stems from the fact that the model with RW chromosomes, NASARTI (NASA radiation track image), previously relied on an amorphous track structure, while the stochastic track structure model RITRACKS (Relativistic Ion Tracks) was focused on more microscopic targets than the entire genome. We have combined chromosomes simulated by RWs with the stochastic track structure, which uses nanoscopic dose calculations performed with the Monte Carlo simulation by RITRACKS in a voxelized space. The new simulations produce the number of DSBs as a function of dose and particle fluence for high-energy particles, including iron, carbon and protons, using voxels of 20 nm dimension. The combined model also calculates yields of radiation-induced CAs and unrejoined chromosome breaks in normal and repair-deficient cells. The joined computational model is calibrated using the relative frequencies and distributions of chromosomal aberrations reported in the literature. The model considers fractionated deposition of energy to approximate the dose rates of the space flight environment. The joined model also predicts the yields and sizes of translocations, dicentrics, rings, and more complex-type aberrations formed in the G0/G1 cell cycle phase during the first cell division after irradiation. We found that the main advantage of the joined model is the ability to simulate small doses: 0.05-0.5 Gy. At such low doses, the stochastic track structure proved to be indispensable, as the action of individual delta-rays becomes more important.

  9. A cavitation model based on Eulerian stochastic fields

    Science.gov (United States)

    Magagnato, F.; Dumond, J.

    2013-12-01

    Non-linear phenomena can often be described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and in particular to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. Firstly, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  10. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2013-01-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss–Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates.
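A toy version of the ensemble-gradient Gauss-Newton update described above, with the black-box simulator replaced by a two-parameter exponential model. For brevity the "ensemble" of directional derivatives is just the two coordinate directions, and Tikhonov damping stands in for the truncated-SVD option; none of this is the authors' implementation.

```python
import math

def simulator(theta, ts):
    """Black-box forward model standing in for the flow simulator:
    y(t) = a * exp(-b * t)."""
    a, b = theta
    return [a * math.exp(-b * t) for t in ts]

def isem_step(theta, data, ts, eps=1e-4, lam=1e-6):
    """One Gauss-Newton update with the Jacobian built from directional
    derivatives (here the coordinate directions) and Tikhonov
    regularization of the normal equations (2 parameters, hard-coded)."""
    y = simulator(theta, ts)
    r = [d - yi for d, yi in zip(data, y)]
    cols = []
    for j in range(2):
        tp = list(theta)
        tp[j] += eps
        yp = simulator(tp, ts)
        cols.append([(u - v) / eps for u, v in zip(yp, y)])
    jtj = [[sum(cols[i][k] * cols[j][k] for k in range(len(ts)))
            + (lam if i == j else 0.0) for j in range(2)] for i in range(2)]
    jtr = [sum(cols[i][k] * r[k] for k in range(len(ts))) for i in range(2)]
    det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
    d0 = (jtr[0] * jtj[1][1] - jtj[0][1] * jtr[1]) / det
    d1 = (jtj[0][0] * jtr[1] - jtj[1][0] * jtr[0]) / det
    return [theta[0] + d0, theta[1] + d1]

ts = [0.2 * i for i in range(10)]
data = simulator([2.0, 0.5], ts)      # synthetic, noise-free observations
theta = [1.5, 0.8]
for _ in range(30):
    theta = isem_step(theta, data, ts)
```

Only simulator evaluations are needed, which is the point of treating the simulator as a black box: no adjoint code is ever written.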

  11. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.

  12. Stochastic estimation of electricity consumption

    International Nuclear Information System (INIS)

    Kapetanovic, I.; Konjic, T.; Zahirovic, Z.

    1999-01-01

    Electricity consumption forecasting is part of the stable functioning of the power system. It is very important for rational operation, for increasing the efficiency of control processes, and for development planning in all sectors of society. Forecasting on a scientific basis is a possible way to solve these problems. Among the different models that have been used in the area of forecasting, the stochastic approach, as a part of quantitative modelling, takes a very important place in applications. ARIMA models and the Kalman filter, as stochastic estimators, have been treated together for electricity consumption forecasting. Therefore, the main aim of this paper is to present the stochastic forecasting aspect using short time series. (author)
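As a minimal illustration of the Kalman filter used as a stochastic estimator, the sketch below filters a scalar local-level model (a random-walk level observed with noise); this is an illustrative simplification, not the specification in the paper, and the consumption series is synthetic.

```python
import random

def kalman_local_level(ys, q=1.0, r=4.0):
    """Scalar Kalman filter for the local-level model
    x_k = x_{k-1} + w_k (var q), y_k = x_k + v_k (var r);
    returns the one-step-ahead forecasts of y_2, ..., y_n."""
    x, p = ys[0], 1.0
    forecasts = []
    for y in ys[1:]:
        p += q                     # predict (state mean unchanged for a random walk)
        forecasts.append(x)        # forecast of the next observation
        k = p / (p + r)            # Kalman gain
        x += k * (y - x)           # measurement update
        p *= (1.0 - k)
    return forecasts

# hypothetical monthly consumption: drifting level plus metering noise
rng = random.Random(0)
level, series = 100.0, []
for _ in range(60):
    level += rng.gauss(0.0, 1.0)
    series.append(level + rng.gauss(0.0, 2.0))
preds = kalman_local_level(series)
```

The gain balances process variance q against observation variance r, which is why the filter works well even on the short series the paper emphasizes.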

  13. Modeling stochastic frontier based on vine copulas

    Science.gov (United States)

    Constantino, Michel; Candido, Osvaldo; Tabak, Benjamin M.; da Costa, Reginaldo Brito

    2017-11-01

    This article models a production function and analyzes the technical efficiency of listed companies in the United States, Germany and England between 2005 and 2012 based on the vine copula approach. Traditional estimates of the stochastic frontier assume that the data are multivariate normally distributed and that there is no source of asymmetry. The proposed method based on vine copulas allows us to explore different types of asymmetry and multivariate distributions. Using data on product, capital and labor, we measure the relative efficiency of the vine production function and estimate the coefficient used in the stochastic frontier literature for comparison purposes. This production vine copula predicts the value added by firms with given capital and labor in a probabilistic way. It thereby stands in sharp contrast to the production function, where the output of firms is completely deterministic. The results show that, on average, S&P500 companies are more efficient than companies listed in England and Germany, which presented similar average efficiency coefficients. For comparative purposes, the traditional stochastic frontier was estimated, and the results showed discrepancies between the coefficients obtained by the two methods, traditional and frontier-vine, opening new paths for non-linear research.

  14. Index Option Pricing Models with Stochastic Volatility and Stochastic Interest Rates

    NARCIS (Netherlands)

    Jiang, G.J.; van der Sluis, P.J.

    2000-01-01

    This paper specifies a multivariate stochastic volatility (SV) model for the S&P500 index and spot interest rate processes. We first estimate the multivariate SV model via the efficient method of moments (EMM) technique based on observations of underlying state variables, and then investigate the

  15. Real option valuation of power transmission investments by stochastic simulation

    International Nuclear Information System (INIS)

    Pringles, Rolando; Olsina, Fernando; Garcés, Francisco

    2015-01-01

    Network expansions in power markets usually lead to investment decisions subject to substantial irreversibility and uncertainty. Hence, investors need to value the flexibility to change decisions as uncertainty unfolds progressively. Real option analysis is an advanced valuation technique that enables planners to take advantage of market opportunities while preventing or mitigating losses if future conditions evolve unfavorably. In the past, many approaches for valuing real options have been developed. However, applying these methods to value transmission projects is often inappropriate, as revenue cash flows are path-dependent and affected by a myriad of uncertain variables. In this work, a valuation technique based on stochastic simulation and recursive dynamic programming, called Least-Squares Monte Carlo, is applied to properly value the deferral option in a transmission investment. The effects of the option's maturity, the initial outlay and the capital cost on the value of the postponement option are investigated. Finally, sensitivity analysis determines optimal decision regions in which to execute, postpone or reject the investment projects. - Highlights: • A modern investment appraisal method is applied to value power transmission projects. • The value of the option to postpone the decision to invest in transmission projects is assessed. • Simulation methods are best suited for valuing real options in transmission investments
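The Least-Squares Monte Carlo technique named above is the standard Longstaff-Schwartz algorithm. The sketch below applies it to a plain American put rather than a transmission investment, with illustrative market parameters; the structure (simulate paths, then regress continuation values backwards in time) is the same.

```python
import math, random

def fit_quadratic(xs, ys):
    """Least-squares fit y ~ c0 + c1*x + c2*x^2 via the normal equations."""
    sx = [sum(x ** p for x in xs) for p in range(5)]
    a = [[sx[0], sx[1], sx[2]], [sx[1], sx[2], sx[3]], [sx[2], sx[3], sx[4]]]
    b = [sum(ys), sum(y * x for x, y in zip(xs, ys)),
         sum(y * x * x for x, y in zip(xs, ys))]
    for i in range(3):                       # tiny Gaussian elimination
        for j in range(i + 1, 3):
            f = a[j][i] / a[i][i]
            a[j] = [aj - f * ai for aj, ai in zip(a[j], a[i])]
            b[j] -= f * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        c[i] = (b[i] - sum(a[i][j] * c[j] for j in range(i + 1, 3))) / a[i][i]
    return c

def lsm_american_put(s0=36.0, k=40.0, r=0.06, sigma=0.2, t=1.0,
                     steps=25, paths=8000, seed=0):
    """Longstaff-Schwartz: simulate GBM paths, then step backwards,
    regressing continuation value on (1, S, S^2) over in-the-money paths."""
    rng = random.Random(seed)
    dt = t / steps
    disc = math.exp(-r * dt)
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    s = []
    for _ in range(paths):
        p = [s0]
        for _ in range(steps):
            p.append(p[-1] * math.exp(drift + vol * rng.gauss(0.0, 1.0)))
        s.append(p)
    cash = [max(k - p[-1], 0.0) for p in s]            # exercise at maturity
    for j in range(steps - 1, 0, -1):
        cash = [c * disc for c in cash]                # discount one step
        itm = [i for i in range(paths) if k - s[i][j] > 0.0]
        if len(itm) < 3:
            continue
        coef = fit_quadratic([s[i][j] for i in itm], [cash[i] for i in itm])
        for i in itm:
            x = s[i][j]
            if k - x > coef[0] + coef[1] * x + coef[2] * x * x:
                cash[i] = k - x                        # exercise beats continuation
    return disc * sum(cash) / paths

price = lsm_american_put()
```

For a deferral option on a transmission project, the payoff and state dynamics change, but the regression-based backward recursion is identical, which is why the method copes with path-dependent cash flows.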

  16. Electricity price modeling with stochastic time change

    International Nuclear Information System (INIS)

    Borovkova, Svetlana; Schmeck, Maren Diane

    2017-01-01

    In this paper, we develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. This technique allows us to incorporate the characteristic features of electricity prices (such as seasonal volatility, time varying mean reversion and seasonally occurring price spikes) into the model in an elegant and economically justifiable way. The stochastic time change introduces stochastic as well as deterministic (e.g., seasonal) features in the price process' volatility and in the jump component. We specify the base process as a mean reverting jump diffusion and the time change as an absolutely continuous stochastic process with seasonal component. The activity rate of the stochastic time change can be related to the factors that influence supply and demand. Here we use the temperature as a proxy for the demand and hence, as the driving factor of the stochastic time change, and show that this choice leads to realistic price paths. We derive properties of the resulting price process and develop the model calibration procedure. We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths by Monte Carlo simulations. We show that the simulated price process matches the distributional characteristics of the observed electricity prices in periods of both high and low demand. - Highlights: • We develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. • We incorporate the characteristic features of electricity prices, such as seasonal volatility and spikes into the model. • We use the temperature as a proxy for the demand and hence, as the driving factor of the stochastic time change • We derive properties of the resulting price process and develop the model calibration procedure. • We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths.
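A heavily simplified sketch of the idea: a mean-reverting jump diffusion whose diffusion and jump components are sped up by a deterministic seasonal activity rate (the stochastic part of the paper's time change is omitted). All parameter values are illustrative, not the calibrated EEX values.

```python
import math, random

def simulate_price(days=365, x0=3.5, kappa=5.0, mu=3.5, sigma=0.4,
                   jump_rate=6.0, jump_scale=0.8, seed=0):
    """Euler scheme for a mean-reverting jump diffusion run on a seasonal
    business clock: the activity rate nu(t) speeds everything up in winter.
    Returns a daily log-price path; all parameters are illustrative."""
    rng = random.Random(seed)
    dt = 1.0 / 365.0
    xs = [x0]
    for d in range(days):
        nu = 1.0 + 0.5 * math.cos(2.0 * math.pi * d * dt)  # seasonal activity rate
        db = nu * dt                                       # business-time increment
        x = xs[-1]
        x += kappa * (mu - x) * db + sigma * math.sqrt(db) * rng.gauss(0.0, 1.0)
        if rng.random() < jump_rate * db:                  # seasonally modulated spike
            x += jump_scale * abs(rng.gauss(0.0, 1.0))
        xs.append(x)
    return xs
```

Because the same clock drives both the diffusion and the jump intensity, spikes cluster in the high-activity season, which is the qualitative behaviour the paper models.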

  17. Dynamics of non-holonomic systems with stochastic transport

    Science.gov (United States)

    Holm, D. D.; Putkaradze, V.

    2018-01-01

    This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.

  18. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    Science.gov (United States)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random functions for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured by just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies of the probability density evolution analysis of the wind-induced structure have been conducted to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
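For reference, the original spectral representation (OSR) that the paper's random-function approach constrains can be sketched as follows. The target spectrum here is an arbitrary stand-in, and the phases are fully random rather than generated from two elementary random variables as in the paper.

```python
import math, random

def srm_sample(spectrum, w_max, n_freq, ts, rng):
    """One sample function of a zero-mean stationary process from its
    one-sided power spectrum S(w), via the classic spectral representation
    X(t) = sum_n sqrt(2 * S(w_n) * dw) * cos(w_n * t + phi_n)."""
    dw = w_max / n_freq
    ws = [(n + 0.5) * dw for n in range(n_freq)]
    amps = [math.sqrt(2.0 * spectrum(w) * dw) for w in ws]
    phis = [rng.uniform(0.0, 2.0 * math.pi) for _ in ws]
    return [sum(a * math.cos(w * t + p) for a, w, p in zip(amps, ws, phis))
            for t in ts]

# one realization for an (arbitrary) decaying target spectrum
sample = srm_sample(lambda w: math.exp(-w), 4.0, 128,
                    [0.1 * i for i in range(50)], random.Random(0))
```

The sample variance of X(t) over realizations approaches the integral of S(w) over [0, w_max], which is the basic consistency check for any spectral-representation simulator.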

  19. Stochastic time-dependent vehicle routing problem: Mathematical models and ant colony algorithm

    Directory of Open Access Journals (Sweden)

    Zhengyu Duan

    2015-11-01

    Full Text Available This article addresses the stochastic time-dependent vehicle routing problem. Two mathematical models, named the robust optimal schedule time model and the minimum expected schedule time model, are proposed for the stochastic time-dependent vehicle routing problem; both guarantee delivery within the time windows of customers. The robust optimal schedule time model only requires the variation range of link travel time, which can be conveniently derived from historical traffic data. In addition, the robust optimal schedule time model, based on the robust optimization method, can be converted into a time-dependent vehicle routing problem. Moreover, an ant colony optimization algorithm is designed to solve the stochastic time-dependent vehicle routing problem. Owing to improvements in the initial solution and the transition probability, the ant colony optimization algorithm shows good convergence. Through computational instances and Monte Carlo simulation tests, the robust optimal schedule time model is shown to be better than the minimum expected schedule time model in computational efficiency and in coping with travel time fluctuations. Therefore, the robust optimal schedule time model is applicable in real road networks.
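A minimal ant colony optimization loop of the kind referred to above, reduced to a deterministic symmetric routing instance with no time windows or stochastic travel times; the parameter values are conventional ACO defaults, not the paper's settings.

```python
import math, random

def aco_tour(dist, n_ants=20, iters=60, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal ant colony optimization for a symmetric routing instance:
    pheromone/visibility-biased tour construction, evaporation, deposit."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                weights = [(j, tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta)
                           for j in unvisited]
                r = rng.random() * sum(w for _, w in weights)
                for j, w in weights:             # roulette-wheel selection
                    r -= w
                    if r <= 0.0:
                        break
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        for row in tau:                          # pheromone evaporation
            for j in range(n):
                row[j] *= 1.0 - rho
        for length, tour in tours:               # pheromone deposit
            for k in range(n):
                p, q = tour[k], tour[(k + 1) % n]
                tau[p][q] += 1.0 / length
                tau[q][p] += 1.0 / length
    return best_tour, best_len

# eight customers on a circle: the optimal tour follows the circle
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
dist = [[math.hypot(px - qx, py - qy) for qx, qy in pts] for px, py in pts]
tour, length = aco_tour(dist)
```

The paper's improvements act on exactly the two pieces visible here: how the first tours are seeded and how the transition weights are computed.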

  20. Stochastic development regression using method of moments

    DEFF Research Database (Denmark)

    Kühnel, Line; Sommer, Stefan Horst

    2017-01-01

    This paper considers the estimation problem arising when inferring parameters in the stochastic development regression model for manifold-valued non-linear data. Stochastic development regression captures the relation between manifold-valued response and Euclidean covariate variables using the stochastic development construction. It is thereby able to incorporate several covariate variables and random effects. The model is intrinsically defined using the connection of the manifold, and the use of stochastic development avoids linearizing the geometry. We propose to infer parameters using the Method of Moments procedure that matches known constraints on moments of the observations conditional on the latent variables. The performance of the model is investigated in a simulation example using data on finite-dimensional landmark manifolds.

  1. LP formulation of asymmetric zero-sum stochastic games

    KAUST Repository

    Li, Lichun

    2014-12-15

    This paper provides an efficient linear programming (LP) formulation of asymmetric two-player zero-sum stochastic games with finite horizon. In these stochastic games, only one player is informed of the state at each stage, and the transition law is controlled only by the informed player. Compared with the LP formulation of extensive stochastic games, whose size grows polynomially with respect to the size of the state and the size of the uninformed player's actions, our proposed LP formulation has a size that is linear with respect to the size of the state and the size of the uninformed player's actions, and hence greatly reduces the computational complexity. A travelling inspector problem is used to demonstrate the efficiency of the proposed LP formulation.
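The paper's LP concerns multi-stage games with asymmetric information and is too large to reproduce here. As a self-contained illustration of solving a zero-sum game without an LP solver, the value of a one-shot matrix game can be approximated by fictitious play, a different, classical technique that converges to the game value in zero-sum games.

```python
def fictitious_play(a, iters=20000):
    """Approximate the value of the zero-sum matrix game A (row player
    maximizes) by fictitious play: each player best-responds to the
    opponent's empirical mixture of past actions."""
    m, n = len(a), len(a[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_counts[0] = col_counts[0] = 1        # arbitrary first actions
    for _ in range(iters):
        pay = [sum(a[i][j] * col_counts[j] for j in range(n)) for i in range(m)]
        row_counts[max(range(m), key=pay.__getitem__)] += 1
        cost = [sum(a[i][j] * row_counts[i] for i in range(m)) for j in range(n)]
        col_counts[min(range(n), key=cost.__getitem__)] += 1
    x = [c / sum(row_counts) for c in row_counts]
    y = [c / sum(col_counts) for c in col_counts]
    value = sum(a[i][j] * x[i] * y[j] for i in range(m) for j in range(n))
    return x, y, value

# matching pennies: value 0, both players mix 50/50
x, y, v = fictitious_play([[1, -1], [-1, 1]])
```

An exact LP over the mixed strategies of one player gives the same value; the point of the paper is restructuring that LP so its size scales linearly for the stochastic, asymmetric-information case.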

  2. LP formulation of asymmetric zero-sum stochastic games

    KAUST Repository

    Li, Lichun; Shamma, Jeff S.

    2014-01-01

    This paper provides an efficient linear programming (LP) formulation of asymmetric two-player zero-sum stochastic games with finite horizon. In these stochastic games, only one player is informed of the state at each stage, and the transition law is controlled only by the informed player. Compared with the LP formulation of extensive stochastic games, whose size grows polynomially with respect to the size of the state and the size of the uninformed player's actions, our proposed LP formulation has a size that is linear with respect to the size of the state and the size of the uninformed player's actions, and hence greatly reduces the computational complexity. A travelling inspector problem is used to demonstrate the efficiency of the proposed LP formulation.

  3. Monte Carlo molecular simulations: improving the statistical efficiency of samples with the help of artificial evolution algorithms; Simulations moleculaires de Monte Carlo: amelioration de l'efficacite statistique de l'echantillonnage grace aux algorithmes d'evolution artificielle

    Energy Technology Data Exchange (ETDEWEB)

    Leblanc, B.

    2002-03-01

    Molecular simulation aims at simulating particles in interaction, describing a physico-chemical system. When considering Markov Chain Monte Carlo sampling in this context, we often meet the same problem of statistical efficiency as with Molecular Dynamics for the simulation of complex molecules (polymers, for example). The search for a correct sampling of the space of possible configurations with respect to the Boltzmann-Gibbs distribution is directly related to the statistical efficiency of such algorithms (i.e. the ability to rapidly provide uncorrelated states covering all of configuration space). We investigated how to improve this efficiency with the help of Artificial Evolution (AE). AE algorithms form a class of stochastic optimization algorithms inspired by Darwinian evolution. Efficiency measures that can be turned into efficiency criteria were first sought, before identifying parameters that could be optimized. The relative frequencies of each type of Monte Carlo move, usually chosen empirically within reasonable ranges, were considered first. We combined parallel simulations with a 'genetic server' in order to dynamically improve the quality of the sampling as the simulations progress. Our results show that, in comparison with some reference settings, it is possible to improve the quality of samples with respect to the chosen criterion. The same algorithm was applied to improve the Parallel Tempering technique, in order to optimize at the same time the relative frequencies of Monte Carlo moves and the relative frequencies of swapping between sub-systems simulated at different temperatures. Finally, hints for further research on optimizing the choice of additional temperatures are given. (author)

  4. Stochastic structure of annual discharges of large European rivers

    Directory of Open Access Journals (Sweden)

    Stojković Milan

    2015-03-01

    Full Text Available Water resources have become a guarantee for sustainable development on both local and global scales. Exploiting water resources involves the development of hydrological models for water management planning. In this paper we present a new stochastic model for the generation of mean annual flows. The model is based on historical characteristics of the time series of annual flows and consists of a trend component, a long-term periodic component and a stochastic component. The remaining part is the model error, which is represented as a random time series. The random time series is generated by the single bootstrap model (SBM). A stochastic ensemble of error terms at the single hydrological station is formed using the SBM method. The resulting stochastic model generates annual flows and presents a useful tool for integrated river basin planning and water management studies. The model is applied to ten large European rivers with long observation records. Validation of the model results suggests that the stochastic flows simulated by the model can be used for hydrological simulations in river basins.
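The decomposition described above (trend, long-term periodic component, and a bootstrapped error term) can be sketched as follows. The harmonic period is fixed by assumption, and the harmonic is fitted by simple projection rather than full least squares; this is an illustrative simplification of the SBM idea, not the authors' model.

```python
import math, random

def generate_annual_flows(flows, n_years, period=30.0, seed=0):
    """Generate synthetic annual flows as (linear trend) + (one long-term
    harmonic) + (bootstrapped residual drawn from the fitted errors)."""
    rng = random.Random(seed)
    n = len(flows)
    mx, my = (n - 1) / 2.0, sum(flows) / n
    b = sum((i - mx) * (q - my) for i, q in enumerate(flows)) / \
        sum((i - mx) ** 2 for i in range(n))
    a = my - b * mx
    detr = [q - a - b * i for i, q in enumerate(flows)]    # detrended series
    c = 2.0 / n * sum(d * math.cos(2 * math.pi * i / period)
                      for i, d in enumerate(detr))
    s = 2.0 / n * sum(d * math.sin(2 * math.pi * i / period)
                      for i, d in enumerate(detr))
    resid = [d - c * math.cos(2 * math.pi * i / period)
             - s * math.sin(2 * math.pi * i / period)
             for i, d in enumerate(detr)]
    out = []
    for i in range(n, n + n_years):
        det = (a + b * i + c * math.cos(2 * math.pi * i / period)
               + s * math.sin(2 * math.pi * i / period))
        out.append(det + rng.choice(resid))      # single-bootstrap error term
    return out
```

Resampling the fitted residuals preserves their empirical distribution without assuming normality, which is the appeal of the bootstrap for the error component.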

  5. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.

  6. Stochastic synchronization of coupled neural networks with intermittent control

    International Nuclear Information System (INIS)

    Yang Xinsong; Cao Jinde

    2009-01-01

    In this Letter, we study the exponential stochastic synchronization problem for coupled neural networks with stochastic noise perturbations. Based on Lyapunov stability theory, inequality techniques, the properties of the Wiener process, and by adding different intermittent controllers, several sufficient conditions are obtained to ensure exponential stochastic synchronization of coupled neural networks with or without coupling delays under stochastic perturbations. These stochastic synchronization criteria are expressed in terms of several lower-dimensional linear matrix inequalities (LMIs) and can be easily verified. Moreover, the results of this Letter are applicable to both directed and undirected weighted networks. A numerical example and its simulations are offered to show the effectiveness of our new results.

  7. Efficiency and stability of the DSBGK method

    KAUST Repository

    Li, Jun

    2012-07-09

    Recently, the DSBGK method (note: the original name DS-BGK has been changed to DSBGK for simplicity) was proposed to reduce the stochastic noise in simulating rarefied gas flows at low velocity. Its total computational time is almost independent of the magnitude of the deviation from the equilibrium state. It was verified against the DSMC method in different benchmark problems over a wide range of Kn numbers. Some simulation results for the closed lid-driven cavity flow, thermal transpiration flow and open channel flow obtained by the DSBGK method are given here to show its efficiency and numerical stability. In closed problems, the density distribution is subject to unphysical fluctuation due to the absence of a density constraint at the boundary. Thus, many simulated molecules are employed in DSBGK simulations to improve the stability and reduce the magnitude of the fluctuation. This increases the memory usage remarkably but has little influence on the efficiency of DSBGK simulations. In open problems, the DSBGK simulation remains stable when using about 10 simulated molecules per cell, because the fixed number densities at the open boundaries eliminate the unphysical fluctuation. A small modification to the CLL reflection model is introduced to further improve the efficiency slightly.

  8. Efficiency and stability of the DSBGK method

    KAUST Repository

    Li, Jun

    2012-01-01

    Recently, the DSBGK method (Note: the original name DS-BGK is changed to DSBGK for simplicity) was proposed to reduce the stochastic noise in simulating rarefied gas flows at low velocity. Its total computational time is almost independent of the magnitude of deviation from the equilibrium state. It was verified against the DSMC method in different benchmark problems over a wide range of Knudsen (Kn) numbers. Simulation results for the closed lid-driven cavity flow, thermal transpiration flow and the open channel flow obtained by the DSBGK method are given here to show its efficiency and numerical stability. In closed problems, the density distribution is subject to unphysical fluctuation due to the absence of a density constraint at the boundary. Thus, many simulated molecules are employed in DSBGK simulations to improve the stability and reduce the magnitude of fluctuation. This increases the memory usage remarkably but has little influence on the efficiency of DSBGK simulations. In open problems, the DSBGK simulation remains stable when using about 10 simulated molecules per cell because the fixed number densities at open boundaries eliminate the unphysical fluctuation. A small modification to the CLL reflection model is introduced to further improve the efficiency slightly.

  9. First Swiss building and urban simulation conference. Conference proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Zweifel, G.; Citherlet, S.; Afjei, T.; Pahud, D.; Robinson, D.; Schaelin, A.

    2010-07-01

    These contributions, presented at a conference held in 2009 in Horw, near Lucerne, Switzerland, deal with the simulation of buildings and their technical services. The contributions were organised in three blocks: thermal and heating, ventilation and air-conditioning (HVAC) simulation; airflow and stochastic modelling; and urban simulation. In the thermal and HVAC simulation session, the potential and limitations of building energy performance simulation are examined from an engineering perspective, a parametric study of an air heat exchanger for the cooling of buildings is presented and a comparison of measured and estimated electric energy use and the impact of assumed occupancy patterns is made. Contributions on standard solutions for energy efficient heating and cooling with heat pumps, the validation and certification of dynamic building simulation tools, standards and tools for the energy performance of buildings with a simple chiller model and the system-simulation of a central solar heating plant with seasonal duct storage in Geneva, Switzerland, are presented. In the airflow and stochastic modelling session, the optimisation of air flow in operating theatres is examined, and air-flow phenomena in flats are explained with illustrations of computational fluid dynamics (CFD). Also, the comparison of test reference years to stochastically generated time series and a comprehensive stochastic model of window usage are discussed. Contributions on the simulation of air-flow patterns and wind loads on facades and the choice of appropriate simulation techniques for the thermal analysis of double skin facades complete the session. In the final urban simulation session, a new CFD approach for urban flow and pollution dispersion simulation is presented, and a comprehensive micro-simulation of resource flows for sustainable urban planning, multi-scale modelling of the urban climate and the optimisation of urban energy demands using an evolutionary algorithm are discussed.

  10. Stochastic models for atmospheric dispersion

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2003-01-01

    Simple stochastic differential equation models have been applied by several researchers to describe the dispersion of tracer particles in the planetary atmospheric boundary layer and to form the basis for computer simulations of particle paths. To obtain the drift coefficient, empirical vertical...... positions close to the boundaries. Different rules have been suggested in the literature with justifications based on simulation studies. Herein the relevant stochastic differential equation model is formulated in a particular way. The formulation is based on the marginal transformation of the position...... velocity distributions that depend on height above the ground both with respect to standard deviation and skewness are substituted into the stationary Fokker-Planck equation. The particle position distribution is taken to be uniform (the well-mixed condition) and also a given dispersion coefficient...
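
    The class of model described above can be sketched with a zeroth-order random-displacement scheme: constant diffusivity and simple reflection at the boundaries, far simpler than the record's skewed, height-dependent model, but enough to check the well-mixed condition numerically (an initially uniform particle cloud should stay uniform). All parameters below are illustrative assumptions.

```python
import random

def disperse(n_particles=2000, n_steps=500, dt=0.1, K=1.0, zmax=10.0, seed=5):
    """Random-displacement dispersion: dz = sqrt(2*K*dt)*xi, with reflection
    at the ground (z = 0) and the boundary-layer top (z = zmax)."""
    rng = random.Random(seed)
    z = [rng.uniform(0.0, zmax) for _ in range(n_particles)]  # well-mixed start
    sig = (2.0 * K * dt) ** 0.5
    for _ in range(n_steps):
        for i in range(n_particles):
            zi = z[i] + rng.gauss(0.0, sig)
            if zi < 0.0:            # reflect at the ground
                zi = -zi
            elif zi > zmax:         # reflect at the top
                zi = 2.0 * zmax - zi
            z[i] = zi
    return z

z = disperse()
lower = sum(1 for v in z if v < 5.0) / len(z)  # fraction in the lower half
```

    With constant K the uniform distribution is preserved, so `lower` stays near 0.5; a height-dependent K would require the drift-correction term discussed in the record to avoid unphysical accumulation.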

  11. An inexact fuzzy two-stage stochastic model for quantifying the efficiency of nonpoint source effluent trading under uncertainty

    International Nuclear Information System (INIS)

    Luo, B.; Maqsood, I.; Huang, G.H.; Yin, Y.Y.; Han, D.J.

    2005-01-01

    Reduction of nonpoint source (NPS) pollution from agricultural lands is a major concern in most countries. One method to reduce NPS pollution is through land retirement programs. This method, however, may result in enormous economic costs especially when large sums of croplands need to be retired. To reduce the cost, effluent trading can be employed to couple with land retirement programs. However, the trading efforts can also become inefficient due to various uncertainties existing in stochastic, interval, and fuzzy formats in agricultural systems. Thus, it is desired to develop improved methods to effectively quantify the efficiency of potential trading efforts by considering those uncertainties. In this respect, this paper presents an inexact fuzzy two-stage stochastic programming model to tackle such problems. The proposed model can facilitate decision-making to implement trading efforts for agricultural NPS pollution reduction through land retirement programs. The applicability of the model is demonstrated through a hypothetical effluent trading program within a subcatchment of the Lake Tai Basin in China. The study results indicate that the efficiency of the trading program is significantly influenced by precipitation amount, agricultural activities, and level of discharge limits of pollutants. The results also show that the trading program will be more effective for low precipitation years and with stricter discharge limits

  12. Evaluating Kuala Lumpur stock exchange oriented bank performance with stochastic frontiers

    International Nuclear Information System (INIS)

    Baten, M. A.; Maznah, M. K.; Razamin, R.; Jastini, M. J.

    2014-01-01

    Banks play an essential role in economic development and banks need to be efficient; otherwise, they may create a blockage in the process of development in any country. The efficiency of banks in Malaysia is important and should receive greater attention. This study formulated an appropriate stochastic frontier model to investigate the efficiency of banks traded on the Kuala Lumpur Stock Exchange (KLSE) during the period 2005–2009. The data were analyzed by the maximum likelihood method to estimate the parameters of the stochastic production frontier. Unlike earlier studies, which use balance sheet and income statement data, this study used market data as the input and output variables. It was observed that banks listed on the KLSE exhibited a commendable overall efficiency level of 96.2% during 2005–2009, suggesting minimal input waste of 3.8%. Among the banks, COMS (Cimb Group Holdings) is found to be highly efficient with a score of 0.9715 and BIMB (Bimb Holdings) is noted to have the lowest efficiency with a score of 0.9582. The results also show that the Cobb-Douglas stochastic frontier model with the truncated normal distributional assumption is preferable to the Translog stochastic frontier model

  13. Evaluating Kuala Lumpur stock exchange oriented bank performance with stochastic frontiers

    Energy Technology Data Exchange (ETDEWEB)

    Baten, M. A.; Maznah, M. K.; Razamin, R.; Jastini, M. J. [School of Quantitative Sciences, Universiti Utara Malaysia 06010, Sintok, Kedah (Malaysia)

    2014-12-04

    Banks play an essential role in economic development and banks need to be efficient; otherwise, they may create a blockage in the process of development in any country. The efficiency of banks in Malaysia is important and should receive greater attention. This study formulated an appropriate stochastic frontier model to investigate the efficiency of banks traded on the Kuala Lumpur Stock Exchange (KLSE) during the period 2005–2009. The data were analyzed by the maximum likelihood method to estimate the parameters of the stochastic production frontier. Unlike earlier studies, which use balance sheet and income statement data, this study used market data as the input and output variables. It was observed that banks listed on the KLSE exhibited a commendable overall efficiency level of 96.2% during 2005–2009, suggesting minimal input waste of 3.8%. Among the banks, COMS (Cimb Group Holdings) is found to be highly efficient with a score of 0.9715 and BIMB (Bimb Holdings) is noted to have the lowest efficiency with a score of 0.9582. The results also show that the Cobb-Douglas stochastic frontier model with the truncated normal distributional assumption is preferable to the Translog stochastic frontier model.

  14. Guidelines for the formulation of Lagrangian stochastic models for particle simulations of single-phase and dispersed two-phase turbulent flows

    Science.gov (United States)

    Minier, Jean-Pierre; Chibbaro, Sergio; Pope, Stephen B.

    2014-11-01

    In this paper, we establish a set of criteria which are applied to discuss various formulations under which Lagrangian stochastic models can be found. These models are used for the simulation of fluid particles in single-phase turbulence as well as for the fluid seen by discrete particles in dispersed turbulent two-phase flows. The purpose of the present work is to provide guidelines, useful for experts and non-experts alike, which are shown to be helpful to clarify issues related to the form of Lagrangian stochastic models. A central issue is to put forward reliable requirements which must be met by Lagrangian stochastic models and a new element brought by the present analysis is to address the single- and two-phase flow situations from a unified point of view. For that purpose, we consider first the single-phase flow case and check whether models are fully consistent with the structure of the Reynolds-stress models. In the two-phase flow situation, coming up with clear-cut criteria is more difficult and the present choice is to require that the single-phase situation be well-retrieved in the fluid-limit case, elementary predictive abilities be respected and that some simple statistical features of homogeneous fluid turbulence be correctly reproduced. This analysis does not address the question of the relative predictive capacities of different models but concentrates on their formulation since advantages and disadvantages of different formulations are not always clear. Indeed, hidden in the changes from one structure to another are some possible pitfalls which can lead to flaws in the construction of practical models and to physically unsound numerical calculations. A first interest of the present approach is illustrated by considering some models proposed in the literature and by showing that these criteria help to assess whether these Lagrangian stochastic models can be regarded as acceptable descriptions. A second interest is to indicate how future

  15. Guidelines for the formulation of Lagrangian stochastic models for particle simulations of single-phase and dispersed two-phase turbulent flows

    International Nuclear Information System (INIS)

    Minier, Jean-Pierre; Chibbaro, Sergio; Pope, Stephen B.

    2014-01-01

    In this paper, we establish a set of criteria which are applied to discuss various formulations under which Lagrangian stochastic models can be found. These models are used for the simulation of fluid particles in single-phase turbulence as well as for the fluid seen by discrete particles in dispersed turbulent two-phase flows. The purpose of the present work is to provide guidelines, useful for experts and non-experts alike, which are shown to be helpful to clarify issues related to the form of Lagrangian stochastic models. A central issue is to put forward reliable requirements which must be met by Lagrangian stochastic models and a new element brought by the present analysis is to address the single- and two-phase flow situations from a unified point of view. For that purpose, we consider first the single-phase flow case and check whether models are fully consistent with the structure of the Reynolds-stress models. In the two-phase flow situation, coming up with clear-cut criteria is more difficult and the present choice is to require that the single-phase situation be well-retrieved in the fluid-limit case, elementary predictive abilities be respected and that some simple statistical features of homogeneous fluid turbulence be correctly reproduced. This analysis does not address the question of the relative predictive capacities of different models but concentrates on their formulation since advantages and disadvantages of different formulations are not always clear. Indeed, hidden in the changes from one structure to another are some possible pitfalls which can lead to flaws in the construction of practical models and to physically unsound numerical calculations. A first interest of the present approach is illustrated by considering some models proposed in the literature and by showing that these criteria help to assess whether these Lagrangian stochastic models can be regarded as acceptable descriptions. A second interest is to indicate how future

  16. Adaptive Synchronization for Two Different Stochastic Chaotic Systems with Unknown Parameters via a Sliding Mode Controller

    Directory of Open Access Journals (Sweden)

    Zengyun Wang

    2013-01-01

    Full Text Available This paper investigates the problem of synchronization for two different stochastic chaotic systems with unknown parameters and uncertain terms. The main work of this paper consists of the following aspects. Firstly, based on the Lyapunov theory in stochastic differential equations and the theory of sliding mode control, we propose a simple sliding surface and discuss the occurrence of the sliding motion. Secondly, we design an adaptive sliding mode controller to realize asymptotical synchronization in mean square. Thirdly, we design an adaptive sliding mode controller to realize almost sure synchronization. Finally, the designed adaptive sliding mode controllers are used to achieve synchronization between two pairs of different stochastic chaotic systems (Lorenz-Chen and Chen-Lu) in the presence of the uncertainties and unknown parameters. Numerical simulations are given to demonstrate the robustness and efficiency of the proposed robust adaptive sliding mode controller.

  17. Determining the energy performance of manually controlled solar shades: A stochastic model based co-simulation analysis

    International Nuclear Information System (INIS)

    Yao, Jian

    2014-01-01

    Highlights: • Driving factor for adjustment of manually controlled solar shades was determined. • A stochastic model for manual solar shades was constructed using the Markov method. • Co-simulation with EnergyPlus was carried out in BCVTB. • External shading, even when manually controlled, should be considered before LOW-E windows. • Previous studies on manual solar shades may overestimate energy savings. - Abstract: Solar shading devices play a significant role in reducing building energy consumption and maintaining a comfortable indoor environment. In this paper, a typical office building with internal roller shades in the hot summer and cold winter zone was selected to determine the driving factor of the control behavior of manual solar shades. Solar radiation was determined to be the major factor driving solar shading adjustment, based on field measurements and logit analysis, and a stochastic model for manually adjusted solar shades was then constructed using the Markov method. This model was used in BCVTB for further co-simulation with EnergyPlus to determine the impact of the control behavior of solar shades on energy performance. The results show that manually adjusted solar shades, whether located inside or outside, have a relatively high energy-saving performance compared with clear-pane windows, while only external shades perform better than regularly used LOW-E windows. Simulation also indicates that using an idealized assumption of solar shade adjustment, as most building simulation studies do, may lead to an overestimation of energy savings by about 16–30%. There is a need to improve occupants’ actions on shades to more effectively respond to outdoor conditions in order to lower energy consumption, and this improvement can be easily achieved by using simple strategies as a guide to control manual solar shades

  18. Stochastic plasma heating by electrostatic waves: a comparison between a particle-in-cell simulation and a laboratory experiment

    International Nuclear Information System (INIS)

    Fivaz, M.; Fasoli, A.; Appert, K.; Trans, T.M.; Tran, M.Q.; Skiff, F.

    1993-08-01

    Dynamical chaos is produced by the interaction between plasma particles and two electrostatic waves. Experiments performed in a linear magnetized plasma and a 1D particle-in-cell simulation agree qualitatively: above a threshold wave amplitude, ion stochastic diffusion and heating occur on a fast time scale. Self-consistency appears to limit the extent of the heating process. (author) 5 figs., 18 refs

  19. QB1 - Stochastic Gene Regulation

    Energy Technology Data Exchange (ETDEWEB)

    Munsky, Brian [Los Alamos National Laboratory

    2012-07-23

    Summaries of this presentation are: (1) Stochastic fluctuations or 'noise' are present in the cell - Random motion and competition between reactants, Low copy number, quantization of reactants, Upstream processes; (2) Fluctuations may be very important - Cell-to-cell variability, Cell fate decisions (switches), Signal amplification or damping, stochastic resonances; and (3) Some tools are available to model these - Kinetic Monte Carlo simulations (SSA and variants), Moment approximation methods, Finite State Projection. We will see how modeling these reactions can tell us more about the underlying processes of gene regulation.
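
    A minimal sketch of the kinetic Monte Carlo (SSA) approach named in point (3), for an illustrative birth-death gene expression model (production at rate k, degradation at rate g·x; the parameters are assumptions, not values from the presentation):

```python
import random

def ssa_birth_death(k=10.0, g=1.0, x0=0, t_end=50.0, seed=1):
    """Gillespie SSA for 0 -> X (rate k) and X -> 0 (rate g*x)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while t < t_end:
        a1, a2 = k, g * x           # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)    # exponential waiting time to the next event
        if rng.random() * a0 < a1:  # pick which reaction fires
            x += 1                  # production
        else:
            x -= 1                  # degradation
    return x

# The stationary copy number fluctuates (Poisson-like) around k/g = 10,
# illustrating the low-copy-number 'noise' the talk refers to.
samples = [ssa_birth_death(seed=s) for s in range(200)]
mean_x = sum(samples) / len(samples)
```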

  20. On stochastic error and computational efficiency of the Markov Chain Monte Carlo method

    KAUST Repository

    Li, Jun

    2014-01-01

    In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by ensemble averages over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance with the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
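
    The sample-size/sampling-interval trade-off can be illustrated with a toy correlated chain (an AR(1) process standing in for MCMC output; all parameters are illustrative): for the same number of retained samples, a larger sampling interval decorrelates them and shrinks the variance of the ensemble average.

```python
import random

def ar1_chain(n, rho=0.95, seed=0):
    """Correlated sample chain: x[t] = rho*x[t-1] + N(0,1)."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def var_of_mean(interval, n_means=300, sample_size=200):
    """Empirical variance of the ensemble average when keeping every
    `interval`-th sample (sample_size retained samples in all cases)."""
    means = []
    for s in range(n_means):
        chain = ar1_chain(sample_size * interval, seed=s)
        thinned = chain[::interval]
        means.append(sum(thinned) / len(thinned))
    return sum(m * m for m in means) / n_means  # true mean is 0

v1, v10 = var_of_mean(1), var_of_mean(10)  # interval 1 vs interval 10
```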

  1. Stochastic Optimal Dispatch of Virtual Power Plant considering Correlation of Distributed Generations

    Directory of Open Access Journals (Sweden)

    Jie Yu

    2015-01-01

    Full Text Available A virtual power plant (VPP) is an aggregation of multiple distributed generations, energy storage, and controllable loads. Affected by natural conditions, the uncontrollable distributed generations within a VPP, such as wind and photovoltaic generation, are highly random and correlated. Considering this randomness and its correlation, this paper constructs a chance-constrained stochastic optimal dispatch model of a VPP that includes stochastic variables and their random correlation. The probability distributions of independent wind and photovoltaic generation are described by empirical distribution functions, and their joint probability density model is established using a Frank copula function. Sample average approximation (SAA) is then applied to convert the chance-constrained stochastic optimization model into a deterministic optimization model. Simulation cases are calculated in AIMMS. Simulation results of the proposed model are compared with the results of a deterministic optimization model without stochastic variables and of a stochastic optimization considering stochastic variables but not their random correlation. Furthermore, this paper analyzes how the SAA sampling frequency and the confidence level influence the results of the stochastic optimization. The numerical example results show the effectiveness of the stochastic optimal dispatch of a VPP considering the randomness and correlations of distributed generations.
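
    The SAA treatment of a chance constraint can be sketched as follows: a constraint of the form P(generation ≥ x) ≥ confidence is replaced by its empirical counterpart over sampled scenarios, which in this one-dimensional case reduces to an empirical quantile. This is a toy stand-in for the paper's copula-based dispatch model; the wind scenarios below are illustrative.

```python
import random

def saa_commitment(samples, confidence=0.9):
    """Largest commitment x with P(generation >= x) >= confidence under SAA:
    at most a (1 - confidence) fraction of scenarios may violate it,
    i.e. x is the (1 - confidence) empirical quantile of the samples."""
    srt = sorted(samples)
    k = int((1.0 - confidence) * len(srt))  # scenarios allowed to violate
    return srt[k]

rng = random.Random(42)
wind = [max(0.0, rng.gauss(50.0, 15.0)) for _ in range(2000)]  # MW scenarios

x90 = saa_commitment(wind, confidence=0.90)
x99 = saa_commitment(wind, confidence=0.99)  # stricter -> lower commitment
```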

  2. StochPy: A Comprehensive, User-Friendly Tool for Simulating Stochastic Biological Processes

    NARCIS (Netherlands)

    T.R. Maarleveld (Timo); B.G. Olivier (Brett); F.J. Bruggeman (Frank)

    2013-01-01

    Single-cell and single-molecule measurements indicate the importance of stochastic phenomena in cell biology. Stochasticity creates spontaneous differences in the copy numbers of key macromolecules and the timing of reaction events between genetically-identical cells. Mathematical models

  3. Numerical studies of the stochastic Korteweg-de Vries equation

    International Nuclear Information System (INIS)

    Lin Guang; Grinberg, Leopold; Karniadakis, George Em

    2006-01-01

    We present numerical solutions of the stochastic Korteweg-de Vries equation for three cases corresponding to additive time-dependent noise, multiplicative space-dependent noise and a combination of the two. We employ polynomial chaos for discretization in random space, and discontinuous Galerkin and finite difference for discretization in physical space. The accuracy of the stochastic solutions is investigated by comparing the first two moments against analytical and Monte Carlo simulation results. Of particular interest is the interplay of spatial discretization error with the stochastic approximation error, which is examined for different orders of spatial and stochastic approximation
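
    The kind of moment check described above, comparing a sampled stochastic solution against an analytical moment, can be illustrated on a much simpler random-coefficient ODE (a toy model, not the stochastic KdV equation itself): du/dt = -ξu with u(0) = 1 and ξ ~ N(μ, σ²), whose exact mean E[u(t)] = exp(-μt + σ²t²/2) follows from the lognormal moment formula.

```python
import math
import random

def mc_mean(t=1.0, mu=1.0, sigma=0.3, n=20000, seed=2):
    """Monte Carlo estimate of E[u(t)] for du/dt = -xi*u, u(0) = 1,
    with random decay rate xi ~ N(mu, sigma^2): u(t) = exp(-xi*t)."""
    rng = random.Random(seed)
    return sum(math.exp(-rng.gauss(mu, sigma) * t) for _ in range(n)) / n

# Analytical first moment: E[exp(-xi*t)] = exp(-mu*t + sigma^2*t^2/2)
exact = math.exp(-1.0 * 1.0 + 0.3 ** 2 * 1.0 ** 2 / 2.0)
est = mc_mean()
```

    The same comparison applied to a polynomial-chaos solution would expose how the spatial discretization error interacts with the stochastic approximation error, as the record discusses.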

  4. Stochastic light-cone CTMRG: a new DMRG approach to stochastic models

    CERN Document Server

    Kemper, A; Nishino, T; Schadschneider, A; Zittartz, J

    2003-01-01

    We develop a new variant of the recently introduced stochastic transfer-matrix DMRG which we call stochastic light-cone corner-transfer-matrix DMRG (LCTMRG). It is a numerical method to compute dynamic properties of one-dimensional stochastic processes. As suggested by its name, the LCTMRG is a modification of the corner-transfer-matrix DMRG, adjusted by an additional causality argument. As an example, two reaction-diffusion models, the diffusion-annihilation process and the branch-fusion process, are studied and compared with exact data and Monte Carlo simulations to estimate the capability and accuracy of the new method. The number of possible Trotter steps of more than 10^5 shows a considerable improvement over the old stochastic TMRG algorithm.

  5. A higher-order numerical framework for stochastic simulation of chemical reaction systems.

    KAUST Repository

    Székely, Tamás; Burrage, Kevin; Erban, Radek; Zygalakis, Konstantinos C

    2012-01-01

    , to demonstrate the power of stochastic extrapolation. The extrapolation framework can increase the order of convergence of any fixed-step discrete stochastic solver and is very easy to implement; the only condition for its use is knowledge of the appropriate

  6. Efficiency model of Russian banks

    OpenAIRE

    Pavlyuk, Dmitry

    2006-01-01

    The article deals with problems related to the stochastic frontier model of bank efficiency measurement. The model is used to study the efficiency of the banking sector of The Russian Federation. It is based on the stochastic approach both to the efficiency frontier location and to individual bank efficiency values. The model allows estimating bank efficiency values, finding relations with different macro- and microeconomic factors and testing some economic hypotheses.

  7. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
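
    A gradient-based stochastic extremum seeking step can be sketched with a two-point, SPSA-style estimator: probe the unknown map with a random dither and correlate the measured outputs with the dither to estimate the gradient, without ever differentiating the map. This is a simplified discrete-time variant, not one of the book's continuous-time algorithms; the quadratic map and noise level are assumptions.

```python
import random

def extremum_seek(measure, theta0, a=0.2, gain=0.05, steps=2000, seed=0):
    """Model-free maximization: random +/- dither, two-point gradient
    estimate, gradient-ascent update on the parameter theta."""
    rng = random.Random(seed)
    theta = theta0
    for _ in range(steps):
        eta = rng.choice((-1.0, 1.0))  # stochastic dither signal
        g = (measure(theta + a * eta) - measure(theta - a * eta)) / (2 * a * eta)
        theta += gain * g              # ascend the estimated gradient
    return theta

# Unknown map with its maximum at theta = 2, observed with measurement noise
rng = random.Random(99)
def noisy_map(theta):
    return -(theta - 2.0) ** 2 + rng.gauss(0.0, 0.05)

theta_final = extremum_seek(noisy_map, theta0=5.0)
```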

  8. Approximate method for stochastic chemical kinetics with two-time scales by chemical Langevin equations

    International Nuclear Information System (INIS)

    Wu, Fuke; Tian, Tianhai; Rawlings, James B.; Yin, George

    2016-01-01

    The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two time scales, which yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions. Consequently, the SSA is still computationally expensive. Because the chemical Langevin equations (CLEs) can effectively work for a large number of molecular species and reactions, this paper develops a reduction method based on the CLE by the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in stochastic chemical kinetics the CLE is seen as an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. It demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.
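
    A CLE itself can be integrated with a simple Euler-Maruyama scheme. A sketch for an illustrative birth-death system (not the gene-network example of the paper): each reaction channel contributes a drift term equal to its propensity and an independent noise term scaled by the square root of that propensity.

```python
import math
import random

def cle_birth_death(k=100.0, g=1.0, x0=100.0, t_end=20.0, dt=0.01, seed=3):
    """Euler-Maruyama integration of the CLE for 0 -> X (rate k) and
    X -> 0 (rate g*x): dX = (a1 - a2)dt + sqrt(a1)dW1 - sqrt(a2)dW2."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    sqdt = math.sqrt(dt)
    while t < t_end:
        a1, a2 = k, g * max(x, 0.0)  # propensities (clipped to stay real)
        x += (a1 - a2) * dt \
             + math.sqrt(a1) * sqdt * rng.gauss(0.0, 1.0) \
             - math.sqrt(a2) * sqdt * rng.gauss(0.0, 1.0)
        t += dt
    return x

# Sample average over independent runs stays near the SSA mean k/g = 100
vals = [cle_birth_death(seed=s) for s in range(100)]
mean_x = sum(vals) / len(vals)
```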

  9. Stochastic Frontier Approach and Data Envelopment Analysis to Total Factor Productivity and Efficiency Measurement of Bangladeshi Rice

    Science.gov (United States)

    Hossain, Md. Kamrul; Kamil, Anton Abdulbasah; Baten, Md. Azizul; Mustafa, Adli

    2012-01-01

    The objective of this paper is to apply the Translog Stochastic Frontier production model (SFA) and Data Envelopment Analysis (DEA) to estimate efficiencies over time and the Total Factor Productivity (TFP) growth rate for Bangladeshi rice crops (Aus, Aman and Boro) using the most recent data available, covering the period 1989–2008. Results indicate that technical efficiency was highest for Boro among the three types of rice, but the overall technical efficiency of rice production was found to be around 50%. Although positive changes exist in TFP for the sample analyzed, the average growth rate of TFP for rice production was estimated at almost the same levels for both Translog SFA with half-normal distribution and DEA. Estimated TFP from SFA is forecasted with an ARIMA (2, 0, 0) model. An ARIMA (1, 0, 0) model is used to forecast TFP of Aman from the DEA estimation. PMID:23077500

  10. Modeling Stochastic Energy and Water Consumption to Manage Residential Water Uses

    Science.gov (United States)

    Abdallah, A. M.; Rosenberg, D. E.; Water; Energy Conservation

    2011-12-01

    Water-energy linkages have received growing attention from water and energy utilities as they recognize that collaborative efforts can implement more effective conservation and efficiency-improvement programs at lower cost with less effort. To date, limited energy-water household data has allowed only deterministic analysis for average, representative households and required coarse assumptions, such as treating the water heater (the primary energy use in a home apart from heating and cooling) as a single end use. Here, we use recently available disaggregated hot- and cold-water household end-use data to estimate water and energy consumption for toilet, shower, faucet, dishwasher, laundry machine, leaks, and other household uses, and the savings from appliance retrofits. The disaggregated hot-water and bulk-water end-use data was previously collected by the USEPA for 96 single-family households in Seattle, WA; Oakland, CA; and Tampa, FL between 2000 and 2003, for two weeks before and four weeks after each household was retrofitted with water-efficient appliances. Using the disaggregated data, we developed a stochastic model that represents factors that influence water use for each appliance: behavioral (use frequency and duration), demographical (household size), and technological (use volume or flowrate). We also include stochastic factors that govern the energy to heat hot water: hot-water fraction (percentage of hot-water volume to total water volume used in a certain end-use event), heater water intake and dispense temperatures, and the energy source for the heater (gas, electric, etc.). From the empirical household end-use data, we derive stochastic probability distributions for each water and energy factor, where each distribution represents the range and likelihood of values that the factor may take.
The uncertainty of the stochastic water and energy factors is propagated using Monte Carlo simulations to calculate the composite probability distribution for water
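
    The Monte Carlo propagation step can be sketched as follows for a single end use (a shower event). Every distribution below is an illustrative assumption, not one of the fitted distributions from the EPA dataset; the heating energy is volume × hot-water fraction × volumetric heat capacity × temperature rise.

```python
import random

RHO_C = 4.186e3 * 1.0e3  # J/(m^3*K): specific heat (4.186 kJ/kg.K) * density (1000 kg/m^3)

def shower_energy_kwh(rng):
    """One Monte Carlo draw of the energy needed to heat the water of a
    shower event. All distributions are illustrative assumptions."""
    volume_m3 = max(0.005, rng.gauss(0.060, 0.020))      # total water used
    hot_frac = min(1.0, max(0.0, rng.gauss(0.7, 0.1)))   # hot-water fraction
    t_in, t_set = rng.gauss(15.0, 3.0), 60.0             # intake and setpoint temp, deg C
    joules = volume_m3 * hot_frac * RHO_C * (t_set - t_in)
    return joules / 3.6e6                                # J -> kWh

rng = random.Random(7)
draws = [shower_energy_kwh(rng) for _ in range(5000)]    # composite distribution
mean_kwh = sum(draws) / len(draws)
```

    The list `draws` is an empirical composite distribution of event energy; summing draws across end uses and events per day would give the household-level distribution the record describes.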

  11. Portfolio management of hydropower producer via stochastic programming

    International Nuclear Information System (INIS)

    Liu, Hongling; Jiang, Chuanwen; Zhang, Yan

    2009-01-01

    This paper presents a stochastic linear programming framework for the hydropower portfolio management problem with uncertainty in market prices and inflows over the medium term. The uncertainty is modeled as a scenario tree using the Monte Carlo simulation method, and the objective is to maximize the expected revenue over the entire scenario tree. The portfolio decisions of the stochastic model are formulated as a tradeoff among the different scenarios. Numerical results illustrate the impact of uncertainty on the portfolio management decisions and indicate the significant value of the stochastic solution. (author)
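
    A toy version of the scenario idea, assuming a single reservoir, two stages, and a Monte Carlo fan of stage-two prices (all numbers illustrative, not the paper's model): the release schedule is chosen to maximize revenue averaged over the sampled scenarios.

```python
import random

def build_fan(rng, n_scenarios=100):
    # Stage-1 price is known; stage-2 prices are Monte Carlo samples.
    p1 = 40.0
    scenarios = [max(rng.gauss(45.0, 10.0), 0.0) for _ in range(n_scenarios)]
    return p1, scenarios

def expected_revenue(release1, p1, scenarios, storage=100.0):
    # Water not released in stage 1 is released in stage 2.
    release2 = storage - release1
    return release1 * p1 + sum(release2 * p2 for p2 in scenarios) / len(scenarios)

p1, scen = build_fan(random.Random(7))
best = max(range(0, 101, 10), key=lambda r: expected_revenue(float(r), p1, scen))
```

    Since the sampled stage-two prices average above the stage-one price, the expected-revenue criterion defers release, illustrating how the scenario set, rather than a single forecast, drives the decision.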

  12. Stochastic failure modelling of unidirectional composite ply failure

    International Nuclear Information System (INIS)

    Whiteside, M.B.; Pinho, S.T.; Robinson, P.

    2012-01-01

    Stochastic failure envelopes are generated through parallelised Monte Carlo simulation of a physically based failure criterion for unidirectional carbon fibre/epoxy matrix composite plies. Two examples are presented to demonstrate the consequences for failure prediction of both statistical interaction of failure modes and uncertainty in global misalignment. Global variance-based Sobol sensitivity indices are computed to decompose the observed variance within the stochastic failure envelopes into contributions from the physical input parameters. The paper highlights a selection of the potential advantages stochastic methodologies offer over the traditional deterministic approach.

  13. Extending Stochastic Network Calculus to Loss Analysis

    Directory of Open Access Journals (Sweden)

    Chao Luo

    2013-01-01

    Full Text Available Loss is an important parameter of Quality of Service (QoS. Though stochastic network calculus is a very useful tool for performance evaluation of computer networks, existing studies on stochastic service guarantees mainly focused on the delay and backlog. Some efforts have been made to analyse loss by deterministic network calculus, but there are few results to extend stochastic network calculus for loss analysis. In this paper, we introduce a new parameter named loss factor into stochastic network calculus and then derive the loss bound through the existing arrival curve and service curve via this parameter. We then prove that our result is suitable for the networks with multiple input flows. Simulations show the impact of buffer size, arrival traffic, and service on the loss factor.

  14. Stochastic Frontier Estimation of Efficient Learning in Video Games

    Science.gov (United States)

    Hamlen, Karla R.

    2012-01-01

    Stochastic Frontier Regression Analysis was used to investigate strategies and skills that are associated with the minimization of time required to achieve proficiency in video games among students in grades four and five. Students self-reported their video game play habits, including strategies and skills used to become good at the video games…

  15. Digital hardware implementation of a stochastic two-dimensional neuron model.

    Science.gov (United States)

    Grassia, F; Kohno, T; Levi, T

    2016-11-01

    This study explores the feasibility of stochastic neuron simulation in digital systems (FPGA), which realizes an implementation of a two-dimensional neuron model. The stochasticity is added by a source of current noise in the silicon neuron using an Ornstein-Uhlenbeck process. This approach uses digital computation to emulate individual neuron behavior using fixed point arithmetic operation. The neuron model's computations are performed in arithmetic pipelines. It was designed in VHDL language and simulated prior to mapping in the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the implementation of the silicon neuron more biologically plausible for future hybrid experiments.
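
    The Ornstein-Uhlenbeck noise source can be sketched with a simple Euler-Maruyama update (parameter values are illustrative, not those of the paper's fixed-point FPGA pipeline). With the drift written as -(I - mu)/tau and diffusion sigma*sqrt(2/tau), the stationary standard deviation of the current noise is sigma.

```python
import math
import random

def ou_path(rng, mu=0.0, tau=5.0, sigma=1.0, dt=0.01, n=200_000):
    # Euler-Maruyama for dI = -(I - mu)/tau dt + sigma*sqrt(2/tau) dW;
    # this parametrization gives a stationary standard deviation of sigma.
    i = mu
    path = []
    diff = sigma * math.sqrt(2.0 / tau)
    for _ in range(n):
        i += -(i - mu) / tau * dt + diff * rng.gauss(0.0, math.sqrt(dt))
        path.append(i)
    return path

path = ou_path(random.Random(0))
mean = sum(path) / len(path)
var = sum((x - mean) ** 2 for x in path) / len(path)
```

    The long-run sample mean and variance approach mu and sigma squared, which is what makes the process a convenient tunable noise source.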

  16. Development of stochastic indicator models of lithology, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Rautman, C.A.; Robey, T.H.

    1994-01-01

    Indicator geostatistical techniques have been used to produce a number of fully three-dimensional stochastic simulations of large-scale lithologic categories at the Yucca Mountain site. Each realization reproduces the available drill hole data used to condition the simulation. Information is propagated away from each point of observation in accordance with a mathematical model of spatial continuity inferred through soft data taken from published geologic cross sections. Variations among the simulated models collectively represent uncertainty in the lithology at unsampled locations. These stochastic models succeed in capturing many major features of the welded-nonwelded lithologic framework of Yucca Mountain. However, contacts between welded and nonwelded rock types in individual simulations appear more complex than field observation suggests, and a number of probable numerical artifacts exist in these models. Many of the apparent discrepancies between the simulated models and the general geology of Yucca Mountain represent characterization uncertainty and can be traced to the sparse site data used to condition the simulations. Several vertical stratigraphic columns have been extracted from the three-dimensional stochastic models for use in simplified total-system performance assessment exercises. Simple, manual adjustments are required to eliminate the more obvious simulation artifacts and to impose a secondary set of deterministic geologic features on the overall stratigraphic framework provided by the indicator models.

  17. A framework for stochastic simulation of distribution practices for hotel reservations

    Energy Technology Data Exchange (ETDEWEB)

    Halkos, George E.; Tsilika, Kyriaki D. [Laboratory of Operations Research, Department of Economics, University of Thessaly, Korai 43, 38 333, Volos (Greece)

    2015-03-10

    The focus of this study is primarily on the Greek hotel industry. The objective is to design and develop a framework for stochastic simulation of reservation requests, reservation arrivals, cancellations and hotel occupancy with a planning horizon of a tourist season. In the Greek hospitality industry there were two competing policies for the reservation planning process up to 2003: reservations coming directly from customers, and reservations managed through tour operator(s). Recently the Internet, along with other emerging technologies, has offered the potential to disrupt enduring distribution arrangements. The focus of the study is on the choice of distribution intermediaries. We present an empirical model of the hotel reservation planning process that makes use of a symbolic simulation (the Monte Carlo method), as requests for reservations, cancellations, and arrival rates are all sources of uncertainty. As a case study, we consider the problem of determining the optimal booking strategy for a medium-size hotel on Skiathos Island, Greece. Probability distributions and parameter estimates are derived from the available historical data and from suggestions made in the relevant literature. The results of this study may assist hotel managers in defining distribution strategies for hotel rooms and in evaluating the performance of the reservations management system.

  18. A framework for stochastic simulation of distribution practices for hotel reservations

    International Nuclear Information System (INIS)

    Halkos, George E.; Tsilika, Kyriaki D.

    2015-01-01

    The focus of this study is primarily on the Greek hotel industry. The objective is to design and develop a framework for stochastic simulation of reservation requests, reservation arrivals, cancellations and hotel occupancy with a planning horizon of a tourist season. In the Greek hospitality industry there were two competing policies for the reservation planning process up to 2003: reservations coming directly from customers, and reservations managed through tour operator(s). Recently the Internet, along with other emerging technologies, has offered the potential to disrupt enduring distribution arrangements. The focus of the study is on the choice of distribution intermediaries. We present an empirical model of the hotel reservation planning process that makes use of a symbolic simulation (the Monte Carlo method), as requests for reservations, cancellations, and arrival rates are all sources of uncertainty. As a case study, we consider the problem of determining the optimal booking strategy for a medium-size hotel on Skiathos Island, Greece. Probability distributions and parameter estimates are derived from the available historical data and from suggestions made in the relevant literature. The results of this study may assist hotel managers in defining distribution strategies for hotel rooms and in evaluating the performance of the reservations management system.

  19. STOCHASTIC MODEL OF THE SPIN DISTRIBUTION OF DARK MATTER HALOS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Juhan [Center for Advanced Computation, Korea Institute for Advanced Study, Heogiro 85, Seoul 130-722 (Korea, Republic of); Choi, Yun-Young [Department of Astronomy and Space Science, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of); Kim, Sungsoo S.; Lee, Jeong-Eun [School of Space Research, Kyung Hee University, Gyeonggi 446-701 (Korea, Republic of)

    2015-09-15

    We employ a stochastic approach to probing the origin of the log-normal distributions of halo spin in N-body simulations. After analyzing spin evolution in halo merging trees, we found that spin changes can be characterized as a stochastic random walk of angular momentum. Also, spin distributions generated by random walks are fairly consistent with those directly obtained from N-body simulations. We derived a stochastic differential equation from a widely used spin definition and measured the probability distributions of the derived angular momentum change from a massive set of halo merging trees. The roles of major merging and accretion in evolving spin distributions are also statistically analyzed. Several factors (local environment, halo mass, merging mass ratio, and redshift) are found to influence the angular momentum change. The spin distributions generated in the mean-field or void regions tend to shift slightly to higher spin values compared with simulated spin distributions, which seems to be caused by correlated random walks. We verified the assumption of randomness in the angular momentum change observed in the N-body simulation and detected several degrees of correlation between walks, which may provide a clue to the discrepancies between the simulated and generated spin distributions in the voids. However, the generated spin distributions in the group and cluster regions successfully match the simulated spin distributions. We also demonstrated that the log-normality of the spin distribution is a natural consequence of the stochastic differential equation for the halo spin, which is well described by the geometric Brownian motion model.
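
    The log-normal endpoint of a geometric Brownian motion can be checked with a short simulation. This is a generic illustration of the closing claim, not the paper's calibrated model; the drift, volatility, and initial spin value below are arbitrary. The exact update x <- x*exp((mu - s^2/2)*dt + s*dW) makes log x exactly Gaussian, with mean log(x0) + (mu - s^2/2)*T and variance s^2*T.

```python
import math
import random

def gbm_final(rng, x0=0.035, mu=0.0, s=0.5, dt=0.01, n_steps=100):
    # Exact log-space update: log x gains (mu - s^2/2) dt + s dW each step,
    # so the final value is exactly log-normal.
    x = x0
    for _ in range(n_steps):
        x *= math.exp((mu - 0.5 * s * s) * dt + s * rng.gauss(0.0, math.sqrt(dt)))
    return x

rng = random.Random(3)
logs = [math.log(gbm_final(rng)) for _ in range(5000)]
m = sum(logs) / len(logs)
v = sum((l - m) ** 2 for l in logs) / len(logs)
# theory (T = 1): m ~ log(0.035) - 0.125 ~ -3.477, v ~ s^2 * T = 0.25
```

    Histogramming log x (or the spin values themselves on a log axis) recovers the Gaussian, i.e. the log-normal spin distribution.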

  20. Rapid sampling of stochastic displacements in Brownian dynamics simulations with stresslet constraints

    Science.gov (United States)

    Fiore, Andrew M.; Swan, James W.

    2018-01-01

    equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue is used for benchmarking and made freely available in the supplementary material (ftp://ftp.aip.org/epaps/journ_chem_phys/E-JCPSA6-148-012805)

  1. Symplectic Integrators to Stochastic Hamiltonian Dynamical Systems Derived from Composition Methods

    Directory of Open Access Journals (Sweden)

    Tetsuya Misawa

    2010-01-01

    Full Text Available “Symplectic” schemes for stochastic Hamiltonian dynamical systems are formulated through “composition methods (or operator splitting methods)” proposed by Misawa (2001). In the proposed methods, a symplectic map, which is given by the solution of a stochastic Hamiltonian system, is approximated by composition of the stochastic flows derived from simpler Hamiltonian vector fields. The global error orders of the numerical schemes derived from the stochastic composition methods are provided. To examine the superiority of the new schemes, some illustrative numerical simulations on the basis of the proposed schemes are carried out for a stochastic harmonic oscillator system.

  2. Role of computational efficiency in process simulation

    Directory of Open Access Journals (Sweden)

    Kurt Strand

    1989-07-01

    Full Text Available It is demonstrated how efficient numerical algorithms may be combined to yield a powerful environment for analysing and simulating dynamic systems. The importance of using efficient numerical algorithms is emphasized and demonstrated through examples from the petrochemical industry.

  3. Stochastic resonance in bistable systems driven by harmonic noise

    International Nuclear Information System (INIS)

    Neiman, A.; Schimansky-Geier, L.

    1994-01-01

    We study stochastic resonance in a bistable system which is excited simultaneously by white and harmonic noise, the latter of which we interpret as the signal. In our case the spectral line of the signal has a finite width, as occurs in many real situations. Using techniques of cumulant analysis as well as computer simulations, we find that the effect of stochastic resonance is preserved in the case of harmonic noise excitation. Moreover, we show that the width of the spectral line of the signal at the output can be decreased via stochastic resonance. The latter could be of importance in practical applications of stochastic resonance.
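
    The classic setting in which stochastic resonance is demonstrated, an overdamped particle in a double-well potential with a weak periodic drive plus noise, can be sketched as below. Note this uses plain white noise and an explicitly periodic signal, whereas the paper drives the system with harmonic (finite-linewidth) noise, and all parameter values are illustrative. The subthreshold drive alone never carries the particle over the barrier; with noise added, interwell transitions appear.

```python
import math
import random

def simulate(noise_std, rng, a=0.3, w=0.1, dt=0.01, n=200_000):
    # Overdamped dynamics x' = x - x**3 + a*cos(w*t) + noise; count zero crossings
    # of x as a proxy for interwell transitions.
    x, t = -1.0, 0.0
    crossings, prev_sign = 0, -1
    sqdt = math.sqrt(dt)
    for _ in range(n):
        x += (x - x ** 3 + a * math.cos(w * t)) * dt \
             + noise_std * sqdt * rng.gauss(0.0, 1.0)
        t += dt
        sign = 1 if x > 0 else -1
        if sign != prev_sign:
            crossings += 1
            prev_sign = sign
    return crossings

rng = random.Random(2)
quiet = simulate(0.0, rng)    # subthreshold drive alone: no interwell transitions
noisy = simulate(0.7, rng)    # with noise, transitions appear
```

    In a full stochastic resonance study one would sweep the noise intensity and locate the value maximizing the output signal-to-noise ratio; this sketch only shows the noise-enabled switching that underlies the effect.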

  4. Complexity, rate of energy exchanges and stochasticity

    International Nuclear Information System (INIS)

    Casartelli, M.; Sello, S.

    1987-01-01

    The complexity of trajectories in the phase space of an anharmonic crystal (mostly a Lennard-Jones chain) is analysed by means of the variance of the microcanonical density and by new parameters P and chi, defined, respectively, as the mean value of the time averages and the relative variance of the absolute exchange rate of energies among the normal modes. Evidence is given of the trapping action of residual invariant surfaces in the low-stochasticity regime of motion. The parameter chi, moreover, proves efficient in exploring the border of stochasticity. A simple power law for P vs. the specific energy is obtained and proved to be independent of stochasticity and of the type of anharmonic potential.

  5. Design Of Combined Stochastic Feedforward/Feedback Control

    Science.gov (United States)

    Halyo, Nesim

    1989-01-01

    The methodology accommodates a variety of control structures and design techniques. In the methodology for combined stochastic feedforward/feedback control, the main objectives of the feedforward and feedback control laws are seen clearly. Error-integral feedback, dynamic compensation, a rate-command control structure, and the like are integral elements of the methodology. Another advantage of the methodology is the flexibility to develop a variety of techniques for the design of feedback control with arbitrary structures to obtain the feedback controller: these include stochastic output feedback, multiconfiguration control, decentralized control, and frequency-domain and classical control methods. Control modes of the system include capture and tracking of the localizer and glideslope, crab, decrab, and flare. By use of the recommended incremental implementation, the control laws were simulated on a digital computer and connected with a nonlinear digital simulation of the aircraft and its systems.

  6. Reflected stochastic differential equation models for constrained animal movement

    Science.gov (United States)

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path, and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
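
    The core reflection step of a reflected SDE can be sketched with a one-dimensional reflected Brownian motion kept above a barrier (a generic illustration, not the authors' movement model): after each Euler increment, any excursion past the barrier is folded back into the domain.

```python
import random

def reflected_path(rng, x0=1.0, sigma=1.0, dt=0.01, n=10_000, barrier=0.0):
    x = x0
    path = []
    for _ in range(n):
        x += sigma * rng.gauss(0.0, dt ** 0.5)   # unconstrained Euler increment
        if x < barrier:                           # fold the excursion back inside
            x = 2.0 * barrier - x
        path.append(x)
    return path

path = reflected_path(random.Random(5))
```

    The latent unconstrained increment plus the reflection map mirrors the paper's augmentation idea: simulate freely, then project back into the feasible region.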

  7. Simulation Based Data Reconciliation for Monitoring Power Plant Efficiency

    International Nuclear Information System (INIS)

    Park, Sang Jun; Heo, Gyun Young

    2010-01-01

    Power plant efficiency is analyzed by using measured values, mass/energy balance principles, and several correlations. Since the measured values can have uncertainty depending on the accuracy of the instrumentation, the results of plant efficiency analysis necessarily carry uncertainty. The uncertainty may arise from either the randomness or the malfunctions of a process. In order to improve the accuracy of efficiency analysis, data reconciliation (DR) is a good candidate because the mathematical algorithm of DR is based on first principles such as mass and energy balance, while accounting for the uncertainty of instrumentation. It should be noted that the mass and energy balance model for analyzing power plant efficiency is equivalent to a steady-state simulation of the plant system. Therefore DR for efficiency analysis necessitates a simulation which can deal with the uncertainty of instrumentation. This study proposes an algorithm for simulation-based DR which is applicable to power plant efficiency monitoring.
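
    In its simplest linear form, data reconciliation adjusts measurements m with covariance V so that the balance constraints A x = 0 hold, minimizing the variance-weighted squared adjustment; the closed form is x = m - V A^T (A V A^T)^-1 A m. A minimal sketch for a single mass balance around a splitter, with all numbers illustrative:

```python
# One splitter: flow0 = flow1 + flow2, i.e. A = [1, -1, -1], A x = 0.
m = [100.0, 60.0, 45.0]    # measurements (inconsistent: 100 != 60 + 45)
v = [4.0, 1.0, 1.0]        # measurement variances (diagonal V)
a = [1.0, -1.0, -1.0]

residual = sum(ai * mi for ai, mi in zip(a, m))    # A @ m  (here -5.0)
s = sum(ai * ai * vi for ai, vi in zip(a, v))      # A V A^T (scalar, one constraint)
# Reconciled values: less accurate measurements absorb more of the adjustment.
x = [mi - vi * ai * residual / s for mi, vi, ai in zip(m, v, a)]
```

    The reconciled flows satisfy the balance exactly, and the adjustment is apportioned in proportion to each instrument's variance, which is exactly what lets DR use instrumentation uncertainty.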

  8. A multi-stage stochastic transmission expansion planning method

    International Nuclear Information System (INIS)

    Akbari, Tohid; Rahimikian, Ashkan; Kazemi, Ahad

    2011-01-01

    Highlights: → We model a multi-stage stochastic transmission expansion planning problem. → We include available transfer capability (ATC) in our model. → Involving this criterion will increase the ATC between source and sink points. → Power system reliability will be increased and more money can be saved. - Abstract: This paper presents a multi-stage stochastic model for short-term transmission expansion planning considering the available transfer capability (ATC). The ATC can have a huge impact on power market outcomes and power system reliability. Transmission expansion planning (TEP) studies deal with many uncertainties, such as the system load uncertainties considered in this paper. The Monte Carlo simulation method has been applied to generate different scenarios, and a scenario reduction technique is used to reduce the number of scenarios. The objective is to minimize the sum of investment costs (IC) and expected operation costs (OC). The solution technique is based on the Benders decomposition algorithm. An N-1 contingency analysis is also performed for the TEP problem. The proposed model is applied to the IEEE 24-bus reliability test system, and the results are promising.

  9. Hybrid Semantics of Stochastic Programs with Dynamic Reconfiguration

    Directory of Open Access Journals (Sweden)

    Alberto Policriti

    2009-10-01

    Full Text Available We begin by reviewing a technique to approximate the dynamics of stochastic programs --written in a stochastic process algebra-- by a hybrid system, suitable to capture a mixed discrete/continuous evolution. In a nutshell, the discrete dynamics is kept stochastic while the continuous evolution is given in terms of ODEs, and the overall technique, therefore, naturally associates a Piecewise Deterministic Markov Process with a stochastic program. The specific contribution in this work consists in an increase of the flexibility of the translation scheme, obtained by allowing a dynamic reconfiguration of the degree of discreteness/continuity of the semantics. We also discuss the relationships of this approach with other hybrid simulation strategies for biochemical systems.

  10. Dynamic and stochastic multi-project planning

    CERN Document Server

    Melchiors, Philipp

    2015-01-01

    This book deals with dynamic and stochastic methods for multi-project planning. Based on the idea of using queueing networks for the analysis of dynamic-stochastic multi-project environments, this book addresses two problems: detailed scheduling of project activities, and integrated order acceptance and capacity planning. In an extensive simulation study, the book thoroughly investigates existing scheduling policies. To obtain optimal and near-optimal scheduling policies, new models and algorithms are proposed based on the theory of Markov decision processes and approximate dynamic programming.

  11. A continuous stochastic model for non-equilibrium dense gases

    Science.gov (United States)

    Sadr, M.; Gorji, M. H.

    2017-12-01

    While accurate simulations of dense gas flows far from the equilibrium can be achieved by direct simulation adapted to the Enskog equation, the significant computational demand required for collisions appears as a major constraint. In order to cope with that, an efficient yet accurate solution algorithm based on the Fokker-Planck approximation of the Enskog equation is devised in this paper; the approximation is very much associated with the Fokker-Planck model derived from the Boltzmann equation by Jenny et al. ["A solution algorithm for the fluid dynamic equations based on a stochastic model for molecular motion," J. Comput. Phys. 229, 1077-1098 (2010)] and Gorji et al. ["Fokker-Planck model for computational studies of monatomic rarefied gas flows," J. Fluid Mech. 680, 574-601 (2011)]. The idea behind these Fokker-Planck descriptions is to project the dynamics of discrete collisions implied by the molecular encounters into a set of continuous Markovian processes subject to the drift and diffusion. Thereby, the evolution of particles representing the governing stochastic process becomes independent from each other and thus very efficient numerical schemes can be constructed. By close inspection of the Enskog operator, it is observed that the dense gas effects contribute further to the advection of molecular quantities. That motivates a modelling approach where the dense gas corrections can be cast in the extra advection of particles. Therefore, the corresponding Fokker-Planck approximation is derived such that the evolution in the physical space accounts for the dense effects present in the pressure, stress tensor, and heat fluxes. Hence the consistency between the devised Fokker-Planck approximation and the Enskog operator is shown for the velocity moments up to the heat fluxes. For validation studies, a homogeneous gas inside a box besides Fourier, Couette, and lid-driven cavity flow setups is considered. The results based on the Fokker-Planck model are

  12. An Application of a Stochastic Semi-Continuous Simulation Method for Flood Frequency Analysis: A Case Study in Slovakia

    Science.gov (United States)

    Valent, Peter; Paquet, Emmanuel

    2017-09-01

    A reliable estimate of extreme flood characteristics has always been an active topic in hydrological research. Over the decades a large number of approaches and their modifications have been proposed and used, with methods utilizing continuous simulation of catchment runoff being the subject of the most intensive research in the last decade. In this paper a new and promising stochastic semi-continuous method is used to estimate extreme discharges in two mountainous Slovak catchments of the rivers Váh and Hron, in which snow-melt processes need to be taken into account. The SCHADEX method used here couples a precipitation probabilistic model with a rainfall-runoff model, which is used both to continuously simulate catchment hydrological conditions and to transform generated synthetic rainfall events into corresponding discharges. The stochastic nature of the method means that a wide range of synthetic rainfall events were simulated on various historical catchment conditions, taking into account not only the saturation of the soil but also the amount of snow accumulated in the catchment. The results showed that the SCHADEX extreme discharge estimates with return periods of up to 100 years were comparable to those estimated by statistical approaches. In addition, two reconstructed historical floods with corresponding return periods of 100 and 1000 years were compared to the SCHADEX estimates. The results confirmed the usability of the method for estimating design discharges with a recurrence interval of more than 100 years and its applicability in Slovak conditions.

  13. Multisite stochastic simulation of daily precipitation from copula modeling with a gamma marginal distribution

    Science.gov (United States)

    Lee, Taesam

    2018-05-01

    Multisite stochastic simulations of daily precipitation have been widely employed in hydrologic analyses for climate change assessment and as agricultural model inputs. Recently, a copula model with a gamma marginal distribution has become one of the common approaches for simulating precipitation at multiple sites. Here, we tested the correlation structure of the copula modeling. The results indicate that there is a significant underestimation of the correlation in the simulated data compared to the observed data. Therefore, we propose an indirect method for estimating the cross-correlations when simulating precipitation at multiple stations, using the full relationship between the correlation of the observed data and that of the normally transformed data. Although this indirect method offers certain improvements in preserving the cross-correlations between sites in the original domain, the method was not reliable in application. Therefore, we further improved a simulation-based method (SBM) that was developed to model multisite precipitation occurrence. The SBM preserved the cross-correlations of the original domain well, providing cross-correlations around 0.2 higher than the direct method and around 0.1 higher than the indirect method. The three models were applied to stations in the Nakdong River basin, and the SBM was the best alternative for reproducing the historical cross-correlations. The direct method significantly underestimates the correlations among the observed data, and the indirect method appeared to be unreliable.
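
    The attenuation described above can be reproduced with a generic Gaussian-copula sketch: correlate standard normals at the target level, then push each margin through a gamma inverse CDF. Everything here is illustrative rather than the paper's fitted model, including the shape parameter, the 0.8 normal-domain correlation, and the Wilson-Hilferty approximation used for the gamma quantile (the standard library has no exact one). The Pearson correlation recovered in the gamma domain comes out below the 0.8 imposed in the normal domain.

```python
import math
import random
import statistics

def gamma_quantile_wh(p, shape, scale=1.0):
    # Wilson-Hilferty approximation to the gamma inverse CDF (an assumption of
    # this sketch; adequate for moderate shape, clamped at zero for extreme p).
    z = statistics.NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * shape)
    return max(shape * scale * (1.0 - c + z * math.sqrt(c)) ** 3, 0.0)

def simulate_pair(rng, rho, shape=2.0, n=20_000):
    # Gaussian copula: correlate standard normals, then map each margin
    # through the gamma quantile function.
    nd = statistics.NormalDist()
    xs, ys = [], []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        xs.append(gamma_quantile_wh(nd.cdf(z1), shape))
        ys.append(gamma_quantile_wh(nd.cdf(z2), shape))
    return xs, ys

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / math.sqrt(vx * vy)

xs, ys = simulate_pair(random.Random(11), rho=0.8)
r = pearson(xs, ys)   # attenuated relative to the 0.8 imposed in the normal domain
```

    An indirect method of the kind the paper tests would invert this mapping: choose the normal-domain correlation so that the attenuated gamma-domain correlation matches the observed one.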

  14. On the neutron noise diagnostics of pressurized water reactor control rod vibrations II. Stochastic vibrations

    International Nuclear Information System (INIS)

    Pazsit, I.; Glockler, O.

    1984-01-01

    In an earlier publication, using the theory of neutron fluctuations induced by a vibrating control rod, a complete formal solution of rod vibration diagnostics based on neutron noise measurements was given in terms of Fourier-transformed neutron detector time signals. The suggested procedure was checked in numerical simulation tests in which only periodic vibrations could be considered. Here, the procedure and its numerical testing are elaborated for stochastic two-dimensional vibrations. A simple stochastic theory of two-dimensional flow-induced vibrations is given; then the diagnostic method is formulated for the stochastic case, that is, in terms of neutron detector auto- and cross-power spectra. A previously suggested approximate rod localization technique is also formulated for the stochastic case. The applicability of the methods is then investigated in numerical simulation tests, using the proposed model of stochastic two-dimensional vibrations to generate neutron detector spectra that simulate measured data.

  15. Electricity market clearing with improved dispatch of stochastic production

    DEFF Research Database (Denmark)

    Morales González, Juan Miguel; Zugno, Marco; Pineda, Salvador

    2014-01-01

    In this paper, we consider an electricity market that consists of a day-ahead and a balancing settlement, and includes a number of stochastic producers. We first introduce two reference procedures for scheduling and pricing energy in the day-ahead market: on the one hand, a conventional network...... attains higher market efficiency in expectation than the conventional day-ahead auction, it suffers from fundamental drawbacks with a view to its practical implementation. In particular, it requires flexible producers (those that make up for the lack or surplus of stochastic generation) to accept losses...... in some scenarios. Using a bilevel programming framework, we then show that the conventional auction, if combined with a suitable day-ahead dispatch of stochastic producers (generally different from their expected production), can substantially increase market efficiency and emulate the advantageous...

  16. Stochastic learning in oxide binary synaptic device for neuromorphic computing.

    Science.gov (United States)

    Yu, Shimeng; Gao, Bin; Fang, Zheng; Yu, Hongyu; Kang, Jinfeng; Wong, H-S Philip

    2013-01-01

    Hardware implementation of neuromorphic computing is attractive as a computing paradigm beyond conventional digital computing. In this work, we show that the SET (off-to-on) transition of metal oxide resistive switching memory becomes probabilistic under a weak programming condition. The switching variability of the binary synaptic device implements a stochastic learning rule. Such stochastic SET transitions were statistically measured and modeled for a simulation of a winner-take-all network for competitive learning. The simulation illustrates that with such stochastic learning, the orientation classification function of input patterns can be effectively realized. The system performance metrics were compared between the conventional approach using the analog synapse and the approach in this work that employs the binary synapse utilizing stochastic learning. The feasibility of using binary synapses in neuromorphic computing may relax the constraints to engineer continuous multilevel intermediate states and widens the material choice for the synaptic device design.
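
    The probabilistic SET behavior can be illustrated with a toy calculation: if each weak programming pulse switches an off device with probability p (the value below is illustrative, not a measured device figure), then after n pulses the fraction of a device population that has switched approaches 1 - (1 - p)^n.

```python
import random

def fraction_switched(p, n_pulses, n_devices=20_000, seed=9):
    # Each weak pulse switches an off device on with probability p;
    # once on, the device stays on.
    rng = random.Random(seed)
    on = 0
    for _ in range(n_devices):
        for _ in range(n_pulses):
            if rng.random() < p:
                on += 1
                break
    return on / n_devices

f = fraction_switched(0.05, 20)
expected = 1.0 - (1.0 - 0.05) ** 20   # analytic switching fraction, ~0.64
```

    In a learning rule, this per-pulse switching probability plays the role of a small effective learning rate, which is how a binary device can implement gradual, analog-like weight updates in aggregate.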

  17. Stochastic calculus an introduction through theory and exercises

    CERN Document Server

    Baldi, Paolo

    2017-01-01

    This book provides a comprehensive introduction to the theory of stochastic calculus and some of its applications. It is the only textbook on the subject to include more than two hundred exercises with complete solutions. After explaining the basic elements of probability, the author introduces more advanced topics such as Brownian motion, martingales and Markov processes. The core of the book covers stochastic calculus, including stochastic differential equations, the relationship to partial differential equations, numerical methods and simulation, as well as applications of stochastic processes to finance. The final chapter provides detailed solutions to all exercises, in some cases presenting various solution techniques together with a discussion of advantages and drawbacks of the methods used. Stochastic Calculus will be particularly useful to advanced undergraduate and graduate students wishing to acquire a solid understanding of the subject through the theory and exercises. Including full mathematical ...

  18. Trip-oriented stochastic optimal energy management strategy for plug-in hybrid electric bus

    International Nuclear Information System (INIS)

    Du, Yongchang; Zhao, Yue; Wang, Qinpu; Zhang, Yuanbo; Xia, Huaicheng

    2016-01-01

    A trip-oriented stochastic optimal energy management strategy for a plug-in hybrid electric bus is presented in this paper, comprising an offline stochastic dynamic programming part and an online implementation part performed by an equivalent consumption minimization strategy. In the offline part, historical driving cycles of the fixed route are divided into segments according to the positions of bus stops, and a segment-based stochastic driving-condition model based on a Markov chain is built. With this segment-based stochastic model, the control set for the real-time equivalent consumption minimization strategy can be obtained by solving the offline stochastic dynamic programming problem. The results of the stochastic dynamic programming are converted into a 3-dimensional lookup table of parameters for the online equivalent consumption minimization strategy. The proposed strategy is verified by both simulation and a hardware-in-the-loop test of a real-world driving cycle on an urban bus route. Simulation results show that the proposed method outperforms both a well-tuned equivalent consumption minimization strategy and a rule-based strategy in terms of fuel economy, and was even shown to be close to the optimal result obtained by dynamic programming. Furthermore, the practical application potential of the proposed control method was demonstrated by the hardware-in-the-loop test. - Highlights: • A stochastic problem was formed based on a stochastic segment-based driving condition model. • Offline stochastic dynamic programming was employed to solve the stochastic problem. • The instant power split decision was made by the online equivalent consumption minimization strategy. • Good performance in fuel economy of the proposed method was verified by simulation results. • Practical application potential of the proposed method was verified by the hardware-in-the-loop test results.
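
    As a hedged illustration of the offline part, the segment-based Markov-chain driving-condition model can be sketched as follows; the speed-level discretization, the toy historical sequences, and the function names are assumptions for illustration, not the authors' data.

```python
import numpy as np

# Hypothetical sketch of a segment-based Markov-chain driving-condition model.
# States are discretized speed levels; transition counts come from historical
# cycles of the fixed bus route (the sequences below are toy data).
def fit_transition_matrix(cycles, n_states):
    counts = np.zeros((n_states, n_states))
    for cycle in cycles:
        for a, b in zip(cycle[:-1], cycle[1:]):
            counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Row-normalize; states that never occur fall back to a uniform row.
    return np.where(rows > 0, counts / np.maximum(rows, 1), 1.0 / n_states)

def sample_cycle(P, start, length, rng):
    states = [start]
    for _ in range(length - 1):
        states.append(int(rng.choice(len(P), p=P[states[-1]])))
    return states

rng = np.random.default_rng(0)
historical = [[0, 1, 2, 2, 1, 0], [0, 1, 1, 2, 1, 0]]  # toy speed-level cycles
P = fit_transition_matrix(historical, n_states=3)
synthetic = sample_cycle(P, start=0, length=10, rng=rng)
```

    Such a transition matrix is what the offline stochastic dynamic program iterates over; the real model would be estimated per route segment.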

  19. Influence of Signal Stationarity on Digital Stochastic Measurement Implementation

    Directory of Open Access Journals (Sweden)

    Ivan Župunski

    2013-06-01

    Full Text Available The paper presents the influence of signal stationarity on the implementation of the digital stochastic measurement method. The implementation is based on stochastic voltage generators, analog adders, a low-resolution A/D converter, and multipliers and accumulators implemented in a Field-Programmable Gate Array (FPGA). The first implementations of digital stochastic measurement were characterized by the measurement of stationary signal harmonics over a constant measurement period. Later, digital stochastic measurement was extended to measuring time series of non-stationary signals over a variable measurement time. The result of the measurement is a set of harmonics, which, in the case of non-stationary signals, is the input for calculating the digital values of the signal in the time domain. A theoretical approach to determining the measurement uncertainty is presented, and the accuracy trends with varying signal-to-noise ratio (SNR) are analyzed. Noisy brain potentials (spontaneous and non-spontaneous) are selected as an example of a real non-stationary signal, and their digital stochastic measurement is tested by simulations and experiments. Tests were performed without noise and with added noise at SNR values of 10 dB, 0 dB and -10 dB. The results of simulations and experiments are compared against theoretical calculations, and the comparison confirms the theory.
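
    A minimal numerical sketch of the idea, assuming a coarse uniform quantizer and uniform dither playing the role of the stochastic voltage generator; the sampling rate, amplitude, and quantization step are invented for illustration:

```python
import numpy as np

# Hypothetical sketch of digital stochastic measurement of one harmonic:
# dither the signal with uniform noise, quantize coarsely (the low-resolution
# A/D converter), multiply by sine/cosine bases and accumulate. The dither
# makes the quantization error average out over many samples.
rng = np.random.default_rng(0)
fs, f0, n = 10000, 50, 20000              # sample rate, harmonic, sample count
t = np.arange(n) / fs
signal = 1.0 * np.sin(2 * np.pi * f0 * t + 0.3)

step = 0.5                                # coarse quantization step
dither = rng.uniform(-step / 2, step / 2, n)
quantized = step * np.round((signal + dither) / step)

a = 2.0 / n * np.sum(quantized * np.sin(2 * np.pi * f0 * t))
b = 2.0 / n * np.sum(quantized * np.cos(2 * np.pi * f0 * t))
amplitude = np.hypot(a, b)                # estimate of the 50 Hz amplitude
```

    Despite the few quantizer levels, the accumulated products recover the harmonic amplitude closely, which is the essence of the method.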

  20. Stochasticity of phase trajectory of a charged particle in a plasma wave

    International Nuclear Information System (INIS)

    Murakami, Akihiko; Nomura, Yasuyuki; Momota, Hiromu.

    1980-06-01

    Stochastic behavior of charged particles in finite-amplitude plasma waves is examined by means of particle simulations under conditions where Chirikov's criterion breaks down. The process by which the stochastic region grows is clarified, and accordingly the width of the stochastic region is discussed. Discussions on the effects of higher-order resonances are also presented. (author)

  1. An h-adaptive stochastic collocation method for stochastic EMC/EMI analysis

    KAUST Repository

    Yücel, Abdulkadir C.

    2010-07-01

    The analysis of electromagnetic compatibility and interference (EMC/EMI) phenomena is often fraught by randomness in a system's excitation (e.g., the amplitude, phase, and location of internal noise sources) or configuration (e.g., the routing of cables, the placement of electronic systems, component specifications, etc.). To bound the probability of system malfunction, fast and accurate techniques to quantify the uncertainty in system observables (e.g., voltages across mission-critical circuit elements) are called for. Recently proposed stochastic frameworks [1-2] combine deterministic electromagnetic (EM) simulators with stochastic collocation (SC) methods that approximate system observables using generalized polynomial chaos expansion (gPC) [3] (viz. orthogonal polynomials spanning the entire random domain) to estimate their statistical moments and probability density functions (pdfs). When constructing gPC expansions, the EM simulator is used solely to evaluate system observables at collocation points prescribed by the SC-gPC scheme. The frameworks in [1-2] therefore are non-intrusive and straightforward to implement. That said, they become inefficient and inaccurate for system observables that vary rapidly or are discontinuous in the random variables (as their representations may require very high-order polynomials). © 2010 IEEE.

  2. Backward-stochastic-differential-equation approach to modeling of gene expression.

    Science.gov (United States)

    Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F; Aguiar, Paulo

    2017-03-01

    In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).
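
    The forward benchmark generator referred to above is Gillespie's SSA. A minimal sketch for a one-species birth-death model of gene expression might look like the following; the rate constants are illustrative, not taken from the paper:

```python
import numpy as np

# Minimal Gillespie SSA sketch for a one-species birth-death model of gene
# expression: production at rate k, degradation at rate gamma * x.
def gillespie_birth_death(k=10.0, gamma=1.0, x0=0, t_end=5.0, seed=1):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a_birth, a_death = k, gamma * x       # reaction propensities
        a_total = a_birth + a_death
        t += rng.exponential(1.0 / a_total)   # time to next reaction
        if a_total * rng.random() < a_birth:  # pick which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

times, counts = gillespie_birth_death()
```

    Ensembles of such forward trajectories provide the protein-level distributions against which the BSDE reconstruction is validated.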

  3. Multistage Stochastic Programming and its Applications in Energy Systems Modeling and Optimization

    Science.gov (United States)

    Golari, Mehdi

    considering the integration of renewable energy resources into production planning of energy-intensive manufacturing industries. Recently, a growing number of manufacturing companies are considering renewable energies to meet their energy requirements, to move towards green manufacturing as well as to decrease their energy costs. However, the intermittent nature of renewable energies imposes several difficulties in long-term planning of how to efficiently exploit renewables. In this study, we propose a scheme for manufacturing companies to use onsite and grid renewable energies, provided by their own investments and by energy utilities, as well as conventional grid energy to satisfy their energy requirements. We propose a multistage stochastic programming model and study an efficient solution method for this problem. We examine the proposed framework on a test case simulated based on a real-world semiconductor company. Moreover, we evaluate the long-term profitability of such a scheme via the so-called value of multistage stochastic programming.

  4. Stochastic thermodynamics

    Science.gov (United States)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    many leading experts in the field. During the program, the most recent developments, open questions and new ideas in stochastic thermodynamics were presented and discussed. From the talks and debates, the notion of information in stochastic thermodynamics, the fundamental properties of entropy production (rate) in non-equilibrium, the efficiency of small thermodynamic machines and the characteristics of optimal protocols for the applied (cyclic) forces crystallized as the main themes. Surprisingly, the long-studied adiabatic piston, its peculiarities and its relation to stochastic thermodynamics were also the subject of intense discussions. The comment on the Nordita program Stochastic Thermodynamics published in this issue of Physica Scripta exploits the Jarzynski relation for determining free energy differences in the adiabatic piston. This scientific program and the contribution presented here were made possible by the financial and administrative support of The Nordic Institute for Theoretical Physics.

  5. Modeling and Properties of Nonlinear Stochastic Dynamical System of Continuous Culture

    Science.gov (United States)

    Wang, Lei; Feng, Enmin; Ye, Jianxiong; Xiu, Zhilong

    The stochastic counterpart to the deterministic description of continuous fermentation by ordinary differential equations is investigated for the process of glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae. We briefly discuss the continuous fermentation process driven by three-dimensional Brownian motion with Lipschitz coefficients, which is suitable for the actual fermentation. Subsequently, we study the existence and uniqueness of solutions for the stochastic system, as well as the boundedness of the second-order moment and the Markov property of the solution. Finally, stochastic simulation is carried out using the Euler-Maruyama method.
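
    A hedged sketch of the Euler-Maruyama scheme used for the simulation step; the mean-reverting drift and constant diffusion below are simple stand-ins, not the fermentation model itself:

```python
import numpy as np

# Euler-Maruyama sketch for a scalar SDE dX = mu(X) dt + sigma(X) dW.
# The drift and diffusion below are illustrative stand-ins.
def euler_maruyama(mu, sigma, x0, t_end, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))       # Brownian increment
        x[i + 1] = x[i] + mu(x[i]) * dt + sigma(x[i]) * dW
    return x

path = euler_maruyama(mu=lambda x: 1.0 - x,     # relax towards 1
                      sigma=lambda x: 0.1,      # constant noise intensity
                      x0=0.0, t_end=10.0, n_steps=1000)
```

    The fermentation model would replace the lambdas with the (Lipschitz) drift and diffusion coefficients of the three-dimensional system.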

  6. Stochastic first passage time accelerated with CUDA

    Science.gov (United States)

    Pierro, Vincenzo; Troiano, Luigi; Mejuto, Elena; Filatrella, Giovanni

    2018-05-01

    The numerical estimation, from integrated stochastic trajectories, of the time to pass a threshold is of interest in physical systems such as Josephson junctions and atomic force microscopy, where the full trajectory is not accessible. We propose an algorithm suitable for efficient implementation on graphics processing units (GPUs) in the CUDA environment. For well-balanced loads, the proposed approach achieves almost perfect scaling with the number of available threads and processors, and allows an acceleration of about 400× with a GTX980 GPU with respect to a standard multicore CPU. This method allows off-the-shelf GPUs to tackle problems that are otherwise prohibitive, such as thermal activation in slowly tilted potentials. In particular, we demonstrate that it is possible to simulate the switching current distributions of Josephson junctions on the timescale of actual experiments.
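
    The ensemble structure that makes the problem GPU-friendly (one independent trajectory per thread) can be mimicked on a CPU with vectorized updates; the drift, noise level, and threshold in this sketch are illustrative assumptions:

```python
import numpy as np

# CPU sketch of ensemble first-passage-time estimation. The CUDA version
# runs one trajectory per GPU thread; here the ensemble is vectorized with
# NumPy instead. Drift, noise level and threshold are illustrative.
def first_passage_times(n_traj, threshold=1.0, drift=0.05, noise=0.2,
                        dt=1e-2, max_steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_traj)
    fpt = np.full(n_traj, np.nan)          # NaN marks "not yet crossed"
    active = np.ones(n_traj, dtype=bool)
    for step in range(1, max_steps + 1):
        n_act = int(active.sum())
        if n_act == 0:
            break                          # every trajectory has crossed
        x[active] += drift * dt + noise * np.sqrt(dt) * rng.normal(size=n_act)
        crossed = active & (x >= threshold)
        fpt[crossed] = step * dt
        active &= ~crossed
    return fpt

fpt = first_passage_times(n_traj=2000)
```

    The histogram of `fpt` is the switching-time distribution; on a GPU each thread would carry its own state and random stream instead of the boolean mask.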

  7. Approximate models for broken clouds in stochastic radiative transfer theory

    International Nuclear Information System (INIS)

    Doicu, Adrian; Efremenko, Dmitry S.; Loyola, Diego; Trautmann, Thomas

    2014-01-01

    This paper presents approximate models in stochastic radiative transfer theory. The independent column approximation and its modified version with a solar source computed in a full three-dimensional atmosphere are formulated in a stochastic framework and for arbitrary cloud statistics. The nth-order stochastic models describing the independent column approximations are equivalent to the nth-order stochastic models for the original radiance fields in which the gradient vectors are neglected. Fast approximate models are further derived on the basis of zeroth-order stochastic models and the independent column approximation. The so-called “internal mixing” models assume a combination of the optical properties of the cloud and the clear sky, while the “external mixing” models assume a combination of the radiances corresponding to completely overcast and clear skies. A consistent treatment of internal and external mixing models is provided, and a new parameterization of the closure coefficient in the effective thickness approximation is given. An efficient computation of the closure coefficient for internal mixing models, using a previously derived vector stochastic model as a reference, is also presented. Equipped with appropriate look-up tables for the closure coefficient, these models can easily be integrated into operational trace gas retrieval systems that exploit absorption features in the near-IR solar spectrum. - Highlights: • Independent column approximation in a stochastic setting. • Fast internal and external mixing models for total and diffuse radiances. • Efficient optimization of internal mixing models to match reference models

  8. A stochastic model of nanoparticle self-assembly on Cayley trees

    International Nuclear Information System (INIS)

    Mazilu, I; Schwen, E M; Banks, W E; Pope, B K; Mazilu, D A

    2015-01-01

    Nanomedicine is an emerging area of medical research that uses innovative nanotechnologies to improve the delivery of therapeutic and diagnostic agents with maximum clinical benefit. We present a versatile stochastic model that can be used to capture the basic features of drug encapsulation of nanoparticles on tree-like synthetic polymers called dendrimers. The geometry of a dendrimer is described mathematically as a Cayley tree. We use our stochastic model to study the dynamics of deposition and release of monomers (simulating the drug molecules) on Cayley trees (simulating dendrimers). We present analytical and Monte Carlo simulation results for the particle density on Cayley trees of coordination number three and four
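
    A toy Monte Carlo version of the deposition-release dynamics, with site-independent rates so the steady-state density has the closed form p / (p + q); the tree topology, which matters for correlated rules, is deliberately omitted in this simplified sketch:

```python
import random

# Toy Monte Carlo of monomer deposition and release: an empty site fills
# with probability p and an occupied site empties with probability q per
# sweep. With independent sites the steady-state density is p / (p + q);
# all parameters are illustrative.
def simulate_density(n_sites=3000, p=0.3, q=0.1, sweeps=200, seed=0):
    rng = random.Random(seed)
    occupied = [False] * n_sites
    for _ in range(sweeps):
        for i in range(n_sites):
            if occupied[i]:
                if rng.random() < q:
                    occupied[i] = False    # release
            elif rng.random() < p:
                occupied[i] = True         # deposition
    return sum(occupied) / n_sites

density = simulate_density()               # should approach 0.3 / 0.4 = 0.75
```

    On an actual Cayley tree, sites would be nodes of the tree and the rates could depend on the occupancy of neighboring nodes, which is where the analytical treatment in the paper comes in.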

  9. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    Energy Technology Data Exchange (ETDEWEB)

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern

  10. Efficient Output Solution for Nonlinear Stochastic Optimal Control Problem with Model-Reality Differences

    Directory of Open Access Journals (Sweden)

    Sie Long Kek

    2015-01-01

    A computational approach is proposed for solving the discrete time nonlinear stochastic optimal control problem. Our aim is to obtain the optimal output solution of the original optimal control problem through solving the simplified model-based optimal control problem iteratively. In our approach, the adjusted parameters are introduced into the model used such that the differences between the real system and the model used can be computed. Particularly, system optimization and parameter estimation are integrated interactively. On the other hand, the output is measured from the real plant and is fed back into the parameter estimation problem to establish a matching scheme. During the calculation procedure, the iterative solution is updated in order to approximate the true optimal solution of the original optimal control problem despite model-reality differences. For illustration, a wastewater treatment problem is studied and the results show the efficiency of the approach proposed.

  11. Tsunami evacuation plans for future megathrust earthquakes in Padang, Indonesia, considering stochastic earthquake scenarios

    Directory of Open Access Journals (Sweden)

    A. Muhammad

    2017-12-01

    This study develops tsunami evacuation plans in Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans in Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis causing maximum tsunami inundation heights and depths of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan – including horizontal evacuation area maps, assessment of temporary shelters considering the impact due to ground shaking and tsunami, and integrated horizontal–vertical evacuation time maps – has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from stochastic tsunami simulation for future tsunamigenic events.

  12. Integral-based event triggering controller design for stochastic LTI systems via convex optimisation

    Science.gov (United States)

    Mousavi, S. H.; Marquez, H. J.

    2016-07-01

    The presence of measurement noise in the event-based systems can lower system efficiency both in terms of data exchange rate and performance. In this paper, an integral-based event triggering control system is proposed for LTI systems with stochastic measurement noise. We show that the new mechanism is robust against noise and effectively reduces the flow of communication between plant and controller, and also improves output performance. Using a Lyapunov approach, stability in the mean square sense is proved. A simulated example illustrates the properties of our approach.
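
    A hedged sketch of an integral-type trigger for a scalar plant; the plant, gain, noise level, and threshold are all invented for illustration and are not the paper's design:

```python
import numpy as np

# Hypothetical sketch of an integral-type event trigger for a scalar plant
# x' = a*x + b*u with noisy measurements: the control input is refreshed
# only when the integrated measurement deviation since the last event
# exceeds a threshold, so isolated noise spikes do not cause events.
rng = np.random.default_rng(1)
a, b, k = -1.0, 1.0, 0.5                   # illustrative plant and gain
dt, n_steps = 0.01, 5000
x, x_hat, acc, events = 1.0, 1.0, 0.0, 0
for _ in range(n_steps):
    y = x + 0.01 * rng.normal()            # measurement corrupted by noise
    acc += abs(y - x_hat) * dt             # integrated deviation since event
    if acc > 0.05:                         # integral trigger condition
        x_hat, acc, events = y, 0.0, events + 1
    x += (a * x - b * k * x_hat) * dt      # plant with held control input
event_rate = events / n_steps              # fraction of steps communicating
```

    The point of the integral formulation is visible here: a single noisy sample contributes only a small increment to `acc`, so communication events are driven by persistent deviation rather than by noise.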

  13. Why simulation can be efficient: on the preconditions of efficient learning in complex technology based practices.

    Science.gov (United States)

    Hofmann, Bjørn

    2009-07-23

    It is important to demonstrate learning outcomes of simulation in technology based practices, such as in advanced health care. Although many studies show skills improvement and self-reported change to practice, there are few studies demonstrating patient outcome and societal efficiency. The objective of the study is to investigate if and why simulation can be effective and efficient in a hi-tech health care setting. This is important in order to decide whether and how to design simulation scenarios and outcome studies. Core theoretical insights in Science and Technology Studies (STS) are applied to analyze the field of simulation in hi-tech health care education. In particular, a process-oriented framework where technology is characterized by its devices, methods and its organizational setting is applied. The analysis shows how advanced simulation can address core characteristics of technology beyond the knowledge of technology's functions. Simulation's ability to address skilful device handling as well as purposive aspects of technology provides a potential for effective and efficient learning. However, as technology is also constituted by organizational aspects, such as technology status, disease status, and resource constraints, the success of simulation depends on whether these aspects can be integrated in the simulation setting as well. This represents a challenge for future development of simulation and for demonstrating its effectiveness and efficiency. Assessing the outcome of simulation in education in hi-tech health care settings is worthwhile if core characteristics of medical technology are addressed. This challenges the traditional technical versus non-technical divide in simulation, as organizational aspects appear to be part of technology's core characteristics.

  14. Stochastic chaos in a Duffing oscillator and its control

    International Nuclear Information System (INIS)

    Wu Cunli; Lei Youming; Fang Tong

    2006-01-01

    Stochastic chaos discussed here means a kind of chaotic response in a Duffing oscillator with bounded random parameters under harmonic excitation. A system with random parameters is usually called a stochastic system. The modifier 'stochastic' here implies dependence on some random parameter. As the system itself is stochastic, so is the response, even under harmonic excitation alone. In this paper stochastic chaos and its control are verified by the top Lyapunov exponent of the system. A non-feedback control strategy is adopted here by adding an adjustable noisy phase to the harmonic excitation, so that the control can be realized by adjusting the noise level. It is found that by this control strategy stochastic chaos can be tamed down to a small neighborhood of a periodic trajectory or an equilibrium state. In the analysis the stochastic Duffing oscillator is first transformed into an equivalent deterministic nonlinear system by the Gegenbauer polynomial approximation, so that the problem of controlling stochastic chaos can be reduced to the problem of controlling deterministic chaos in the equivalent system. Then the top Lyapunov exponent of the equivalent system is obtained by Wolf's method to examine the chaotic behavior of the response. Numerical simulations show that the random phase control strategy is an effective way to control stochastic chaos.
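
    A sketch of top-Lyapunov-exponent estimation from two nearby trajectories with periodic renormalization (a Benettin-style variant of the Wolf method mentioned above), applied to a deterministic Duffing oscillator; the parameter values used below are illustrative:

```python
import numpy as np

# Top Lyapunov exponent via two nearby trajectories whose separation is
# renormalized to d0 after every step; the log growth factors are averaged.
def duffing_rhs(t, s, delta, alpha, beta, gamma, omega):
    x, v = s
    return np.array([v, -delta * v - alpha * x - beta * x**3
                     + gamma * np.cos(omega * t)])

def rk4_step(f, t, s, dt, *args):
    k1 = f(t, s, *args)
    k2 = f(t + dt / 2, s + dt / 2 * k1, *args)
    k3 = f(t + dt / 2, s + dt / 2 * k2, *args)
    k4 = f(t + dt, s + dt * k3, *args)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def top_lyapunov(params, dt=0.01, n_steps=20000, d0=1e-8):
    s = np.array([0.1, 0.0])               # reference trajectory
    p = s + np.array([d0, 0.0])            # perturbed companion
    log_sum, t = 0.0, 0.0
    for _ in range(n_steps):
        s = rk4_step(duffing_rhs, t, s, dt, *params)
        p = rk4_step(duffing_rhs, t, p, dt, *params)
        t += dt
        d = np.linalg.norm(p - s)
        log_sum += np.log(d / d0)
        p = s + (p - s) * (d0 / d)         # renormalize the separation
    return log_sum / (n_steps * dt)

# Damped, periodically driven double-well Duffing (illustrative parameters).
lam = top_lyapunov((0.15, -1.0, 1.0, 0.3, 1.0))
```

    A positive estimate indicates chaos; in the paper's setting the same computation is applied to the Gegenbauer-approximated equivalent deterministic system.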

  15. Modelling biochemical reaction systems by stochastic differential equations with reflection.

    Science.gov (United States)

    Niu, Yuanling; Burrage, Kevin; Chen, Luonan

    2016-05-07

    In this paper, we give a new framework for modelling and simulating biochemical reaction systems by stochastic differential equations with reflection, not in a heuristic way but in a mathematical way. The model is computationally efficient compared with the discrete-state Markov chain approach, and it ensures that both analytic and numerical solutions remain in a biologically plausible region. Specifically, our model mathematically ensures that species numbers lie in the domain D, which is a physical constraint for biochemical reactions, in contrast to previous models. The domain D is obtained from the structure of the corresponding chemical Langevin equations, i.e., the boundary is inherent in the biochemical reaction system. A variant of the projection method was employed to solve the reflected stochastic differential equation model. It consists of three simple steps: the Euler-Maruyama method is applied to the equations first; then the resulting point is checked for membership in the domain D; and if it lies outside, an orthogonal projection is performed. It is found that the projection onto the closure D¯ is the solution to a convex quadratic programming problem. Thus, existing methods for the convex quadratic programming problem can be employed for the orthogonal projection map. Numerical tests on several important problems in biological systems confirmed the efficiency and accuracy of this approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
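
    For the common special case where the domain D is the nonnegative orthant, the quadratic-programming projection reduces to componentwise clipping, so the three-step scheme can be sketched as below; the birth-death drift and diffusion are an illustrative stand-in for a chemical Langevin equation:

```python
import numpy as np

# Sketch of the three-step projection scheme for D = [0, inf)^n, where the
# quadratic-programming projection reduces to componentwise clipping.
def reflected_euler_maruyama(drift, diffusion, x0, dt, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x = x + drift(x) * dt + diffusion(x) * dW   # 1) Euler-Maruyama step
        x = np.maximum(x, 0.0)   # 2-3) check domain, project if outside
        path.append(x.copy())
    return np.array(path)

k, gamma = 10.0, 1.0             # toy production and degradation rates
path = reflected_euler_maruyama(
    drift=lambda x: k - gamma * x,
    diffusion=lambda x: np.sqrt(k + gamma * x),
    x0=np.array([0.0]), dt=0.01, n_steps=2000)
```

    For a general polyhedral D the clipping step would be replaced by a call to a convex quadratic programming solver, as the abstract notes.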

  16. Groundwater management under uncertainty using a stochastic multi-cell model

    Science.gov (United States)

    Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.

    2017-08-01

    The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.
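
    The Monte Carlo baseline that such moment expressions are compared against can be sketched for a two-cell balance with random recharge; the storage coefficient, exchange coefficient, pumping rates, and recharge statistics below are invented for illustration:

```python
import numpy as np

# Hypothetical Monte Carlo ensemble for a two-cell lumped aquifer balance
# with random recharge. Storage coefficient S, exchange coefficient c,
# pumping and recharge statistics are all illustrative.
def simulate_heads(n_real=5000, n_steps=120, S=0.1, c=0.05,
                   pump=(0.5, 0.4), mu_R=(0.5, 0.4), sd_R=(0.1, 0.1), seed=0):
    rng = np.random.default_rng(seed)
    h = np.full((n_real, 2), 50.0)             # initial heads in both cells
    for _ in range(n_steps):
        R = rng.normal(mu_R, sd_R, size=(n_real, 2))   # stochastic recharge
        flow = c * (h[:, 0] - h[:, 1])                 # exchange, cell 1 -> 2
        h[:, 0] += (R[:, 0] - pump[0] - flow) / S
        h[:, 1] += (R[:, 1] - pump[1] + flow) / S
    return h.mean(axis=0), h.var(axis=0), np.cov(h.T)[0, 1]

mean_h, var_h, cov_h = simulate_heads()
```

    The positive covariance between adjacent cells illustrates the paper's point: ignoring it would misstate the head variances that enter the optimization constraints.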

  17. Objective mapping of observed sub-surface mesoscale cold core eddy in the Bay of Bengal by stochastic inverse technique with tomographically simulated travel times

    Digital Repository Service at National Institute of Oceanography (India)

    Murty, T.V.R.; Rao, M.M.M.; Sadhuram, Y.; Sridevi, B.; Maneesha, K.; SujithKumar, S.; Prasanna, P.L.; Murthy, K.S.R.

    of Bengal during south-west monsoon season and explore possibility to reconstruct the acoustic profile of the eddy by Stochastic Inverse Technique. A simulation experiment on forward and inverse problems for observed sound velocity perturbation field has...

  18. Efficient computation of discounted asymmetric information zero-sum stochastic games

    KAUST Repository

    Li, Lichun; Shamma, Jeff S.

    2015-01-01

    In asymmetric information zero-sum games, one player has superior information about the game over the other. Asymmetric information games are particularly relevant for security problems, e.g., where an attacker knows its own skill set or alternatively a system administrator knows the state of its resources. In such settings, the informed player is faced with the tradeoff of exploiting its superior information at the cost of revealing its superior information. This tradeoff is typically addressed through randomization, in an effort to keep the uninformed player informationally off balance. A lingering issue is the explicit computation of such strategies. This paper, building on prior work for repeated games, presents an LP formulation to compute suboptimal strategies for the informed player in discounted asymmetric information stochastic games in which state transitions are not affected by the uninformed player. Furthermore, the paper presents bounds between the security level guaranteed by the sub-optimal strategy and the optimal value. The results are illustrated on a stochastic intrusion detection problem.
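
    The single-stage building block of such computations is the security strategy of a zero-sum matrix game. As a hedged stand-in for the paper's LP formulation, the sketch below approximates the value and security strategies by fictitious play in pure NumPy (no LP solver); the payoff matrix is a toy example:

```python
import numpy as np

# Fictitious play for a zero-sum matrix game: each player repeatedly
# best-responds to the opponent's empirical mixture of past actions; the
# empirical frequencies converge to security (maximin/minimax) strategies.
def fictitious_play(A, n_iter=20000):
    m, n = A.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    i, j = 0, 0
    for _ in range(n_iter):
        row_counts[i] += 1
        col_counts[j] += 1
        i = int(np.argmax(A @ (col_counts / col_counts.sum())))
        j = int(np.argmin((row_counts / row_counts.sum()) @ A))
    return row_counts / n_iter, col_counts / n_iter

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # matching pennies
row_strategy, col_strategy = fictitious_play(A)
```

    The LP formulation in the paper plays the analogous role for the stochastic-game case, additionally accounting for the informed player's belief dynamics.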

  19. Efficient computation of discounted asymmetric information zero-sum stochastic games

    KAUST Repository

    Li, Lichun

    2015-12-15

    In asymmetric information zero-sum games, one player has superior information about the game over the other. Asymmetric information games are particularly relevant for security problems, e.g., where an attacker knows its own skill set or alternatively a system administrator knows the state of its resources. In such settings, the informed player is faced with the tradeoff of exploiting its superior information at the cost of revealing its superior information. This tradeoff is typically addressed through randomization, in an effort to keep the uninformed player informationally off balance. A lingering issue is the explicit computation of such strategies. This paper, building on prior work for repeated games, presents an LP formulation to compute suboptimal strategies for the informed player in discounted asymmetric information stochastic games in which state transitions are not affected by the uninformed player. Furthermore, the paper presents bounds between the security level guaranteed by the sub-optimal strategy and the optimal value. The results are illustrated on a stochastic intrusion detection problem.

  20. Groundwater Management at Varamin Plain: The Consideration of Stochastic and Environmental Effects

    International Nuclear Information System (INIS)

    Najafi Alamdarlo, H.; Ahmadian, M.; Khalilian, S.

    2016-01-01

    Groundwater is one of the common resources in Varamin Plain, but due to over-extraction it has been exposed to ruin. This phenomenon will lead to economic and environmental problems. On the other hand, the world is expected to face more stochastic events of water supply. Furthermore, incorporating stochastic considerations of water supply becomes more acute in designing water facilities. Therefore, strategies should be applied to improve resource management and increase the efficiency of the irrigation system. Hence, in this study the effect of efficiency improvement of the irrigation system on the exploitation of groundwater and the cropping pattern is examined under deterministic and stochastic conditions using Nash bargaining theory. The results showed that farmers in scenario B are more willing to cooperate and, as a result of their cooperation, they lose only 3 percent of the present value of their objective function. Therefore, the efficiency improvement of the irrigation system can result in improving cooperation between farmers and increasing the amount of reserves.