WorldWideScience

Sample records for efficient stochastic simulations

  1. Efficient Computing Budget Allocation for Simulation-based Optimization with Stochastic Simulation Time

    OpenAIRE

    Jia, Qing-Shan

    2012-01-01

    The dynamics of many systems nowadays follow not only physical laws but also man-made rules. These systems are known as discrete event dynamic systems, and their performance can be accurately evaluated only through simulations. Existing studies on simulation-based optimization (SBO) usually assume a deterministic simulation time for each replication. However, in many applications such as evacuation, smoke detection, and territory exploration, the simulation time is stochastic due to the randomn...

  2. An efficient parallel stochastic simulation method for analysis of nonviral gene delivery systems

    KAUST Repository

    Kuwahara, Hiroyuki

    2011-01-01

    Gene therapy has great potential to become an effective treatment for a wide variety of diseases. One of the main challenges in making gene therapy practical in clinical settings is the development of efficient and safe mechanisms to deliver foreign DNA molecules into the nucleus of target cells. Several computational and experimental studies have shown that the design process of synthetic gene transfer vectors can be greatly enhanced by computational modeling and simulation. This paper proposes a novel, effective parallelization of the stochastic simulation algorithm (SSA) for pharmacokinetic models that characterize the rate-limiting, multi-step processes of intracellular gene delivery. While efficient parallelization of the SSA is still an open problem in a general setting, the proposed parallel simulation method is able to substantially accelerate the next reaction selection scheme and the reaction update scheme in the SSA by exploiting and decomposing the structure of stochastic gene delivery models. This makes computationally intensive analyses, such as parameter optimization and gene dosage control for specific cell types, gene vectors, and transgene expression stability, substantially more practical than would otherwise be possible with the standard SSA. Here, we translated the nonviral gene delivery model based on mass-action kinetics by Varga et al. [Molecular Therapy, 4(5), 2001] into a more realistic model that captures intracellular fluctuations based on stochastic chemical kinetics, and as a case study we applied our parallel simulation to this stochastic model. Our results show that our simulation method is able to increase the efficiency of statistical analysis by at least 50% in various settings. © 2011 ACM.
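
    The record above builds on Gillespie's stochastic simulation algorithm. As a point of reference, the sketch below is a minimal serial implementation of the standard direct method, not the parallel scheme of the paper; the two-step delivery chain and its rate constants are invented for illustration.

    ```python
    import numpy as np

    def ssa_direct(x0, stoich, propensities, t_end, rng):
        """Gillespie's direct method: exact sampling of a well-mixed reaction system."""
        t, x = 0.0, np.asarray(x0, dtype=float)
        times, states = [t], [x.copy()]
        while t < t_end:
            a = propensities(x)               # propensity of each reaction channel
            a0 = a.sum()
            if a0 <= 0.0:                     # no reaction can fire anymore
                break
            t += rng.exponential(1.0 / a0)    # waiting time to the next reaction
            j = rng.choice(len(a), p=a / a0)  # index of the reaction that fires
            x += stoich[j]
            times.append(t); states.append(x.copy())
        return np.array(times), np.array(states)

    # Hypothetical two-step delivery chain: DNA_outside -> DNA_cytoplasm -> DNA_nucleus
    stoich = np.array([[-1, 1, 0], [0, -1, 1]])
    rates = lambda x: np.array([0.05 * x[0], 0.01 * x[1]])
    t, s = ssa_direct([1000, 0, 0], stoich, rates, 500.0, np.random.default_rng(0))
    ```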

  3. Conditional Stochastic Models in Reduced Space: Towards Efficient Simulation of Tropical Cyclone Precipitation Patterns

    Science.gov (United States)

    Dodov, B.

    2017-12-01

    Stochastic simulation of realistic and statistically robust patterns of Tropical Cyclone (TC) induced precipitation is a challenging task. It is even more challenging in a catastrophe modeling context, where tens of thousands of typhoon seasons need to be simulated in order to provide a complete view of flood risk. Ultimately, one could run a coupled global climate model and regional Numerical Weather Prediction (NWP) model, but this approach is not feasible in the catastrophe modeling context and, most importantly, may not provide TC track patterns consistent with observations. Rather, we propose to leverage NWP output for the observed TC precipitation patterns (in terms of downscaled reanalysis 1979-2015) collected on a Lagrangian frame along the historical TC tracks and reduced to the leading spatial principal components of the data. The reduced data from all TCs are then grouped according to timing, storm evolution stage (developing, mature, dissipating, ETC transitioning) and central pressure, and used to build a dictionary of stationary (within a group) and non-stationary (for transitions between groups) covariance models. Provided that the stochastic storm tracks with all the parameters describing the TC evolution are already simulated, a sequence of conditional samples from the covariance models chosen according to the TC characteristics at a given moment in time is concatenated, producing a continuous non-stationary precipitation pattern in a Lagrangian framework. The simulated precipitation for each event is finally distributed along the stochastic TC track and blended with non-TC background precipitation using a data assimilation technique. The proposed framework provides a means of efficient simulation (10000 seasons simulated in a couple of days) and robust typhoon precipitation patterns consistent with the observed regional climate and visually indistinguishable from high resolution NWP output. The framework is used to simulate a catalog of 10000 typhoon
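
    Conditional sampling from a Gaussian covariance model in the reduced PC space is the core primitive the abstract describes. Below is a minimal sketch with an assumed AR(1)-style coupling between the PC coefficients at consecutive time steps; the dimensions and coefficients are hypothetical.

    ```python
    import numpy as np

    def sample_conditional_mvn(mu, cov, idx_obs, y_obs, rng):
        """Draw one sample of the unobserved coordinates of N(mu, cov), conditioned
        on the observed coordinates idx_obs taking the values y_obs."""
        idx_new = np.setdiff1d(np.arange(len(mu)), idx_obs)
        S11 = cov[np.ix_(idx_new, idx_new)]
        S12 = cov[np.ix_(idx_new, idx_obs)]
        S22 = cov[np.ix_(idx_obs, idx_obs)]
        K = np.linalg.solve(S22, S12.T).T                 # regression coefficients
        mu_c = mu[idx_new] + K @ (y_obs - mu[idx_obs])
        cov_c = S11 - K @ S12.T                           # Schur complement
        L = np.linalg.cholesky(cov_c + 1e-10 * np.eye(len(idx_new)))
        return mu_c + L @ rng.standard_normal(len(idx_new))

    # Toy: 3 leading PC coefficients at two consecutive times with AR(1)-like coupling
    rng = np.random.default_rng(0)
    n, rho = 3, 0.8
    cov = np.block([[np.eye(n), rho * np.eye(n)], [rho * np.eye(n), np.eye(n)]])
    prev = rng.standard_normal(n)                 # coefficients already simulated at t
    nxt = sample_conditional_mvn(np.zeros(2 * n), cov, np.arange(n), prev, rng)
    ```

    Chaining such conditional draws, switching covariance models as the storm changes group, is what produces the continuous non-stationary pattern described above.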

  4. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies

    Directory of Open Access Journals (Sweden)

    Hepburn Iain

    2012-05-01

    Background: Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes, and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results: We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion: STEPS simulates
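
    The composition and rejection method mentioned above replaces the linear scan of the classic direct method with a two-level selection whose cost grows with the number of dyadic propensity groups rather than with the number of reactions. A generic sketch of the selection step follows (not STEPS code; real implementations maintain the groups incrementally between events):

    ```python
    import numpy as np

    def select_reaction_cr(a, rng):
        """Composition-rejection selection of the next reaction from propensities a.
        Channels are binned into dyadic groups [2^k, 2^(k+1)); the group is chosen by
        composition and the channel within it by rejection against the group bound."""
        a = np.asarray(a, dtype=float)
        active = np.nonzero(a > 0.0)[0]
        levels = np.floor(np.log2(a[active])).astype(int)
        groups = {}
        for ch, k in zip(active, levels):
            groups.setdefault(k, []).append(ch)
        sums = {k: a[members].sum() for k, members in groups.items()}
        # composition step: pick a group with probability proportional to its mass
        r = rng.uniform(0.0, sum(sums.values()))
        for k, s in sums.items():
            if r < s:
                break
            r -= s
        # rejection step: inside group k every propensity is below 2^(k+1)
        members, bound = groups[k], 2.0 ** (k + 1)
        while True:
            j = members[rng.integers(len(members))]
            if rng.uniform(0.0, bound) < a[j]:
                return j

    rng = np.random.default_rng(1)
    a = rng.uniform(0.01, 10.0, size=10000)      # hypothetical propensity vector
    print(select_reaction_cr(a, rng))
    ```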

  5. An efficient algorithm for the stochastic simulation of the hybridization of DNA to microarrays

    Directory of Open Access Journals (Sweden)

    Laurenzi Ian J

    2009-12-01

    Background: Although oligonucleotide microarray technology is ubiquitous in genomic research, reproducibility and standardization of expression measurements still concern many researchers. Cross-hybridization between microarray probes and non-target ssDNA has been implicated as a primary factor in sensitivity and selectivity loss. Since hybridization is a chemical process, it may be modeled at a population level using a combination of material balance equations and thermodynamics. However, the hybridization reaction network may be exceptionally large for commercial arrays, which often possess at least one reporter per transcript. Quantification of the kinetics and equilibrium of exceptionally large chemical systems of this type is numerically infeasible with customary approaches. Results: In this paper, we present a robust and computationally efficient algorithm for the simulation of hybridization processes underlying microarray assays. Our method may be utilized to identify the extent to which nucleic acid targets (e.g. cDNA) will cross-hybridize with probes, and by extension, characterize probe robustness, using the information specified by MAGE-TAB. Using this algorithm, we characterize cross-hybridization in a modified commercial microarray assay. Conclusions: By integrating stochastic simulation with thermodynamic prediction tools for DNA hybridization, one may robustly and rapidly characterize the selectivity of a proposed microarray design at the probe and "system" levels. Our code is available at http://www.laurenzi.net.

  6. Superior memory efficiency of quantum devices for the simulation of continuous-time stochastic processes

    Science.gov (United States)

    Elliott, Thomas J.; Gu, Mile

    2018-03-01

    Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.

  7. Mathematical analysis and algorithms for efficiently and accurately implementing stochastic simulations of short-term synaptic depression and facilitation

    Directory of Open Access Journals (Sweden)

    Mark D McDonnell

    2013-05-01

    The release of neurotransmitter vesicles after arrival of a pre-synaptic action potential at cortical synapses is known to be a stochastic process, as is the availability of vesicles for release. These processes are known to also depend on the recent history of action-potential arrivals, and this can be described in terms of time-varying probabilities of vesicle release. Mathematical models of such synaptic dynamics are frequently based only on the mean number of vesicles released by each pre-synaptic action potential, since if it is assumed there are sufficiently many vesicle sites, then the variance is small. However, it has been shown recently that variance across sites can be significant for neuron and network dynamics, and this suggests the potential importance of studying short-term plasticity using simulations that do generate trial-to-trial variability. Therefore, in this paper we study several well-known conceptual models for stochastic availability and release. We state explicitly the random variables that these models describe and propose efficient algorithms for accurately implementing stochastic simulations of these random variables in software or hardware. Our results are complemented by mathematical analysis and statement of pseudo-code algorithms.
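
    A minimal sketch of one such conceptual model, assuming a binomial release rule with exponential site recovery (the site count, release probability, and recovery time constant are illustrative only); sampling the binomial directly, rather than propagating its mean, is what produces the trial-to-trial variability discussed above:

    ```python
    import numpy as np

    def simulate_release(spike_times, n_sites=10, p_release=0.5, tau_rec=0.8, seed=3):
        """Binomial vesicle release with exponential recovery: at each spike every
        occupied site releases with probability p_release; empty sites refill
        independently between spikes with probability 1 - exp(-dt/tau_rec)."""
        rng = np.random.default_rng(seed)
        occupied, last_t, released = n_sites, 0.0, []
        for t in spike_times:
            p_refill = 1.0 - np.exp(-(t - last_t) / tau_rec)
            occupied += rng.binomial(n_sites - occupied, p_refill)  # recovery
            k = rng.binomial(occupied, p_release)  # stochastic release, not its mean
            occupied -= k
            released.append(k)
            last_t = t
        return np.array(released)

    print(simulate_release(np.arange(0.0, 1.0, 0.05)))  # a 50 ms inter-spike train
    ```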

  8. Stochastic efficiency: five case studies

    International Nuclear Information System (INIS)

    Proesmans, Karel; Broeck, Christian Van den

    2015-01-01

    Stochastic efficiency is evaluated in five case studies: driven Brownian motion, effusion with a thermo-chemical and thermo-velocity gradient, a quantum dot and a model for information to work conversion. The salient features of stochastic efficiency, including the maximum of the large deviation function at the reversible efficiency, are reproduced. The approach to and extrapolation into the asymptotic time regime are documented. (paper)

  9. AESS: Accelerated Exact Stochastic Simulation

    Science.gov (United States)

    Jenkins, David D.; Peterson, Gregory D.

    2011-12-01

    The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results.

    Program summary
    Program title: AESS
    Catalogue identifier: AEJW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: University of Tennessee copyright agreement
    No. of lines in distributed program, including test data, etc.: 10 861
    No. of bytes in distributed program, including test data, etc.: 394 631
    Distribution format: tar.gz
    Programming language: C for processors, CUDA for NVIDIA GPUs
    Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators.
    Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS
    Classification: 3, 16.12
    Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME.
    Solution

  10. A retrodictive stochastic simulation algorithm

    International Nuclear Information System (INIS)

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
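
    For a discrete-state Markov model with a small state space, the retrodictive distribution can be written down directly via Bayes' rule, which makes the idea concrete (the simulation algorithm of the paper targets systems where this enumeration is infeasible). A sketch with an assumed Jukes-Cantor-like mutation chain:

    ```python
    import numpy as np

    mu = 0.01                          # per-step mutation probability (assumed)
    P = np.full((4, 4), mu / 3.0)      # Jukes-Cantor-like 4-state transition matrix
    np.fill_diagonal(P, 1.0 - mu)

    def retrodict(prior, n_steps, final_state):
        """Posterior over the initial state given the final state of a Markov chain:
        P(x0 | xT) is proportional to prior(x0) * (P^n)[x0, xT]."""
        Pn = np.linalg.matrix_power(P, n_steps)
        post = prior * Pn[:, final_state]
        return post / post.sum()

    print(retrodict(np.full(4, 0.25), n_steps=100, final_state=2))
    ```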

  11. Testing for Stochastic Dominance Efficiency

    NARCIS (Netherlands)

    G.T. Post (Thierry); O. Linton; Y-J. Whang

    2005-01-01

    We propose a new test of the stochastic dominance efficiency of a given portfolio over a class of portfolios. We establish its null and alternative asymptotic properties, and define a method for consistently estimating critical values. We present some numerical evidence that our

  12. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
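
    The reformulation above is the random time change representation X(t) = X(0) + Σ_j ν_j Y_j(∫ a_j ds) with independent unit-rate Poisson processes Y_j. A sketch of the corresponding simulation (Anderson's modified next reaction method, with one RNG stream per channel so that the randomness of each channel is isolated; the birth-death rates are illustrative):

    ```python
    import numpy as np

    def mnrm(x0, stoich, propensities, t_end, seed=0):
        """Modified next reaction method: each channel j is driven by its own
        unit-rate Poisson process Y_j, realizing X(t) = X(0) + sum_j nu_j * Y_j(...).
        Giving each Y_j its own RNG stream isolates the randomness per channel."""
        n = len(stoich)
        streams = [np.random.default_rng(seed + j) for j in range(n)]
        t, x = 0.0, np.asarray(x0, dtype=float)
        T = np.zeros(n)                                  # internal time of each Y_j
        P = np.array([s.exponential() for s in streams]) # next internal jump of Y_j
        path = [(t, x.copy())]
        while t < t_end:
            a = propensities(x)
            dt = np.full(n, np.inf)
            live = a > 0.0
            dt[live] = (P[live] - T[live]) / a[live]     # external time to each firing
            j = int(np.argmin(dt))
            if not np.isfinite(dt[j]):
                break                                    # nothing can fire anymore
            t += dt[j]
            T += a * dt[j]                               # advance every internal clock
            x += stoich[j]
            P[j] += streams[j].exponential()             # schedule next jump of Y_j
            path.append((t, x.copy()))
        return path

    # Birth-death example: 0 -> S at rate 5, S -> 0 at rate 0.1*S (rates illustrative)
    path = mnrm([0], np.array([[1], [-1]]), lambda x: np.array([5.0, 0.1 * x[0]]), 50.0)
    ```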

  13. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
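
    As a small illustration of the second objective, the sketch below generates independent samples of a stationary Gaussian random field on a 1D grid from a Cholesky factorization of its covariance matrix; the squared-exponential kernel and its parameters are assumptions, not taken from the report:

    ```python
    import numpy as np

    def sample_gaussian_field(x, corr_len=0.2, sigma=1.0, n_samples=3, seed=2):
        """Independent samples of a stationary Gaussian random field on the points x,
        via a Cholesky factor of a squared-exponential covariance matrix."""
        rng = np.random.default_rng(seed)
        d = np.subtract.outer(x, x)
        cov = sigma**2 * np.exp(-0.5 * (d / corr_len) ** 2)
        L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))  # jitter for conditioning
        return L @ rng.standard_normal((len(x), n_samples))   # one sample per column

    samples = sample_gaussian_field(np.linspace(0.0, 1.0, 200))
    ```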

  14. Simulation of Stochastic Loads for Fatigue Experiments

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Brincker, Rune

    1989-01-01

    A simple direct simulation method for stochastic fatigue-load generation is described in this paper. The simulation method is based on the assumption that only the peaks of the load process significantly affect the fatigue life. The method requires the conditional distribution functions of load ranges given the last peak values. Analytical estimates of these distribution functions are presented in the paper and compared with estimates based on a more accurate simulation method. In the more accurate simulation method, samples at equidistant times are generated by approximating the stochastic load process by a Markov process. Two different spectra from two tubular joints in an offshore structure (one narrow banded and one wide banded) are considered in an example. The results show that the simple direct method is quite efficient and results in a simulation speed of about 3000 load cycles per second...

  17. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    Energy Technology Data Exchange (ETDEWEB)

    Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  18. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    Science.gov (United States)

    de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-12-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. a population-dynamical model which accounts for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age-distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of

  19. Efficient Diversification According to Stochastic Dominance Criteria

    NARCIS (Netherlands)

    Kuosmanen, T.K.

    2004-01-01

    This paper develops the first operational tests of portfolio efficiency based on the general stochastic dominance (SD) criteria that account for an infinite set of diversification strategies. The main insight is to preserve the cross-sectional dependence of asset returns when forming portfolios by

  20. A stochastic view on column efficiency.

    Science.gov (United States)

    Gritti, Fabrice

    2018-03-09

    A stochastic model of transcolumn eddy dispersion along packed beds was derived. It was based on the calculation of the mean travel time of a single analyte molecule from one radial position to another. The exchange mechanism between two radial positions was governed by the transverse dispersion of the analyte across the column. The radial velocity distribution was obtained by flow simulations in a focused-ion-beam scanning electron microscopy (FIB-SEM) based 3D reconstruction from a 2.1 mm × 50 mm column packed with 2 μm BEH-C18 particles. Accordingly, the packed bed was divided into three coaxial and uniform zones: (1) a 1.4 particle diameter wide, ordered, and loose packing at the column wall (velocity u_w), (2) an intermediate 130 μm wide, random, and dense packing (velocity u_i), and (3) the bulk packing in the center of the column (velocity u_c). First, the validity of this proposed stochastic model was tested by adjusting the predicted to the observed reduced van Deemter plots of a 2.1 mm × 50 mm column packed with 2 μm BEH-C18 fully porous particles (FPPs). An excellent agreement was found for u_i = 0.93u_c, a result fully consistent with the FIB-SEM observation (u_i = 0.95u_c). Next, the model was used to measure u_i = 0.94u_c for a 2.1 mm × 100 mm column packed with 1.6 μm Cortecs-C18 superficially porous particles (SPPs). The relative velocity bias across columns packed with SPPs is then barely smaller than that observed in columns packed with FPPs (+6% versus +7%). u_w = 1.8u_i is measured for a 75 μm × 1 m capillary column packed with 2 μm BEH-C18 particles. Despite this large wall-to-center velocity bias (+80%), the presence of the thin and ordered wall packing layer has no negative impact on the kinetic performance of capillary columns. Finally, the stochastic model of long-range eddy dispersion explains why analytical (2.1-4.6 mm i.d.) and capillary columns can all be

  1. HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks

    Directory of Open Access Journals (Sweden)

    Luca Marchetti

    2017-01-01

    HSimulator is a multithreaded simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementations of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies, including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA). HRSSA, the fastest hybrid algorithm to date, allows for an efficient simulation of the models while ensuring the exact simulation of a subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA).

  2. Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Xiao [ORNL; Dong, Jin [ORNL; Djouadi, Seddik M [ORNL; Nutaro, James J [ORNL; Kuruganti, Teja [ORNL

    2015-01-01

    The key goal in energy-efficient buildings is to reduce the energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ constrained Stochastic Linear Quadratic Control (cSLQC), minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.
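
    The sketch below illustrates the flavor of the approach on a toy one-zone thermal model with an unconstrained, certainty-equivalent LQ controller; the paper's constrained formulation additionally requires solving an SDP, which is omitted here, and all model parameters are invented:

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Toy one-zone thermal model (all parameters invented): x is the indoor
    # temperature deviation from the setpoint, u the HVAC input, w the disturbance.
    A, B = np.array([[0.95]]), np.array([[0.05]])
    Q, R = np.array([[1.0]]), np.array([[0.2]])        # comfort vs. power trade-off

    P = solve_discrete_are(A, B, Q, R)                 # discrete Riccati equation
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal feedback gain

    rng = np.random.default_rng(4)
    x = np.array([2.0])                                # start 2 degrees off setpoint
    for _ in range(48):
        u = -K @ x                                     # certainty-equivalent control
        x = A @ x + B @ u + rng.normal(0.0, 0.1, size=1)
    print(x)
    ```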

  3. Stochastic analysis for finance with simulations

    CERN Document Server

    Choe, Geon Ho

    2016-01-01

    This book is an introduction to stochastic analysis and quantitative finance; it includes both theoretical and computational methods. Topics covered are stochastic calculus, option pricing, optimal portfolio investment, and interest rate models. Also included are simulations of stochastic phenomena, numerical solutions of the Black–Scholes–Merton equation, Monte Carlo methods, and time series. Basic measure theory is used as a tool to describe probabilistic phenomena. The level of familiarity with computer programming is kept to a minimum. To make the book accessible to a wider audience, some background mathematical facts are included in the first part of the book and also in the appendices. This work attempts to bridge the gap between mathematics and finance by using diagrams, graphs and simulations in addition to rigorous theoretical exposition. Simulations are not only used as the computational method in quantitative finance, but they can also facilitate an intuitive and deeper understanding of theoret...

  4. Stochastic simulations of the tetracycline operon

    Science.gov (United States)

    2011-01-01

    Background: The tetracycline operon is a self-regulated system. It is found naturally in bacteria, where it confers resistance to the antibiotic tetracycline. Because of the performance of the molecular elements of the tetracycline operon, these elements are widely used as parts of synthetic gene networks where protein production can be efficiently turned on and off in response to the presence or absence of tetracycline. In this paper, we investigate the dynamics of the tetracycline operon. To this end, we develop a mathematical model guided by experimental findings. Our model consists of biochemical reactions that capture the biomolecular interactions of this intriguing system. Bearing in mind that small biological systems are subject to stochasticity, we use a stochastic algorithm to simulate the tetracycline operon behavior. A sensitivity analysis of two critical parameters embodied in this system is also performed, providing a useful understanding of the function of this system. Results: Simulations generate a timeline of biomolecular events that confer resistance to bacteria against tetracycline. We monitor the amounts of intracellular TetR2 and TetA proteins, the two important regulatory and resistance molecules, as a function of intracellular tetracycline. We find that lack of one of the promoters of the tetracycline operon has no influence on the overall behavior of this system, inferring that this promoter is not essential for Escherichia coli. Sensitivity analysis with respect to the binding strength of tetracycline to repressor and of repressor to operators suggests that these two parameters play a predominant role in the behavior of the system. The results of the simulations agree well with experimental observations such as tight repression, fast gene expression, induction with tetracycline, and small intracellular TetR2 amounts. Conclusions: Computer simulations of the tetracycline operon afford augmented insight into the interplay between its molecular

  6. SELANSI: a toolbox for simulation of stochastic gene regulatory networks.

    Science.gov (United States)

    Pájaro, Manuel; Otero-Muras, Irene; Vázquez, Carlos; Alonso, Antonio A

    2018-03-01

    Gene regulation is inherently stochastic. In many applications concerning Systems and Synthetic Biology, such as the reverse engineering and the de novo design of genetic circuits, stochastic effects (yet potentially crucial) are often neglected due to the high computational cost of stochastic simulations. With advances in these fields there is an increasing need for tools providing accurate approximations of the stochastic dynamics of gene regulatory networks (GRNs) with reduced computational effort. This work presents SELANSI (SEmi-LAgrangian SImulation of GRNs), a software toolbox for the simulation of stochastic multidimensional gene regulatory networks. SELANSI exploits intrinsic structural properties of gene regulatory networks to accurately approximate the corresponding Chemical Master Equation with a partial integral differential equation that is solved by a semi-Lagrangian method with high efficiency. Networks under consideration might involve multiple genes with self and cross regulations, in which genes can be regulated by different transcription factors. Moreover, the validity of the method is not restricted to a particular type of kinetics. The tool offers total flexibility regarding network topology, kinetics and parameterization, as well as simulation options. SELANSI runs under the MATLAB environment, and is available under the GPLv3 license at https://sites.google.com/view/selansi. antonio@iim.csic.es. © The Author(s) 2017. Published by Oxford University Press.

  7. Numerical Simulation of the Heston Model under Stochastic Correlation

    Directory of Open Access Journals (Sweden)

    Long Teng

    2017-12-01

    Stochastic correlation models have become increasingly important in financial markets. In order to be able to price vanilla options in stochastic volatility and correlation models, in this work we study the extension of the Heston model obtained by imposing stochastic correlations driven by a stochastic differential equation. We discuss efficient algorithms for the extended Heston model incorporating stochastic correlations. Our numerical experiments show that the proposed algorithms can efficiently provide highly accurate results for the extended Heston model. By investigating the effect of stochastic correlations on the implied volatility, we find that the performance of the Heston model can be improved by including stochastic correlations.
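
    A minimal Euler-Maruyama sketch of a Heston model extended with a mean-reverting, bounded stochastic correlation process; the correlation SDE and every coefficient are assumptions chosen for illustration, not the calibrated model of the paper:

    ```python
    import numpy as np

    def heston_stoch_corr(S0=100.0, v0=0.04, rho0=-0.5, r=0.01, kappa=2.0, theta=0.04,
                          xi=0.3, am=1.0, bm=-0.5, cm=0.2, T=1.0, n=252, paths=100000):
        """Euler-Maruyama for Heston with a mean-reverting stochastic correlation:
        d rho = am*(bm - rho) dt + cm*sqrt(1 - rho^2) dW3 keeps rho inside (-1, 1);
        the variance uses full truncation to stay nonnegative."""
        rng = np.random.default_rng(5)
        dt = T / n
        S = np.full(paths, S0); v = np.full(paths, v0); rho = np.full(paths, rho0)
        for _ in range(n):
            z1, z2, z3 = rng.standard_normal((3, paths))
            vp = np.maximum(v, 0.0)
            w_v = rho * z1 + np.sqrt(1.0 - rho**2) * z2   # noise correlated with z1
            S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
            v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * w_v
            rho += am * (bm - rho) * dt + cm * np.sqrt(np.maximum(1.0 - rho**2, 0.0) * dt) * z3
            rho = np.clip(rho, -0.999, 0.999)
        return S

    S_T = heston_stoch_corr()
    print(np.exp(-0.01 * 1.0) * np.maximum(S_T - 100.0, 0.0).mean())  # MC call price
    ```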

  8. Multiscale Hy3S: Hybrid stochastic simulation for supercomputers

    Directory of Open Access Journals (Sweden)

    Kaznessis Yiannis N

    2006-02-01

    create biological systems and analyze data. We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Conclusion: Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.

  9. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks

    Directory of Open Access Journals (Sweden)

    Elston Timothy C

    2004-03-01

    Background: Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results: We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions: We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
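
    For the continuous species, BioNetS solves chemical Langevin equations; the sketch below shows the structure of such an equation for a reversible dimerization (not BioNetS code; the rate constants are assumed), with drift given by the propensities and noise scaled by their square roots:

    ```python
    import numpy as np

    def cle_dimerization(a0=1000.0, k1=0.002, k2=0.5, T=10.0, dt=1e-3, seed=6):
        """Chemical Langevin equation for 2A <-> B: each reaction contributes a
        deterministic drift a_j*dt plus a Gaussian term sqrt(a_j)*dW_j."""
        rng = np.random.default_rng(seed)
        A, B = a0, 0.0
        for _ in range(int(T / dt)):
            a1 = k1 * A * (A - 1.0) / 2.0        # propensity of 2A -> B
            a2 = k2 * B                          # propensity of B  -> 2A
            dW1, dW2 = rng.standard_normal(2) * np.sqrt(dt)
            r1 = a1 * dt + np.sqrt(max(a1, 0.0)) * dW1   # firings of 2A -> B
            r2 = a2 * dt + np.sqrt(max(a2, 0.0)) * dW2   # firings of B  -> 2A
            A = max(A - 2.0 * r1 + 2.0 * r2, 0.0)
            B = max(B + r1 - r2, 0.0)
        return A, B

    print(cle_dimerization())
    ```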

  10. Stochastic airspace simulation tool development

    Science.gov (United States)

    2009-10-01

    Modeling and simulation is often used to study the physical world when observation may not be practical. The overall goal of a recent and ongoing simulation tool project has been to provide a documented, lifecycle-managed, multi-processor c...

  11. MCdevelop - a universal framework for Stochastic Simulations

    Science.gov (United States)

    Slawinska, M.; Jadach, S.

    2011-03-01

    We present MCdevelop, a universal computer framework for developing and exploiting the wide class of Stochastic Simulations (SS) software. This powerful universal SS software development tool has been derived from a series of scientific projects for precision calculations in high energy physics (HEP), which feature a wide range of functionality in the SS software needed for advanced precision Quantum Field Theory calculations for the past LEP experiments and for the ongoing LHC experiments at CERN, Geneva. MCdevelop is a "spin-off" product of HEP to be exploited in other areas, while it will still serve to develop new SS software for HEP experiments. Typically SS involve independent generation of large sets of random "events", often requiring considerable CPU power. Since SS jobs usually do not share memory, they are easy to parallelize. Efficient development, testing, and parallel running of SS software require a convenient framework to develop software source code, deploy and monitor batch jobs, and merge and analyse results from multiple parallel jobs, even before the production runs are terminated. Throughout the years of development of stochastic simulations for HEP, a sophisticated framework featuring all the above-mentioned functionality has been implemented. MCdevelop represents its latest version, written mostly in C++ (GNU compiler gcc). It uses Autotools to build binaries (optionally managed within the KDevelop 3.5.3 Integrated Development Environment (IDE)). It uses the open-source ROOT package for histogramming, graphics and the mechanism of persistency for the C++ objects. MCdevelop helps to run multiple parallel jobs on any computer cluster with an NQS-type batch system.

    Program summary
    Program title: MCdevelop
    Catalogue identifier: AEHW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHW_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http

  12. Stochastic Simulation of Process Calculi for Biology

    Directory of Open Access Journals (Sweden)

    Andrew Phillips

    2010-10-01

    Biological systems typically involve large numbers of components with complex, highly parallel interactions and intrinsic stochasticity. To model this complexity, numerous programming languages based on process calculi have been developed, many of which are expressive enough to generate unbounded numbers of molecular species and reactions. As a result of this expressiveness, such calculi cannot rely on standard reaction-based simulation methods, which require fixed numbers of species and reactions. Rather than implementing custom stochastic simulation algorithms for each process calculus, we propose to use a generic abstract machine that can be instantiated to a range of process calculi and a range of reaction-based simulation algorithms. The abstract machine functions as a just-in-time compiler, which dynamically updates the set of possible reactions and chooses the next reaction in an iterative cycle. In this short paper we give a brief summary of the generic abstract machine, and show how it can be instantiated with the stochastic simulation algorithm known as Gillespie's Direct Method. We also discuss the wider implications of such an abstract machine, and outline how it can be used to simulate multiple calculi simultaneously within a common framework.

  13. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

    The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over a fixed time interval. Rate optimal and efficient estimators are obtained for a one-dimensional diffusion parameter. Stable convergence in distribution is used to achieve a practically applicable Gaussian limit distribution for suitably normalised estimators. In a simulation example, the limit distributions ... multidimensional parameter. Conditions for rate optimality and efficiency of estimators of drift-jump and diffusion parameters are given in some special cases. These conditions are found to extend the pre-existing conditions applicable to continuous diffusions, and impose much stronger requirements on the estimating...

  14. Adaptive hybrid simulations for multiscale stochastic reaction networks

    International Nuclear Information System (INIS)

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-01

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest

  15. Efficient stochastic thermostatting of path integral molecular dynamics.

    Science.gov (United States)

    Ceriotti, Michele; Parrinello, Michele; Markland, Thomas E; Manolopoulos, David E

    2010-09-28

    The path integral molecular dynamics (PIMD) method provides a convenient way to compute the quantum mechanical structural and thermodynamic properties of condensed phase systems at the expense of introducing an additional set of high frequency normal modes on top of the physical vibrations of the system. Efficiently sampling such a wide range of frequencies provides a considerable thermostatting challenge. Here we introduce a simple stochastic path integral Langevin equation (PILE) thermostat which exploits an analytic knowledge of the free path integral normal mode frequencies. We also apply a recently developed colored noise thermostat based on a generalized Langevin equation (GLE), which automatically achieves a similar, frequency-optimized sampling. The sampling efficiencies of these thermostats are compared with that of the more conventional Nosé-Hoover chain (NHC) thermostat for a number of physically relevant properties of the liquid water and hydrogen-in-palladium systems. In nearly every case, the new PILE thermostat is found to perform just as well as the NHC thermostat while allowing for a computationally more efficient implementation. The GLE thermostat also proves to be very robust delivering a near-optimum sampling efficiency in all of the cases considered. We suspect that these simple stochastic thermostats will therefore find useful application in many future PIMD simulations.
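
    The key ingredient of the PILE thermostat is an exact Ornstein-Uhlenbeck update of the momentum applied to each free ring-polymer normal mode, with the friction matched to the mode frequency. Below is a single-mode sketch (one classical harmonic oscillator in assumed units, with the PILE-style friction choice gamma = 2*omega; in the full method this step is applied mode by mode inside the PIMD integrator):

    ```python
    import numpy as np

    def langevin_mode(omega=1.0, m=1.0, kT=1.0, dt=0.05, n_steps=200000, seed=7):
        """One harmonic mode with a Langevin thermostat: velocity-Verlet steps plus
        an exact Ornstein-Uhlenbeck momentum update between the half steps."""
        rng = np.random.default_rng(seed)
        gamma = 2.0 * omega                        # PILE-style friction (assumed)
        c1 = np.exp(-gamma * dt)
        c2 = np.sqrt((1.0 - c1**2) * m * kT)
        q = p = 0.0
        q2 = 0.0
        for _ in range(n_steps):
            p -= 0.5 * dt * m * omega**2 * q          # half kick
            q += 0.5 * dt * p / m                     # half drift
            p = c1 * p + c2 * rng.standard_normal()   # exact OU thermostat step
            q += 0.5 * dt * p / m                     # half drift
            p -= 0.5 * dt * m * omega**2 * q          # half kick
            q2 += q * q
        return q2 / n_steps                           # canonical value: kT/(m*omega^2)

    print(langevin_mode())
    ```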

  16. STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB

    KAUST Repository

    Klingbeil, G.

    2011-02-25

    Motivation: The importance of stochasticity in biological systems is becoming increasingly recognized and the computational cost of biologically realistic stochastic simulations urgently requires development of efficient software. We present a new software tool STOCHSIMGPU that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/chemical reaction systems and show that significant gains in efficiency can be made. It is integrated into MATLAB and works with the Systems Biology Toolbox 2 (SBTOOLBOX2) for MATLAB. Results: The GPU-based parallel implementation of the Gillespie stochastic simulation algorithm (SSA), the logarithmic direct method (LDM) and the next reaction method (NRM) is approximately 85 times faster than the sequential implementation of the NRM on a central processing unit (CPU). Using our software does not require any changes to the user's models, since it acts as a direct replacement of the stochastic simulation software of the SBTOOLBOX2. © The Author 2011. Published by Oxford University Press. All rights reserved.

  17. Hybrid deterministic/stochastic simulation of complex biochemical systems.

    Science.gov (United States)

    Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina

    2017-11-21

    In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explored with wet lab experiments. To serve their purpose, computational models, however, should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid excessively enlarging the time elapsing between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which are often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundance fluctuates by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally time-expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast reactions, moderate reactions, and slow reactions, and implements hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is

  18. A low-bias simulation scheme for the SABR stochastic volatility model

    NARCIS (Netherlands)

    B. Chen (Bin); C.W. Oosterlee (Cornelis); J.A.M. van der Weide

    2012-01-01

    The Stochastic Alpha Beta Rho Stochastic Volatility (SABR-SV) model is widely used in the financial industry for the pricing of fixed income instruments. In this paper we develop a low-bias simulation scheme for the SABR-SV model, which deals efficiently with (undesired)

  19. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    Energy Technology Data Exchange (ETDEWEB)

    Thimmisetty, Charanraj A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Zhao, Wenju [Florida State Univ., Tallahassee, FL (United States). Dept. of Scientific Computing; Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Tong, Charles H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; White, Joshua A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Atmospheric, Earth and Energy Division

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
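
    A sketch of the Langevin-type sampler in a low-dimensional feature space: an unadjusted Langevin iteration on a hypothetical 2D Gaussian posterior standing in for the KPCA coordinates (the paper's LMCMC adds a Metropolis correction and uses a real adjoint-enabled forward model):

    ```python
    import numpy as np

    def ula(grad_log_post, z0, step=0.01, n_iter=5000, seed=8):
        """Unadjusted Langevin iteration in a low-dimensional feature space:
        z <- z + step * grad log pi(z) + sqrt(2*step) * N(0, I)."""
        rng = np.random.default_rng(seed)
        z = np.array(z0, dtype=float)
        chain = np.empty((n_iter, len(z)))
        for i in range(n_iter):
            z = z + step * grad_log_post(z) + np.sqrt(2.0 * step) * rng.standard_normal(len(z))
            chain[i] = z
        return chain

    # Hypothetical 2D posterior: Gaussian likelihood (variance 0.25) times N(0, I) prior
    target = np.array([1.0, -0.5])
    grad = lambda z: -(z - target) / 0.25 - z
    print(ula(grad, [0.0, 0.0]).mean(axis=0))   # posterior mean is 0.8 * target
    ```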

  20. On the efficiency of stochastic volume sources for the determination of light meson masses

    CERN Document Server

    Endress, E; Wittig, H

    2011-01-01

    We investigate the efficiency of single timeslice stochastic sources for the calculation of light meson masses on the lattice as one varies the quark mass. Simulations are carried out with Nf = 2 flavours of non-perturbatively O(a) improved Wilson fermions for pion masses in the range of 450 - 760 MeV. Results for pseudoscalar and vector meson two-point correlation functions computed using stochastic as well as point sources are presented and compared. At fixed computational cost the stochastic approach reduces the variance considerably in the pseudoscalar channel for all simulated quark masses. The vector channel is more affected by the intrinsic stochastic noise. In order to obtain stable estimates of the statistical errors and a more pronounced plateau for the effective vector meson mass, a relatively large number of stochastic sources must be used.

  1. Measuring of Second-order Stochastic Dominance Portfolio Efficiency

    Czech Academy of Sciences Publication Activity Database

    Kopa, Miloš

    2010-01-01

    Vol. 46, No. 3 (2010), pp. 488-500 ISSN 0023-5954 R&D Projects: GA ČR GAP402/10/1610 Institutional research plan: CEZ:AV0Z10750506 Keywords: stochastic dominance * stability * SSD portfolio efficiency Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/E/kopa-measuring of second-order stochastic dominance portfolio efficiency.pdf

  2. Stochastic algorithm for simulating gas transport coefficients

    Science.gov (United States)

    Rudyak, V. Ya.; Lezhnev, E. V.

    2018-02-01

    The aim of this paper is to create a molecular algorithm for modeling transport processes in gases that is more efficient than the molecular dynamics method. To this end, the dynamics of the molecules are modeled stochastically. In a rarefied gas, it is sufficient to consider the evolution of molecules only in velocity space, whereas for a dense gas it is necessary to model the dynamics of molecules in physical space as well. Adequate integral characteristics of the studied system are obtained by averaging over a sufficiently large number of independent phase trajectories. The efficiency of the proposed algorithm was demonstrated by modeling the self-diffusion coefficients and the viscosity of several gases. It was shown that accuracy comparable to experiment can be obtained with a relatively small number of molecules. The modeling accuracy increases with the number of molecules and phase trajectories used.
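
    A toy version of the velocity-space idea: evolve a stochastic (Ornstein-Uhlenbeck) velocity process and recover the self-diffusion coefficient from the Green-Kubo integral of the velocity autocorrelation. The OU process is a stand-in for the paper's molecular algorithm, and all parameters are assumed:

    ```python
    import numpy as np

    def ou_self_diffusion(gamma=2.0, kT_over_m=1.0, dt=1e-3, n_steps=200000, seed=9):
        """Estimate D from the Green-Kubo integral of the velocity autocorrelation
        of an OU velocity process; the exact value for this model is kT/(m*gamma)."""
        rng = np.random.default_rng(seed)
        c1 = np.exp(-gamma * dt)
        c2 = np.sqrt(kT_over_m * (1.0 - c1**2))
        v = np.empty(n_steps)
        v[0] = rng.normal(0.0, np.sqrt(kT_over_m))
        for i in range(1, n_steps):
            v[i] = c1 * v[i - 1] + c2 * rng.standard_normal()
        lags = int(5.0 / (gamma * dt))                   # ~5 decorrelation times
        vacf = np.array([np.mean(v[: n_steps - k] * v[k:]) for k in range(lags)])
        return dt * (vacf.sum() - 0.5 * (vacf[0] + vacf[-1]))   # trapezoidal rule

    print(ou_self_diffusion())   # should be close to kT/(m*gamma) = 0.5
    ```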

  3. The time dependent propensity function for acceleration of spatial stochastic simulation of reaction–diffusion systems

    International Nuclear Information System (INIS)

    Fu, Jin; Wu, Sheng; Li, Hong; Petzold, Linda R.

    2014-01-01

    The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy.

  4. The time dependent propensity function for acceleration of spatial stochastic simulation of reaction–diffusion systems

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Jin, E-mail: iamfujin@hotmail.com [Department of Computer Science, University of California, Santa Barbara (United States); Wu, Sheng, E-mail: sheng@cs.ucsb.edu [Department of Computer Science, University of California, Santa Barbara (United States); Li, Hong, E-mail: hong.li@teradata.com [Teradata Inc., El Segundo, California (United States); Petzold, Linda R., E-mail: petzold@cs.ucsb.edu [Department of Computer Science, University of California, Santa Barbara (United States)

    2014-10-01

    The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy.

  5. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient, with low, constant-in-time variance, and consequently are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics, and they can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high-dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
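
    The core mechanics fit in a few lines. The sketch below applies a centered likelihood-ratio (score-function) estimator to a Poisson toy problem where the exact sensitivity is known; the paper's covariance formulation for dynamics is far more general than this:

    ```python
    # Centered likelihood-ratio sensitivity on a toy: d/dtheta E[f(X)],
    # X ~ Poisson(theta), estimated without differentiating f.
    import numpy as np

    rng = np.random.default_rng(3)
    theta, n = 4.0, 200000
    x = rng.poisson(theta, size=n).astype(float)

    f = x**2                          # observable; E[f] = theta + theta^2
    score = x / theta - 1.0           # d/dtheta log of the Poisson pmf

    plain = np.mean(f * score)
    centered = np.mean((f - f.mean()) * score)   # centering lowers the variance

    print(f"plain {plain:.3f}  centered {centered:.3f}  exact {1 + 2 * theta:.3f}")
    ```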

  6. Explaining Cost Efficiency of Scottish Farms: A Stochastic Frontier Analysis

    OpenAIRE

    Revoredo-Giha, Cesar; Milne, Catherine E.; Leat, Philip M.K.; Cho, Woong Je

    2006-01-01

    In this paper the cost efficiency of Scottish farms is determined, variables that explain the relative cost efficiency by farm type are identified, and implications are discussed. A cost efficiency approach was selected, first because it can deal with farms producing multiple outputs (in contrast to production frontiers), and second because it can accommodate output constraints imposed by the Common Agricultural Policy (CAP). To estimate the stochastic cost frontier, a generalised multi-product translog cost ...

  7. Provably unbounded memory advantage in stochastic simulation using quantum mechanics

    Science.gov (United States)

    Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile

    2017-10-01

    Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.

  8. GillesPy: A Python Package for Stochastic Model Building and Simulation.

    Science.gov (United States)

    Abel, John H; Drawert, Brian; Hellander, Andreas; Petzold, Linda R

    2016-09-01

    GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithm (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy-to-understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community.
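
    For readers unfamiliar with the underlying method, the sketch below is a self-contained direct-method SSA for a toy dimerization system, the kind of solver GillesPy drives through StochKit2. It is not the GillesPy API itself, and the model and rate constants are arbitrary assumptions:

    ```python
    # Gillespie direct-method SSA for a toy dimerization system
    # (2M -> D, D -> 2M).
    import numpy as np

    rng = np.random.default_rng(4)

    def ssa(m, d, k_dim=0.002, k_dis=0.08, t_end=50.0):
        t, times, states = 0.0, [0.0], [(m, d)]
        while t < t_end:
            a1 = k_dim * m * (m - 1) / 2.0      # dimerization propensity
            a2 = k_dis * d                      # dissociation propensity
            a0 = a1 + a2
            if a0 == 0.0:
                break
            t += rng.exponential(1.0 / a0)      # time to the next reaction
            if rng.random() * a0 < a1:          # choose which reaction fires
                m, d = m - 2, d + 1
            else:
                m, d = m + 2, d - 1
            times.append(t)
            states.append((m, d))
        return np.array(times), np.array(states)

    times, states = ssa(m=100, d=0)
    print(states[-1])                           # final (monomer, dimer) counts
    ```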

  9. Simulation of the stochastic wave loads using a physical modeling approach

    DEFF Research Database (Denmark)

    Liu, W.F.; Sichani, Mahdi Teimouri; Nielsen, Søren R.K.

    2013-01-01

    In analyzing stochastic dynamic systems, analysis of the system uncertainty due to randomness in the loads plays a crucial role. Typically, time series of the stochastic loads are simulated using the traditional random phase method. This approach, combined with the fast Fourier transform algorithm, provides an efficient way of simulating realizations of the stochastic load processes. However, it requires many random variables, on the order of 1000, to be included in the load model. Unfortunately, having too many random variables in the problem creates considerable difficulties in analyzing system reliability or its uncertainty. Moreover, the applicability of the probability density evolution method to engineering problems faces critical difficulties when the system embeds too many random variables. Hence it is useful to devise a method which can produce realizations of the stochastic load processes with low...
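
    The random phase method itself is compact. The sketch below synthesizes a stationary Gaussian load process from a one-sided spectrum by summing harmonics with uniformly random phases; the spectrum is a hypothetical stand-in rather than a specific wave-load model, and the roughly 1000 harmonics match the count of random variables mentioned above:

    ```python
    # Random phase method: simulate a stationary Gaussian process from a PSD.
    import numpy as np

    rng = np.random.default_rng(5)

    def spectrum(w):
        return np.exp(-(w - 1.0) ** 2)          # hypothetical one-sided PSD

    n_harm, w_max = 1000, 5.0                   # ~1000 random variables
    dw = w_max / n_harm
    w = (np.arange(n_harm) + 0.5) * dw
    amp = np.sqrt(2.0 * spectrum(w) * dw)       # component amplitudes
    phi = rng.uniform(0.0, 2.0 * np.pi, n_harm) # the random phases

    t = np.linspace(0.0, 200.0, 4001)
    x = (amp[None, :] * np.cos(np.outer(t, w) + phi[None, :])).sum(axis=1)

    # Sample variance should approach the spectral moment  int S(w) dw.
    print(x.var(), (spectrum(w) * dw).sum())
    ```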

  10. Efficient AM Algorithms for Stochastic ML Estimation of DOA

    Directory of Open Access Journals (Sweden)

    Haihua Chen

    2016-01-01

    The estimation of direction-of-arrival (DOA) of signals is a basic and important problem in sensor array signal processing. To solve this problem, many algorithms have been proposed, among which the Stochastic Maximum Likelihood (SML) is one of the most studied because of its high DOA accuracy. However, SML estimation generally involves a multidimensional nonlinear optimization problem, so its computational complexity is rather high. This paper addresses the issue of reducing the computational complexity of SML estimation of DOA based on the Alternating Minimization (AM) algorithm. We make two contributions. First, using matrix transformations and properties of spatial projection, we propose an efficient AM (EAM) algorithm by dividing the SML criterion into two components, one depending on a single variable parameter while the other does not. Second, when the array is a uniform linear array, we obtain the irreducible form of the EAM criterion (IAM) using polynomial forms. Simulation results show that both EAM and IAM greatly reduce the computational complexity of SML estimation, with IAM the best. Another advantage of IAM is that it avoids the numerical instability problem which may occur in the AM and EAM algorithms when more than one parameter converges to an identical value.

  11. Multinomial tau-leaping method for stochastic kinetic simulations

    Science.gov (United States)

    Pettigrew, Michel F.; Resat, Haluk

    2007-02-01

    We introduce the multinomial tau-leaping (MτL) method for general reaction networks with multichannel reactant dependencies. The MτL method is an extension of the binomial tau-leaping method where efficiency is improved in several ways. First, τ-leaping steps are determined simply and efficiently using a priori information and Poisson distribution-based estimates of expectation values for reaction numbers over a tentative τ-leaping step. Second, networks are partitioned into closed groups of reactions and corresponding reactants in which no group reactant set is found in any other group. Third, product formation is factored into upper-bound estimation of the number of times a particular reaction occurs. Together, these features allow larger time steps where the numbers of reactions occurring simultaneously in a multichannel manner are estimated accurately using a multinomial distribution. Furthermore, we develop a simple procedure that places a specific upper bound on the total reaction number to ensure non-negativity of species populations over a single multiple-reaction step. Using two disparate test case problems involving cellular processes—epidermal growth factor receptor signaling and a lactose operon model—we show that the τ-leaping based methods such as the MτL algorithm can significantly reduce the number of simulation steps thus increasing the numerical efficiency over the exact stochastic simulation algorithm by orders of magnitude.
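
    As context for the multinomial refinement, the sketch below implements plain Poisson tau-leaping on the same toy dimerization model used earlier; the fixed step size and the crude non-negativity guard are simplifying assumptions that the MτL method replaces with rigorous bounds:

    ```python
    # Plain Poisson tau-leaping (not the MtL algorithm itself).
    import numpy as np

    rng = np.random.default_rng(6)

    def tau_leap(m, d, k_dim=0.002, k_dis=0.08, tau=0.1, t_end=50.0):
        t = 0.0
        while t < t_end:
            a1 = k_dim * m * (m - 1) / 2.0
            a2 = k_dis * d
            n1 = rng.poisson(a1 * tau)          # dimerizations in [t, t + tau)
            n2 = rng.poisson(a2 * tau)          # dissociations in [t, t + tau)
            # Crude guard against driving counts negative (MtL bounds this
            # rigorously with a cap on the total reaction number).
            n1 = min(n1, m // 2)
            n2 = min(n2, d)
            m += 2 * (n2 - n1)
            d += n1 - n2
            t += tau
        return m, d

    print(tau_leap(m=100, d=0))
    ```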

  12. Multinomial Tau-Leaping Method for Stochastic Kinetic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Pettigrew, Michel F.; Resat, Haluk

    2007-02-28

    We introduce the multinomial tau-leaping (MtL) method, an improved version of the binomial tau-leaping method, for general reaction networks. Improvements in efficiency are achieved in several ways. Firstly, tau-leaping steps are determined simply and efficiently using a priori information. Secondly, networks are partitioned into closed groups of reactions and corresponding reactants in which no group reactant or reaction is found in any other group. Thirdly, product formation is factored into upper-bound estimation of the number of times a particular reaction occurs. Together, these features allow for larger time steps where the numbers of reactions occurring simultaneously in a multi-channel manner are estimated accurately using a multinomial distribution. Using a wide range of test case problems of scientific and practical interest involving cellular processes, such as epidermal growth factor receptor signaling and a lactose operon model incorporating gene transcription and translation, we show that tau-leaping based methods like the MtL algorithm can significantly reduce the number of simulation steps, increasing the numerical efficiency over the exact stochastic simulation algorithm by orders of magnitude. Furthermore, the simultaneous multi-channel representation capability of the MtL algorithm makes it a candidate for FPGA implementation or for parallelization in parallel computing environments.

  13. An Exploration Algorithm for Stochastic Simulators Driven by Energy Gradients

    Directory of Open Access Journals (Sweden)

    Anastasia S. Georgiou

    2017-06-01

    In recent work, we have illustrated the construction of an exploration geometry on free energy surfaces: the adaptive computer-assisted discovery of an approximate low-dimensional manifold on which the effective dynamics of the system evolves. Constructing such an exploration geometry involves geometry-biased sampling (through both appropriately-initialized unbiased molecular dynamics and through restraining potentials) and machine learning techniques to organize the intrinsic geometry of the data resulting from the sampling (in particular, diffusion maps, possibly enhanced through an appropriate Mahalanobis-type metric). In this contribution, we detail a method for exploring the conformational space of a stochastic gradient system whose effective free energy surface depends on a smaller number of degrees of freedom than the dimension of the phase space. Our approach comprises two steps. First, we study the local geometry of the free energy landscape using diffusion maps on samples computed through stochastic dynamics. This allows us to automatically identify the relevant coarse variables. Next, we use the information garnered in the previous step to construct a new set of initial conditions for subsequent trajectories. These initial conditions are computed so as to explore the accessible conformational space more efficiently than by continuing the previous, unbiased simulations. We showcase this method on a representative test system.
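
    The diffusion-map step can be sketched in a few lines of linear algebra. Below, toy "conformations" sampled along a noisy curve are organized by a Gaussian kernel, normalized to a Markov matrix, and reduced to two coarse coordinates; the data and the bandwidth heuristic are assumptions:

    ```python
    # Minimal diffusion-map construction on toy samples.
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy "sampled conformations": a noisy one-dimensional curve in 3-D.
    s = rng.uniform(0.0, 3.0, size=300)
    X = np.c_[np.cos(s), np.sin(s), s] + 0.05 * rng.normal(size=(300, 3))

    # Gaussian kernel on pairwise squared distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    eps = np.median(d2)                   # simple bandwidth heuristic
    K = np.exp(-d2 / eps)

    # Row-normalize to a Markov matrix; its leading nontrivial eigenvectors
    # are the diffusion-map coordinates (candidate coarse variables).
    P = K / K.sum(axis=1, keepdims=True)
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    coords = evecs.real[:, order[1:3]]    # skip the trivial constant vector
    print(coords.shape)                   # (300, 2)
    ```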

  14. Constant-complexity stochastic simulation algorithm with optimal binning

    Energy Technology Data Exchange (ETDEWEB)

    Sanft, Kevin R., E-mail: kevin@kevinsanft.com [Department of Computer Science, University of North Carolina Asheville, Asheville, North Carolina 28804 (United States); Othmer, Hans G., E-mail: othmer@math.umn.edu [School of Mathematics, University of Minnesota, Minneapolis, Minnesota 55455 (United States); Digital Technology Center, University of Minnesota, Minneapolis, Minnesota 55455 (United States)

    2015-08-21

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
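
    The paper's constant-complexity selection rests on binning. The sketch below shows the related composition-rejection idea, where reactions are grouped into power-of-two propensity bins and selected by bin lookup plus rejection; note the paper itself bins event times in a table, which this simplified sketch does not reproduce:

    ```python
    # Composition-rejection reaction selection (a relative of the paper's
    # time-binning table, not the same scheme).
    import math
    import random

    random.seed(14)

    def select_reaction(propensities, rng=random):
        """Choose a reaction index with probability a_i / sum(a)."""
        # Bin j holds reactions whose propensity lies in [2^j, 2^(j+1)).
        # (A real implementation maintains the bins incrementally between
        # events; rebuilding them here keeps the sketch short.)
        bins = {}
        for idx, a in enumerate(propensities):
            if a > 0.0:
                bins.setdefault(math.floor(math.log2(a)), []).append((idx, a))
        totals = {j: sum(a for _, a in members) for j, members in bins.items()}
        # Step 1: pick a bin proportional to its propensity sum.
        r = rng.random() * sum(totals.values())
        for j, tot in totals.items():
            if r < tot:
                break
            r -= tot
        # Step 2: rejection-sample inside the bin; every member is within a
        # factor of two of the bin's upper bound, so acceptance is >= 1/2.
        upper = 2.0 ** (j + 1)
        while True:
            idx, a = rng.choice(bins[j])
            if rng.random() * upper < a:
                return idx

    counts = [0, 0, 0, 0]
    for _ in range(10000):
        counts[select_reaction([0.3, 1.2, 5.0, 0.9])] += 1
    print(counts)   # roughly proportional to the propensities
    ```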

  15. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    Science.gov (United States)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2017-04-01

    In climate simulations, the impacts of the subgrid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the subgrid variability in a computationally inexpensive manner. This study shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a nonzero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference: Williams, P.D., Howe, N.J., Gregory, J.M., Smith, R.S., and Joshi, M.M. (2016). Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, 29, 8763-8781. http://dx.doi.org/10
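
    In practice, zero-mean noise with a prescribed amplitude and decorrelation time is usually generated as an AR(1) (red-noise) process. The sketch below adds such a perturbation to a toy temperature tendency; the grid, time step, and parameter values are assumptions, not those of the study:

    ```python
    # AR(1) stochastic perturbation of a temperature tendency (toy setup).
    import numpy as np

    rng = np.random.default_rng(8)

    nx, ny = 60, 40                  # toy ocean grid
    dt = 3600.0                      # time step [s]
    tau_decorr = 30 * 86400.0        # 30-day decorrelation time (assumed)
    sigma = 1e-6                     # noise amplitude [K/s] (assumed)

    phi = np.exp(-dt / tau_decorr)   # AR(1) memory coefficient
    scale = sigma * np.sqrt(1.0 - phi**2)

    T = 15.0 * np.ones((nx, ny))     # temperature field [degC]
    eta = np.zeros((nx, ny))         # zero-mean stochastic tendency term

    for step in range(24 * 30):      # one month of hourly steps
        eta = phi * eta + scale * rng.normal(size=(nx, ny))
        deterministic_tendency = np.zeros((nx, ny))   # placeholder physics
        T += dt * (deterministic_tendency + eta)

    print(T.mean(), T.std())
    ```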

  16. Stochastic models to simulate paratuberculosis in dairy herds

    DEFF Research Database (Denmark)

    Nielsen, Søren Saxmose; Weber, M.F.; Kudahl, Anne Margrethe Braad

    2011-01-01

    Stochastic simulation models are widely accepted as a means of assessing the impact of changes in daily management and the control of different diseases, such as paratuberculosis, in dairy herds. This paper summarises and discusses the assumptions of four stochastic simulation models and their use. Although the models are somewhat different in their underlying principles and put slightly different values on the different strategies, their overall findings are similar. Therefore, simulation models may be useful in planning paratuberculosis strategies in dairy herds, although as with all models caution...

  17. MONTE CARLO SIMULATION OF MULTIFOCAL STOCHASTIC SCANNING SYSTEM

    Directory of Open Access Journals (Sweden)

    LIXIN LIU

    2014-01-01

    Multifocal multiphoton microscopy (MMM) has greatly improved the utilization of excitation light and imaging speed due to parallel multiphoton excitation of the samples and simultaneous detection of the signals, which allows it to perform three-dimensional fast fluorescence imaging. Stochastic scanning can provide continuous, uniform and high-speed excitation of the sample, which makes it a suitable scanning scheme for MMM. In this paper, the graphical programming language LabVIEW is used to achieve stochastic scanning of the two-dimensional galvo scanners by using white noise signals to control the x and y mirrors independently. Moreover, the stochastic scanning process is simulated using the Monte Carlo method. Our results show that MMM can avoid oversampling or subsampling in the scanning area and meet the requirements of uniform sampling by stochastically scanning the individual units of the N × N foci array. Therefore, continuous and uniform scanning in the whole field of view is implemented.

  18. Stochastic simulation of off-shore oil terminal systems

    International Nuclear Information System (INIS)

    Frankel, E.G.; Oberle, J.

    1991-01-01

    To cope with the problem of uncertainty and conditionality in the planning, design, and operation of offshore oil transshipment terminal systems, a conditional stochastic simulation approach is presented. Examples are shown using SLAM II, a computer simulation language based on GERT, a conditional stochastic network analysis methodology in which the use of resources such as time and money is expressed by the moment generating function of the statistics of the resource requirements. Similarly, each activity has an associated conditional probability of being performed and/or of requiring some of the resources. The terminal system is realistically represented by modelling the statistics of arrivals, loading and unloading times, uncertainties in costs and availabilities, etc.

  19. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. [Figure: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low-resolution forecast model.]

  20. Economic Risk Analysis of Agricultural Tillage Systems Using the SMART Stochastic Efficiency Software Package

    Science.gov (United States)

    Recently, a variant of stochastic dominance called stochastic efficiency with respect to a function (SERF) has been developed and applied. Unlike traditional stochastic dominance approaches, SERF uses the concept of certainty equivalents (CEs) to rank a set of risk-efficient alternatives instead of...

  1. DEA-Risk Efficiency and Stochastic Dominance Efficiency of Stock Indices

    Czech Academy of Sciences Publication Activity Database

    Branda, M.; Kopa, Miloš

    2012-01-01

    Roč. 62, č. 2 (2012), s. 106-124 ISSN 0015-1920 R&D Projects: GA ČR GAP402/10/1610 Grant - others:GA ČR(CZ) GAP402/12/0558 Program:GA Institutional research plan: CEZ:AV0Z10750506 Keywords : Data Envelopment Analysis * Risk measures * Index efficiency * Stochastic dominance Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.340, year: 2012 http://library.utia.cas.cz/separaty/2012/E/branda-dea-risk efficiency and stochastic dominance efficiency of stock indices.pdf

  2. Improved operating strategies for uranium extraction: a stochastic simulation

    International Nuclear Information System (INIS)

    Broekman, B.R.

    1986-01-01

    Deterministic and stochastic simulations of a Western Transvaal uranium process are used in this research report to determine more profitable uranium plant operating strategies and to gauge the potential financial benefits of automatic process control. The deterministic simulation model was formulated using empirical and phenomenological process models. The model indicated that profitability increases significantly as the uranium leaching strategy becomes harsher. The stochastic simulation models use process variable distributions corresponding to manually and automatically controlled conditions to investigate the economic gains that may be obtained if a change is made from manual to automatic control of two important process variables. These lognormally distributed variables are the Pachuca 1 sulphuric acid concentration and the ferric to ferrous ratio. The stochastic simulations show that automatic process control is justifiable in certain cases. Where the leaching strategy is relatively harsh, such as that in operation during January 1986, it is not possible to justify an automatic control system. Automatic control is, however, justifiable if a relatively mild leaching strategy is adopted. The stochastic and deterministic simulations represent two different approaches to uranium process modelling. This study has indicated the necessity for each approach to be applied in the correct context. It is contended that incorrect conclusions may have been drawn by other investigators in South Africa who failed to consider the two approaches separately.

  3. Simulating biological processes: stochastic physics from whole cells to colonies

    Science.gov (United States)

    Earnest, Tyler M.; Cole, John A.; Luthey-Schulten, Zaida

    2018-05-01

    The last few decades have revealed the living cell to be a crowded spatially heterogeneous space teeming with biomolecules whose concentrations and activities are governed by intrinsically random forces. It is from this randomness, however, that a vast array of precisely timed and intricately coordinated biological functions emerge that give rise to the complex forms and behaviors we see in the biosphere around us. This seemingly paradoxical nature of life has drawn the interest of an increasing number of physicists, and recent years have seen stochastic modeling grow into a major subdiscipline within biological physics. Here we review some of the major advances that have shaped our understanding of stochasticity in biology. We begin with some historical context, outlining a string of important experimental results that motivated the development of stochastic modeling. We then embark upon a fairly rigorous treatment of the simulation methods that are currently available for the treatment of stochastic biological models, with an eye toward comparing and contrasting their realms of applicability, and the care that must be taken when parameterizing them. Following that, we describe how stochasticity impacts several key biological functions, including transcription, translation, ribosome biogenesis, chromosome replication, and metabolism, before considering how the functions may be coupled into a comprehensive model of a ‘minimal cell’. Finally, we close with our expectation for the future of the field, focusing on how mesoscopic stochastic methods may be augmented with atomic-scale molecular modeling approaches in order to understand life across a range of length and time scales.

  4. Analysing initial attack on wildland fires using stochastic simulation.

    Science.gov (United States)

    Jeremy S. Fried; J. Keith Gilless; James. Spero

    2006-01-01

    Stochastic simulation models of initial attack on wildland fire can be designed to reflect the complexity of the environmental, administrative, and institutional context in which wildland fire protection agencies operate, but such complexity may come at the cost of a considerable investment in data acquisition and management. This cost may be well justified when it...

  5. Stochastic Simulation Using @ Risk for Dairy Business Investment Decisions

    Science.gov (United States)

    A dynamic, stochastic, mechanistic simulation model of a dairy business was developed to evaluate the cost and benefit streams coinciding with technology investments. The model was constructed to embody the biological and economical complexities of a dairy farm system within a partial budgeting framework.

  6. Stochastic simulation using @Risk for dairy business investment decisions

    NARCIS (Netherlands)

    Bewley, J.D.; Boehlje, M.D.; Gray, A.W.; Hogeveen, H.; Kenyon, S.J.; Eicher, S.D.; Schutz, M.M.

    2010-01-01

    Purpose – The purpose of this paper is to develop a dynamic, stochastic, mechanistic simulation model of a dairy business to evaluate the cost and benefit streams coinciding with technology investments. The model was constructed to embody the biological and economical complexities of a dairy farm

  7. Powering stochastic reliability models by discrete event simulation

    DEFF Research Database (Denmark)

    Kozine, Igor; Wang, Xiaoyun

    2012-01-01

    ... it difficult to find a solution to the problem. The power of modern computers and recent developments in discrete-event simulation (DES) software make it possible to diminish some of the drawbacks of stochastic models. In this paper we describe the insights we have gained based on using both Markov and DES models...

  8. An efficient computational method for solving nonlinear stochastic Itô integral equations: Application for stochastic problems in physics

    Energy Technology Data Exchange (ETDEWEB)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir [Faculty of Mathematics, Yazd University, Yazd (Iran, Islamic Republic of); The Laboratory of Quantum Information Processing, Yazd University, Yazd (Iran, Islamic Republic of); Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir [Faculty of Mathematics, Yazd University, Yazd (Iran, Islamic Republic of); The Laboratory of Quantum Information Processing, Yazd University, Yazd (Iran, Islamic Republic of); Cattani, C., E-mail: ccattani@unisa.it [Department of Mathematics, University of Salerno, Via Ponte Don Melillo, 84084 Fisciano (Italy); Maalek Ghaini, F.M., E-mail: maalek@yazd.ac.ir [Faculty of Mathematics, Yazd University, Yazd (Iran, Islamic Republic of); The Laboratory of Quantum Information Processing, Yazd University, Yazd (Iran, Islamic Republic of)

    2015-02-15

    Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô-integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and also the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of the stochastic population growth models and stochastic pendulum problem.
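
    For contrast with the operational-matrix approach, the sketch below applies a generic Euler-Maruyama discretisation to a stochastic population growth model (logistic drift with multiplicative noise). The equation and parameters are illustrative assumptions, not the authors' test problems or scheme:

    ```python
    # Euler-Maruyama for dX = r X (1 - X/K) dt + sigma X dW (toy parameters).
    import numpy as np

    rng = np.random.default_rng(9)
    r, K, sigma = 1.0, 100.0, 0.2
    dt, n_steps, n_paths = 1e-3, 5000, 1000

    X = np.full(n_paths, 10.0)
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
        X += r * X * (1.0 - X / K) * dt + sigma * X * dW
        X = np.maximum(X, 0.0)        # keep populations nonnegative

    print(X.mean(), X.std())          # ensemble statistics at t = 5
    ```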

  9. Particle simulation in stochastic magnetic fields at tokamak edge

    Science.gov (United States)

    Chang, C. C.; Nishimura, Y.; Cheng, C. Z.

    2013-10-01

    An orbit following simulation code is developed incorporating magnetic perturbation. While magnetic field lines can exhibit stochastic behavior in the presence of incommensurate magnetic perturbations, the particle motions are also influenced by the mirror force and the perturbed electric fields. Remnants of lowest order magnetic islands can also play an important role in regulating the particle and heat transport. Effective perpendicular transport can be enhanced in the presence of trapped particles; how the mirror force influences the transport in stochastic magnetic fields is examined. This work is supported by National Science Council of Taiwan, NSC 100-2112-M-006-021-MY3 and NCKU Top University Project.

  10. Stochastic search in structural optimization - Genetic algorithms and simulated annealing

    Science.gov (United States)

    Hajela, Prabhat

    1993-01-01

    An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic-search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
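
    A minimal simulated annealing loop makes the mechanics concrete: downhill moves are always accepted, uphill moves with a Boltzmann probability that shrinks as the temperature cools, which is what gives the method its chance of escaping local optima in a multimodal design space. The objective, move size and cooling schedule below are arbitrary assumptions:

    ```python
    # Minimal simulated annealing on a multimodal 1-D test function.
    import math
    import random

    random.seed(10)

    def objective(x):
        return x * x + 10.0 * math.sin(3.0 * x)   # multimodal landscape

    x = 4.0
    fx = objective(x)
    T = 5.0
    while T > 1e-3:
        x_new = x + random.gauss(0.0, 0.5)         # random design perturbation
        f_new = objective(x_new)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if f_new < fx or random.random() < math.exp(-(f_new - fx) / T):
            x, fx = x_new, f_new
        T *= 0.999                                 # geometric cooling schedule

    print(x, fx)
    ```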

  11. An efficient distribution method for nonlinear transport problems in highly heterogeneous stochastic porous media

    Science.gov (United States)

    Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi

    2016-04-01

    Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational cost (a few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction...

  12. Stochastic assessment of investment efficiency in a power system

    International Nuclear Information System (INIS)

    Davidov, Sreten; Pantoš, Miloš

    2017-01-01

    The assessment of investment efficiency plays a critical role in investment prioritization in the context of electrical network expansion planning. Hence, this paper proposes new criteria for cost-efficient investment, applied in the investment ranking process in electrical network planning and based on the assessment of the impact of new investment candidates on active-power losses, bus voltages and line loadings in the network. These three general criteria are chosen for their strong economic influence, in the case of active-power losses and line loadings, and for their significant impact on quality of supply, in the case of the voltage profile. Electrical network reliability of supply is not addressed, since this criterion has already been extensively applied in other solutions for investment efficiency assessment. The proposed ranking procedure involves a stochastic approach applying the Monte Carlo method in the scenario preparation. The number of scenarios is further reduced by the K-MEANS procedure in order to speed up the investment efficiency assessment. The proposed ranking procedure is tested using the standard New England test system. The results show that, based on the newly introduced investment assessment criteria indices, system operators will obtain a prioritized list of investments that will prevent excessive and economically wasteful spending. - Highlights: • Active-Power Loss Investment Efficiency Index LEI. • Voltage Profile Investment Efficiency Index VEI. • Active-Power Flow Loading Mitigation Investment Efficiency Index PEI. • Optimization model for network expansion planning with new indices.

  13. Stochastic Robotic Simulation Tool, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Energid Technologies proposes a game-theory inspired simulation tool for testing and validating robotic lunar and planetary missions. It applies Monte Carlo...

  14. Adaptive Finite Element Method Assisted by Stochastic Simulation of Chemical Systems

    KAUST Repository

    Cotter, Simon L.

    2013-01-01

    Stochastic models of chemical systems are often analyzed by solving the corresponding Fokker-Planck equation, which is a drift-diffusion partial differential equation for the probability distribution function. Efficient numerical solution of the Fokker-Planck equation requires adaptive mesh refinements. In this paper, we present a mesh refinement approach which makes use of a stochastic simulation of the underlying chemical system. By observing the stochastic trajectory for a relatively short amount of time, the areas of the state space with nonnegligible probability density are identified. By refining the finite element mesh in these areas, and coarsening elsewhere, a suitable mesh is constructed and used for the computation of the stationary probability density. Numerical examples demonstrate that the presented method is competitive with existing a posteriori methods. © 2013 Society for Industrial and Applied Mathematics.

  15. Overview of the TurbSim Stochastic Inflow Turbulence Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Kelley, N. D.; Jonkman, B. J.

    2005-09-01

    The TurbSim stochastic inflow turbulence code was developed to provide a numerical simulation of a full-field flow that contains coherent turbulence structures that reflect the proper spatiotemporal turbulent velocity field relationships seen in instabilities associated with nocturnal boundary layer flows that are not represented well by the IEC Normal Turbulence Models (NTM). Its purpose is to provide the wind turbine designer with the ability to drive design code (FAST or MSC.ADAMS) simulations of advanced turbine designs with simulated inflow turbulence environments that incorporate many of the important fluid dynamic features known to adversely affect turbine aeroelastic response and loading.

  16. HYDRASTAR - a code for stochastic simulation of groundwater flow

    International Nuclear Information System (INIS)

    Norman, S.

    1992-05-01

    The computer code HYDRASTAR was developed as a tool for groundwater flow and transport simulations in the SKB 91 safety analysis project. Its conceptual ideas can be traced back to a 1988 report by Shlomo Neuman. The main idea of the code is the treatment of the rock as a stochastic continuum, which separates it from the deterministic methods previously employed by SKB and also from the discrete fracture models. The current report is a comprehensive description of HYDRASTAR, including such topics as regularization or upscaling of a hydraulic conductivity field, unconditional and conditional simulation of stochastic processes, numerical solvers for the hydrology and streamline equations, and finally some proposals for future developments.

  17. An adaptive algorithm for simulation of stochastic reaction-diffusion processes

    International Nuclear Information System (INIS)

    Ferm, Lars; Hellander, Andreas; Loetstedt, Per

    2010-01-01

    We propose an adaptive hybrid method suitable for stochastic simulation of diffusion dominated reaction-diffusion processes. For such systems, simulation of the diffusion requires the predominant part of the computing time. In order to reduce the computational work, the diffusion in parts of the domain is treated macroscopically, in other parts with the tau-leap method and in the remaining parts with Gillespie's stochastic simulation algorithm (SSA) as implemented in the next subvolume method (NSM). The chemical reactions are handled by SSA everywhere in the computational domain. A trajectory of the process is advanced in time by an operator splitting technique and the timesteps are chosen adaptively. The spatial adaptation is based on estimates of the errors in the tau-leap method and the macroscopic diffusion. The accuracy and efficiency of the method are demonstrated in examples from molecular biology where the domain is discretized by unstructured meshes.

  18. MOSES: A Matlab-based open-source stochastic epidemic simulator.

    Science.gov (United States)

    Varol, Huseyin Atakan

    2016-08-01

    This paper presents an open-source stochastic epidemic simulator. The discrete-time Markov chain based simulator is implemented in MATLAB. The simulator, which implements the SEQIJR (susceptible, exposed, quarantined, infected, isolated and recovered) model, can be reduced to simpler models by setting some of the parameters (transition probabilities) to zero. Similarly, it can be extended to more complicated models by editing the source code. It is designed to be used for testing different control algorithms to contain epidemics. The simulator is also designed to be compatible with a network based epidemic simulator and can be used in the network based scheme for the simulation of a node. Simulations show the capability of reproducing different epidemic model behaviors successfully in a computationally efficient manner.
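
    The sketch below illustrates the same discrete-time Markov chain idea on the reduced SIR special case (MOSES itself is MATLAB; Python is used here for consistency with the other examples, and the parameter values are assumptions):

    ```python
    # Discrete-time stochastic (chain-binomial) SIR epidemic.
    import numpy as np

    rng = np.random.default_rng(11)

    N, I, R = 1000, 5, 0
    S = N - I - R
    beta, gamma = 0.3, 0.1            # per-step infection/recovery parameters

    history = [(S, I, R)]
    while I > 0:
        p_inf = 1.0 - np.exp(-beta * I / N)     # per-susceptible infection prob.
        new_inf = rng.binomial(S, p_inf)
        new_rec = rng.binomial(I, gamma)
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append((S, I, R))

    print(f"epidemic ended after {len(history) - 1} steps; final size {R}")
    ```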

  19. GEMFsim: A Stochastic Simulator for the Generalized Epidemic Modeling Framework

    OpenAIRE

    Sahneh, Faryad Darabi; Vajdi, Aram; Shakeri, Heman; Fan, Futing; Scoglio, Caterina

    2016-01-01

    The recently proposed generalized epidemic modeling framework (GEMF) (Sahneh et al., 2013) lays the groundwork for systematically constructing a broad spectrum of stochastic spreading processes over complex networks. This article builds an algorithm for exact, continuous-time numerical simulation of GEMF-based processes. Moreover the implementation of this algorithm, GEMFsim, is available in popular scientific programming platforms such as MATLAB, R, Python, and C; GEMFsim facilitates ...

  20. Stochastic simulations of calcium contents in sugarcane area

    Directory of Open Access Journals (Sweden)

    Gener T. Pereira

    2015-08-01

    Full Text Available ABSTRACTThe aim of this study was to quantify and to map the spatial distribution and uncertainty of soil calcium (Ca content in a sugarcane area by sequential Gaussian and simulated-annealing simulation methods. The study was conducted in the municipality of Guariba, northeast of the São Paulo state. A sampling grid with 206 points separated by a distance of 50 m was established, totaling approximately 42 ha. The calcium contents were evaluated in layer of 0-0.20 m. Techniques of geostatistical estimation, ordinary kriging and stochastic simulations were used. The technique of ordinary kriging does not reproduce satisfactorily the global statistics of the Ca contents. The use of simulation techniques allows reproducing the spatial variability pattern of Ca contents. The techniques of sequential Gaussian simulation and simulated annealing showed significant variations in the contents of Ca in the small scale.

  1. Technical Efficiency of Thai Manufacturing SMEs: A Stochastic Frontier Analysis

    Directory of Open Access Journals (Sweden)

    Teerawat Charoenrat

    2013-03-01

    Full Text Available AbstractA major motivation of this study is to examine the factors that are the most important in contributing to the relatively poor efficiency performance of Thai manufacturing small and medium sized enterprises (SMEs. The results obtained will be significant in devising effective policies aimed at tackling this poor performance.This paper uses data on manufacturing SMEs in the North-eastern region of Thailand in 2007 as a case study, by applying a stochastic frontier analysis (SFA and a technical inefficiency effects model. The empirical results obtained indicate that the mean technical efficiency of all categories of manufacturing SMEs in theNorth-eastern region is 43%, implying that manufacturing SMEs have high levels of technical inefficiency in their production processes.Manufacturing SMEs in the North-eastern region are particularly labour-intensive. The empirical results of the technical inefficiency effects model suggest that skilled labour, the municipal area and ownership characteristics are important firm-specific factors affecting technical efficiency. The paper argues that the government should play a more substantial role in developing manufacturing SMEs in the North-eastern provinces through: providing training programs for employees and employers; encouraging a greater usage of capital and technology in the production process of SMEs; enhancing the efficiency of state-ownedenterprises; encouraging a wide range of ownership forms; and improving information and communications infrastructure.

  2. StochPy: a comprehensive, user-friendly tool for simulating stochastic biological processes.

    Directory of Open Access Journals (Sweden)

    Timo R Maarleveld

    Single-cell and single-molecule measurements indicate the importance of stochastic phenomena in cell biology. Stochasticity creates spontaneous differences in the copy numbers of key macromolecules and the timing of reaction events between genetically-identical cells. Mathematical models are indispensable for the study of phenotypic stochasticity in cellular decision-making and cell survival. There is a demand for versatile, stochastic modeling environments with extensive, preprogrammed statistics functions and plotting capabilities that hide the mathematics from novice users and offer low-level programming access to the experienced user. Here we present StochPy (Stochastic modeling in Python), which is a flexible software tool for stochastic simulation in cell biology. It provides various stochastic simulation algorithms, SBML support, analyses of the probability distributions of molecule copy numbers and event waiting times, analyses of stochastic time series, and a range of additional statistical functions and plotting facilities for stochastic simulations. We illustrate the functionality of StochPy with stochastic models of gene expression, cell division, and single-molecule enzyme kinetics. StochPy has been successfully tested against the SBML stochastic test suite, passing all tests. StochPy is a comprehensive software package for stochastic simulation of the molecular control networks of living cells. It allows novice and experienced users to study stochastic phenomena in cell biology. The integration with other Python software makes StochPy both a user-friendly and easily extendible simulation tool.

  3. Stochastic simulation of regional groundwater flow in Beishan area

    International Nuclear Information System (INIS)

    Dong Yanhui; Li Guomin

    2010-01-01

    Because of the hydrogeological complexity, a traditional treatment of aquifer characteristics is not appropriate for the groundwater system in the Beishan area. Uncertainty analysis of groundwater models is needed to examine the hydrologic effects of spatial heterogeneity. In this study, the fast Fourier transform spectral method (FFTS) was used to generate the random horizontal permeability parameters. Depth decay and vertical anisotropy of hydraulic conductivity were included to build random permeability models. Based on high-performance computers, hundreds of groundwater flow models were simulated. Through stochastic simulations, the effect of heterogeneity on the groundwater flow pattern was analyzed. (authors)
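
    A minimal FFT spectral generator conveys the FFTS idea: filter white noise in Fourier space by a prescribed amplitude spectrum and transform back to obtain a correlated random field. The power-law spectrum below is an assumption, and the study's depth decay and vertical anisotropy are omitted:

    ```python
    # FFT spectral-method generator for a 2-D Gaussian random field.
    import numpy as np

    rng = np.random.default_rng(12)
    n = 256

    # Radial wavenumber grid.
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0                          # avoid division by zero at the mean

    amplitude = k ** -1.5                  # assumed power-law spectrum
    amplitude[0, 0] = 0.0                  # zero-mean field

    # Filter white noise in Fourier space, then transform back.
    noise = rng.normal(size=(n, n))
    field = np.fft.ifft2(np.fft.fft2(noise) * amplitude).real

    log_K = (field - field.mean()) / field.std()   # standardized log-permeability
    print(log_K.shape, log_K.std())
    ```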

  4. Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.

    Science.gov (United States)

    Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O

    2006-03-01

    The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables. This is to generate counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.

  5. Hybrid framework for the simulation of stochastic chemical kinetics

    Science.gov (United States)

    Duncan, Andrew; Erban, Radek; Zygalakis, Konstantinos

    2016-12-01

    Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA) [25]. While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the "fast" reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For reactions involving low copy numbers the underlying behaviour is purely discrete, while it is purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretisations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.

  6. Coarse-graining stochastic biochemical networks: adiabaticity and fast simulations

    Energy Technology Data Exchange (ETDEWEB)

    Nemenman, Ilya [Los Alamos National Laboratory; Sinitsyn, Nikolai [Los Alamos National Laboratory; Hengartner, Nick [Los Alamos National Laboratory

    2008-01-01

    We propose a universal approach for analysis and fast simulations of stiff stochastic biochemical kinetics networks, which rests on elimination of fast chemical species without a loss of information about mesoscopic, non-Poissonian fluctuations of the slow ones. Our approach, which is similar to the Born-Oppenheimer approximation in quantum mechanics, follows from the stochastic path integral representation of the cumulant generating function of reaction events. In applications with a small number of chemical reactions, it produces analytical expressions for cumulants of chemical fluxes between the slow variables. This allows for a low-dimensional, interpretable representation and can be used for coarse-grained numerical simulation schemes with a small computational complexity and yet high accuracy. As an example, we derive the coarse-grained description for a chain of biochemical reactions, and show that the coarse-grained and the microscopic simulations are in agreement, but the coarse-grained simulations are three orders of magnitude faster.

  7. Dimension reduction of Karhunen-Loeve expansion for simulation of stochastic processes

    Science.gov (United States)

    Liu, Zhangjun; Liu, Zixin; Peng, Yongbo

    2017-11-01

    Conventional Karhunen-Loeve expansions for simulation of stochastic processes often encounter the challenge of dealing with hundreds of random variables. To break through this barrier, a random function embedded Karhunen-Loeve expansion method is proposed in this paper. The updated scheme has a similar form to the conventional Karhunen-Loeve expansion, both involving a summation of a series of deterministic orthonormal basis functions and uncorrelated random variables. The difference is that the updated scheme achieves dimension reduction of the Karhunen-Loeve expansion by introducing random functions as a conditional constraint upon the uncorrelated random variables. The random function is expressed as a single-elementary-random-variable orthogonal function in polynomial format (non-Gaussian variables) or trigonometric format (non-Gaussian and Gaussian variables). For illustrative purposes, the simulation of seismic ground motion is carried out using the updated scheme. Numerical investigations reveal that the Karhunen-Loeve expansion with random functions gains desirable simulation results with a moderate sample number, except for the Hermite and Laguerre polynomials. It has sound applicability and efficiency in simulation of stochastic processes. Besides, the updated scheme has the benefit of integrating with the probability density evolution method, ready for the stochastic analysis of nonlinear structures.
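
    For reference, the sketch below simulates a process with the conventional truncated Karhunen-Loeve expansion that the paper seeks to compress (the random-function constraint itself is not reproduced); the exponential covariance and truncation order are assumptions:

    ```python
    # Conventional truncated Karhunen-Loeve simulation on a grid.
    import numpy as np

    rng = np.random.default_rng(13)

    n, corr_len, n_terms = 200, 0.2, 20
    t = np.linspace(0.0, 1.0, n)
    C = np.exp(-np.abs(t[:, None] - t[None, :]) / corr_len)   # covariance matrix

    # Discrete KL basis: eigenpairs of the covariance matrix.
    evals, evecs = np.linalg.eigh(C)
    evals, evecs = evals[::-1], evecs[:, ::-1]                # descending order

    # Truncated expansion: X(t) = sum_k sqrt(lambda_k) xi_k phi_k(t).
    xi = rng.normal(size=n_terms)
    X = evecs[:, :n_terms] @ (np.sqrt(evals[:n_terms]) * xi)

    captured = evals[:n_terms].sum() / evals.sum()
    print(f"{n_terms} terms capture {captured:.1%} of the variance")
    ```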

  8. An efficient forward-reverse expectation-maximization algorithm for statistical inference in stochastic reaction networks

    KAUST Repository

    Vilanova, Pedro

    2016-01-07

    In this work, we present an extension of the forward-reverse representation introduced in "Simulation of forward-reverse stochastic representations for conditional diffusions", a 2014 paper by Bayer and Schoenmakers, to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, i.e., SRNs conditional on their values in the extremes of given time-intervals. We then employ this SRN bridge-generation technique in the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equation approximation; then, during phase II, we apply the Monte Carlo version of the Expectation-Maximization algorithm to the phase I output. By selecting a set of over-dispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.

  9. Stochastic simulation of ecohydrological interactions between vegetation and groundwater

    Science.gov (United States)

    Dwelle, M. C.; Ivanov, V. Y.; Sargsyan, K.

    2017-12-01

    The complex interactions between groundwater and vegetation in the Amazon rainforest may yield vital ecophysiological interactions in specific landscape niches, such as buffering plant water stress during the dry season or suppressing water uptake under anoxic conditions. Representation of such processes is greatly impacted by both external and internal sources of uncertainty: inaccurate data and the subjective choice of model representation. The models that can simulate these processes are complex and computationally expensive, which makes it difficult to address uncertainty using traditional methods. We use the ecohydrologic model tRIBS+VEGGIE and a novel uncertainty quantification framework applied to the ZF2 watershed near Manaus, Brazil. We showcase the capability of this framework for stochastic simulation of vegetation-hydrology dynamics. The framework accommodates both internal and external stochasticity, but this work focuses on internal variability of the groundwater depth distribution and model parameterizations. We demonstrate the capability of this framework to make inferences on uncertain states of groundwater depth from limited in situ data, and show how realizations of these inferences affect the ecohydrological interactions between groundwater dynamics and vegetation function. We place an emphasis on the probabilistic representation of quantities of interest and how this impacts the understanding and interpretation of the dynamics at the groundwater-vegetation interface.

  10. Stochastic Boolean networks: An efficient approach to modeling gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Liang Jinghang

    2012-08-01

    Abstract Background Various computational models have been of interest due to their use in the modelling of gene regulatory networks (GRNs). As a logical model, probabilistic Boolean networks (PBNs) consider molecular and genetic noise, so the study of PBNs provides significant insights into the understanding of the dynamics of GRNs. This will ultimately lead to advances in developing therapeutic methods that intervene in the process of disease development and progression. The applications of PBNs, however, are hindered by the complexities involved in the computation of the state transition matrix and the steady-state distribution of a PBN. For a PBN with n genes and N Boolean networks, the complexity of computing the state transition matrix is O(nN^2 2^n), or O(nN 2^n) for a sparse matrix. Results This paper presents a novel implementation of PBNs based on the notions of stochastic logic and stochastic computation. This stochastic implementation of a PBN is referred to as a stochastic Boolean network (SBN). An SBN provides an accurate and efficient simulation of a PBN with and without random gene perturbation. The state transition matrix is computed in an SBN with a complexity of O(nL 2^n), where L is a factor related to the stochastic sequence length. Since the minimum sequence length required for a given evaluation accuracy increases approximately polynomially with the number of genes, n, while the number of Boolean networks, N, usually increases exponentially with n, L is typically smaller than N, especially in a network with a large number of genes. Hence, the computational efficiency of an SBN is primarily limited by the number of genes, rather than directly by the total possible number of Boolean networks. Furthermore, a time-frame expanded SBN enables an efficient analysis of the steady-state distribution of a PBN. These findings are supported by the simulation results of a simplified p53 network, several randomly generated networks and a
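
    The stochastic-computation idea underlying an SBN can be illustrated with bit streams: probabilities are encoded as random binary sequences of length L and logic gates act bitwise. The toy sketch below (independent inputs, a single AND gate) is only an illustration of that encoding, not the paper's SBN construction.

    ```python
    # Stochastic logic in miniature: a probability p is encoded as a random
    # bit stream whose fraction of ones is p; applying a gate bitwise
    # estimates the output probability (P(A AND B) = pa*pb for independent
    # inputs). L is the stochastic sequence length.
    import numpy as np

    rng = np.random.default_rng(9)
    L = 10000                      # stochastic sequence length
    pa, pb = 0.8, 0.6
    a = rng.random(L) < pa         # bit stream encoding P(A=1) = 0.8
    b = rng.random(L) < pb         # bit stream encoding P(B=1) = 0.6
    print((a & b).mean())          # ~ 0.48, accuracy improves with L
    ```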

  11. Methodology for Measurement the Energy Efficiency Involving Solar Heating Systems Using Stochastic Modelling

    Directory of Open Access Journals (Sweden)

    Bruno G. Menita

    2017-01-01

    The purpose of the present study is to evaluate gains, through a measurement and verification methodology adapted from the International Performance Measurement and Verification Protocol, in case studies involving Energy Efficiency Projects in Goias State, Brazil. This paper also presents stochastic modelling for the generation of future scenarios of the electricity savings resulting from these Energy Efficiency Projects. The model is developed using the Geometric Brownian Motion stochastic process with mean reversion, associated with the Monte Carlo simulation technique. Results show that the electricity saved by replacing electric showers with solar water heating systems in the homes of low-income families has great potential to bring financial benefits to such families, and that the reduction in peak demand obtained from this Energy Efficiency Action is advantageous to the Brazilian electrical system. Results also contemplate the future scenarios of electricity savings and a sensitivity analysis verifying how the values of some parameters influence the results, since no historical data are available for obtaining these values.
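
    A minimal sketch of the scenario-generation step, assuming a Schwartz-type geometric Brownian motion with mean reversion simulated on the log scale; all parameters (eta, mu, sigma, x0) are placeholders, not the paper's estimates.

    ```python
    # Monte Carlo scenarios from a mean-reverting GBM: an Ornstein-Uhlenbeck
    # step on the log of the monthly electricity savings. Illustrative only.
    import numpy as np

    def mean_reverting_scenarios(x0=100.0, mu=np.log(100.0), eta=0.5,
                                 sigma=0.2, dt=1/12, n_steps=60,
                                 n_scen=1000, seed=42):
        rng = np.random.default_rng(seed)
        x = np.full(n_scen, np.log(x0))              # log of kWh saved
        paths = [np.exp(x)]
        for _ in range(n_steps):
            dw = rng.standard_normal(n_scen) * np.sqrt(dt)
            x = x + eta * (mu - x) * dt + sigma * dw # OU step on log scale
            paths.append(np.exp(x))
        return np.array(paths)                       # (n_steps+1, n_scen)

    scen = mean_reverting_scenarios()
    print(scen[-1].mean(), np.percentile(scen[-1], [5, 95]))
    ```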

  12. Stochastic efficiency analysis of bovine tuberculosis-surveillance programs in the Netherlands

    NARCIS (Netherlands)

    Asseldonk, van M.A.P.M.; Roermund, van H.J.W.; Fischer, E.A.J.; Jong, de M.C.M.; Huirne, R.B.M.

    2005-01-01

    We constructed a stochastic bio-economic model to determine the optimal cost-efficient surveillance program for bovine tuberculosis. The surveillance programs differed in combinations of one or more detection methods and/or sampling frequency. Stochastic input variables in the epidemiological module

  13. Incorporating extrinsic noise into the stochastic simulation of biochemical reactions: A comparison of approaches

    Science.gov (United States)

    Thanh, Vo Hong; Marchetti, Luca; Reali, Federico; Priami, Corrado

    2018-02-01

    The stochastic simulation algorithm (SSA) has been widely used for simulating biochemical reaction networks. SSA is able to capture the intrinsic noise inherent to the biological system, which is due to the discreteness of species populations and to the randomness of their reciprocal interactions. However, SSA does not consider other sources of heterogeneity in biochemical reaction systems, which are referred to as extrinsic noise. Here, we extend two simulation approaches, namely the integration-based method and the rejection-based method, to take extrinsic noise into account by allowing the reaction propensities to vary in a time- and state-dependent manner. For both methods, new efficient implementations are introduced and their efficiency and applicability to biological models are investigated. Our numerical results suggest that the rejection-based method performs better than the integration-based method when extrinsic noise is considered.
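
    The rejection idea for time-varying propensities can be sketched with thinning: candidate firings are drawn at an upper-bound rate and accepted with probability a(t)/a_max. Below, a single birth reaction with a sinusoidally modulated (extrinsic) rate; the modulation and all parameters are illustrative, not the paper's benchmark models.

    ```python
    # Thinning for a time-varying propensity a(t) = k0*(1 + A*sin(w*t)):
    # draw candidates at the bound a_max, accept each with prob a(t)/a_max.
    import math, random

    def thinned_birth(t_end=10.0, k0=5.0, A=0.5, w=2.0, seed=3):
        random.seed(seed)
        a_max = k0 * (1 + A)                  # upper bound on the propensity
        t, n = 0.0, 0
        while True:
            t += random.expovariate(a_max)        # candidate firing time
            if t > t_end:
                return n
            a_t = k0 * (1 + A * math.sin(w * t))  # true propensity at t
            if random.random() < a_t / a_max:     # accept with prob a(t)/a_max
                n += 1

    print(thinned_birth())
    ```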

  14. Pareto Optimal Solutions for Stochastic Dynamic Programming Problems via Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    R. T. N. Cardoso

    2013-01-01

    A heuristic algorithm is proposed for a class of stochastic discrete-time continuous-variable dynamic programming problems subject to non-Gaussian disturbances. Instead of using the expected values of the objective function, the random nature of the decision variables is retained throughout the process, while Pareto fronts weighted by all quantiles of the objective function are determined. Thus, decision makers are able to choose any quantile they wish. This new idea is carried out by using Monte Carlo simulations embedded in an approximate algorithm proposed for deterministic dynamic programming problems. The new method is tested on instances of the classical inventory control problem. The results obtained attest to the efficiency and efficacy of the algorithm in solving these important stochastic optimization problems.

  15. Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks

    Science.gov (United States)

    Rathinam, Muruhan; Sheppard, Patrick W.; Khammash, Mustafa

    2010-01-01

    Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10 000 are demonstrated.
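
    The CRN idea amounts to reusing one random seed for the nominal and perturbed runs so that the variance of the finite-difference estimator collapses. A minimal sketch for a pure-decay system (X -> 0 at rate c), chosen here because the exact sensitivity d E[X(1)]/dc = -100 e^{-c} is available for checking; the model and figures are illustrative, not the paper's examples.

    ```python
    # Common random numbers (CRN) for SSA sensitivity: nominal and perturbed
    # runs share seed i, so their paths are positively correlated and the
    # finite-difference estimate has far lower variance than with
    # independent seeds.
    import random

    def ssa_decay(c, x0=100, t_end=1.0, seed=0):
        rng = random.Random(seed)
        x, t = x0, 0.0
        while x > 0:
            t += rng.expovariate(c * x)   # time to next decay event
            if t > t_end:
                break
            x -= 1
        return x

    def crn_sensitivity(c=1.0, dc=1e-2, n=2000):
        est = 0.0
        for i in range(n):                # same seed i in both runs
            est += (ssa_decay(c + dc, seed=i) - ssa_decay(c, seed=i)) / dc
        return est / n

    print(crn_sensitivity())   # compare with exact -100*exp(-1) ~ -36.8
    ```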

  16. Efficient simulation of semiflexible polymers

    NARCIS (Netherlands)

    Panja, Deb; Barkema, Gerard T.; van Leeuwen, J. M. J.

    2015-01-01

    Using a recently developed bead-spring model for semiflexible polymers that takes into account their natural extensibility, we report an efficient algorithm to simulate the dynamics for polymers like double-stranded DNA (dsDNA) in the absence of hydrodynamic interactions. The dsDNA is modeled with

  17. A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2016-07-07

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named the level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable to coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
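
    For reference, the tau-leap update used for the high-activity channels advances all of them at once with Poisson firing counts over a step tau. The sketch below applies a fixed-step tau-leap to a two-channel chain with illustrative rates; the adaptive classification and the exact-method treatment of low-activity channels are omitted.

    ```python
    # Fixed-step tau-leap for A -> B -> C: each channel fires a Poisson
    # number of times with mean propensity*tau. No adaptivity or proper
    # negativity control is shown; rates and counts are illustrative.
    import numpy as np

    def tau_leap_chain(a0=10000, k1=50.0, k2=0.05, tau=0.01,
                       t_end=5.0, seed=7):
        rng = np.random.default_rng(seed)
        x = np.array([a0, 0, 0])
        nu = np.array([[-1, 1, 0], [0, -1, 1]])       # stoichiometry matrix
        for _ in range(int(t_end / tau)):
            props = np.array([k1 * x[0], k2 * x[1]])  # channel propensities
            fires = rng.poisson(props * tau)          # Poisson firing counts
            x = np.maximum(x + nu.T @ fires, 0)       # crude negativity guard
        return x

    print(tau_leap_chain())
    ```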

  18. Stochastic simulation of chemically reacting systems using multi-core processors.

    Science.gov (United States)

    Gillespie, Colin S

    2012-01-07

    In recent years, computer simulations have become increasingly useful when trying to understand the complex dynamics of biochemical networks, particularly in stochastic systems. In such situations stochastic simulation is vital in gaining an understanding of the inherent stochasticity present, as these models are rarely analytically tractable. However, a stochastic approach can be computationally prohibitive for many models. A number of approximations have been proposed that aim to speed up stochastic simulations. However, the majority of these approaches are fundamentally serial in terms of central processing unit (CPU) usage. In this paper, we propose a novel simulation algorithm that utilises the potential of multi-core machines. This algorithm partitions the model into smaller sub-models. These sub-models are then simulated, in parallel, on separate CPUs. We demonstrate that this method is accurate and can speed up the simulation by a factor proportional to the number of processors available.

  19. Developments in Stochastic Fuel Efficient Cruise Control and Constrained Control with Applications to Aircraft

    Science.gov (United States)

    McDonough, Kevin K.

    The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of this dissertation, a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler implementation strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed that include the use of Kullback-Leibler divergence and divergence rate, and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first and computational procedures of such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures. Examples of

  20. Introduction to Stochastic Simulations for Chemical and Physical Processes: Principles and Applications

    Science.gov (United States)

    Weiss, Charles J.

    2017-01-01

    An introduction to digital stochastic simulations for modeling a variety of physical and chemical processes is presented. Despite the importance of stochastic simulations in chemistry, the prevalence of turn-key software solutions can impose a layer of abstraction between the user and the underlying approach obscuring the methodology being…

  1. Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process

    Science.gov (United States)

    Turner, Douglas C.; Ladde, Gangaram S.

    2018-03-01

    Analytical solutions, discretization schemes and simulation results are presented for the time-delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended under stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by the stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models show how the behavior of the very recently developed stochastic model of Hazra et al. changes.

  2. URDME: a modular framework for stochastic simulation of reaction-transport processes in complex geometries

    Directory of Open Access Journals (Sweden)

    Drawert Brian

    2012-06-01

    Abstract Background Experiments in silico using stochastic reaction-diffusion models have emerged as an important tool in molecular systems biology. Designing computational software for such applications poses several challenges. Firstly, realistic lattice-based modeling for biological applications requires a consistent way of handling complex geometries, including curved inner and outer boundaries. Secondly, spatiotemporal stochastic simulations are computationally expensive due to the fast time scales of individual reaction and diffusion events when compared to the biological phenomena of actual interest. We therefore argue that simulation software needs to be both computationally efficient, employing sophisticated algorithms, yet at the same time flexible in order to meet present and future needs of increasingly complex biological modeling. Results We have developed URDME, a flexible software framework for general stochastic reaction-transport modeling and simulation. URDME uses Unstructured triangular and tetrahedral meshes to resolve general geometries, and relies on the Reaction-Diffusion Master Equation formalism to model the processes under study. An interface to mature external geometry and mesh handling software (Comsol Multiphysics) provides a stable and interactive environment for model construction. The core simulation routines are logically separated from the model building interface and written in a low-level language for computational efficiency. The connection to the geometry handling software is realized via a Matlab interface which facilitates script computing, data management, and post-processing. For practitioners, the software therefore behaves much as an interactive Matlab toolbox. At the same time, it is possible to modify and extend URDME with newly developed simulation routines. Since the overall design effectively hides the complexity of managing the geometry and meshes, this means that newly developed methods

  3. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report presents results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.

  4. Stochastic simulation and robust design optimization of integrated photonic filters

    Directory of Open Access Journals (Sweden)

    Weng Tsui-Wei

    2016-07-01

    Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%–35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.

  5. Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2015-01-07

    Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from a kinetic point of view, the time-evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given an SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, that with high probability produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, find an estimator, M, based on approximate sampled paths of X, such that P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η; even more, we want to achieve this objective with near-optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL^-2), the same computational complexity as in an exact method but with a smaller constant. We provide numerical examples to show our results.
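
    The telescoping multilevel idea can be sketched in the simpler SDE setting (not the talk's SSA/tau-leap coupling, which is more involved): estimate E[X(T)] for dX = -X dt + 0.5 dW with many cheap coarse Euler-Maruyama paths plus a smaller number of coupled fine-minus-coarse corrections that share Brownian increments. Everything below is an assumption-laden toy, chosen so the exact answer e^{-1} is known.

    ```python
    # Two-level Monte Carlo: E[fine] = E[coarse] + E[fine - coarse], with
    # the correction term computed on coupled paths driven by the same
    # Brownian increments (coarse step = 2 * fine step).
    import numpy as np

    rng = np.random.default_rng(0)

    def coarse_path(n=32, T=1.0, x0=1.0):
        dt = T / n
        x = x0
        for _ in range(n):
            x += -x * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()
        return x

    def fine_minus_coarse(n_fine=64, T=1.0, x0=1.0):
        dt = T / n_fine
        dw = np.sqrt(dt) * rng.standard_normal(n_fine)
        xf = x0
        for i in range(n_fine):                  # fine path, step dt
            xf += -xf * dt + 0.5 * dw[i]
        xc = x0
        for i in range(0, n_fine, 2):            # coupled coarse path, step 2*dt
            xc += -xc * 2 * dt + 0.5 * (dw[i] + dw[i + 1])
        return xf - xc

    level0 = np.mean([coarse_path() for _ in range(10000)])
    level1 = np.mean([fine_minus_coarse() for _ in range(1000)])
    print(level0 + level1)    # ~ exp(-1) = 0.368, the exact E[X(1)]
    ```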

  6. Stochastic Frontier Estimation of Efficient Learning in Video Games

    Science.gov (United States)

    Hamlen, Karla R.

    2012-01-01

    Stochastic Frontier Regression Analysis was used to investigate strategies and skills that are associated with the minimization of time required to achieve proficiency in video games among students in grades four and five. Students self-reported their video game play habits, including strategies and skills used to become good at the video games…

  7. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-09-19

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.

  8. The two-regime method for optimizing stochastic reaction-diffusion simulations

    KAUST Repository

    Flegg, M. B.

    2011-10-19

    Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches to less detailed compartment-based simulations. Compartment-based approaches yield quick and accurate mesoscopic results, but lack the level of detail that is characteristic of the computationally intensive molecular-based models. Often microscopic detail is only required in a small region (e.g. close to the cell membrane). Currently, the best way to achieve microscopic detail is to use a resource-intensive simulation over the whole domain. We develop the two-regime method (TRM) in which a molecular-based algorithm is used where desired and a compartment-based approach is used elsewhere. We present easy-to-implement coupling conditions which ensure that the TRM results have the same accuracy as a detailed molecular-based model in the whole simulation domain. Therefore, the TRM combines strengths of previously developed stochastic reaction-diffusion software to efficiently explore the behaviour of biological models. Illustrative examples and the mathematical justification of the TRM are also presented.

  9. Rejection-free stochastic simulation of BNGL-encoded models

    Energy Technology Data Exchange (ETDEWEB)

    Hlavacek, William S [Los Alamos National Laboratory; Monine, Michael I [Los Alamos National Laboratory; Colvin, Joshua [TRANSLATIONAL GENOM; Posner, Richard G [NORTHERN ARIZONA UNIV.; Von Hoff, Daniel D [TRANSLATIONAL GENOMICS RESEARCH INSTIT.

    2009-01-01

    Formal rules encoded using the BioNetGen language (BNGL) can be used to represent the system-level dynamics of molecular interactions. Rules allow one to compactly and implicitly specify the reaction network implied by a set of molecules and their interactions. Typically, the reaction network implied by a set of rules is large, which makes generation of the underlying rule-defined network expensive. Moreover, the cost of conventional simulation methods typically depends on network size. Together these factors have limited application of the rule-based modeling approach. To overcome this limitation, several methods have recently been developed for determining the reaction dynamics implied by rules while avoiding the expensive step of network generation. The cost of these 'network-free' simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is needed for the analysis of rule-based models of biochemical systems. Here, we present a software tool called RuleMonkey that implements a network-free stochastic simulation method for rule-based models. The method is rejection free, unlike other network-free methods that introduce null events (i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated), and the software is capable of simulating models encoded in BNGL, a general-purpose model-specification language. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant general-purpose simulator for rule-based models, as well as various problem-specific codes that implement network-free simulation methods. RuleMonkey enables the simulation of models defined by rule sets that imply large-scale reaction networks. It is faster than DYNSTOC for stiff problems, although it requires the use of more computer memory. RuleMonkey is freely available for non-commercial use as a stand

  10. Stochastic Modeling and Simulation of Marginally Trapped Neutrons

    Science.gov (United States)

    Coakley, K. J.

    2014-03-01

    For a magnetic trapping experiment, I present an efficient method for simulating experimental β-decay rates that accounts for loss of marginally trapped neutrons due to wall collisions and other possible loss mechanisms. Monte Carlo estimates of time-dependent survival probability functions for the wall loss mechanism are based on computer-intensive tracking of marginally trapped neutrons with a symplectic integration method and a physical model for the loss probability of a neutron when it collides with a trap boundary. The simulation is highly efficient because, after all relevant survival probabilities are determined, observed neutron decay events are quickly simulated by sampling from probability distribution functions associated with each survival probability function of interest. That is, computer-intensive and time-consuming numerical simulation of a large number of additional neutron trajectories is not necessary.
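
    The cheap second stage can be sketched as inverse transform sampling from a tabulated curve. The survival curve below is a made-up exponential placeholder standing in for the expensive trajectory-tracking output, not the paper's result.

    ```python
    # Once a survival curve S(t) has been tabulated, event times are drawn
    # by inverting F(t) = 1 - S(t) on the grid; no further trajectory
    # tracking is needed. The curve here is a placeholder.
    import numpy as np

    rng = np.random.default_rng(1)
    t_grid = np.linspace(0.0, 3000.0, 3001)       # seconds
    S = np.exp(-t_grid / 880.0)                   # placeholder survival curve
    F = 1.0 - S                                   # tabulated CDF (increasing)

    def sample_decay_times(n):
        u = rng.uniform(0.0, F[-1], n)            # condition on decay < 3000 s
        return np.interp(u, F, t_grid)            # invert the tabulated CDF

    print(sample_decay_times(5))
    ```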

  11. Encoding efficiency of suprathreshold stochastic resonance on stimulus-specific information

    International Nuclear Information System (INIS)

    Duan, Fabing; Chapeau-Blondeau, François; Abbott, Derek

    2016-01-01

    In this paper, we evaluate the encoding efficiency of suprathreshold stochastic resonance (SSR) based on a local information-theoretic measure of stimulus-specific information (SSI), which is the average specific information of responses associated with a particular stimulus. The theoretical and numerical analyses of SSIs reveal that noise can improve neuronal coding efficiency for a large population of neurons, leading to the production of more information-rich responses. The SSI measure, in contrast to the global measure of average mutual information, can characterize the noise benefits in finer detail for describing the enhancement of neuronal encoding efficiency of a particular stimulus, which may be of general utility in the design and implementation of an SSR coding scheme. - Highlights: • Evaluating the noise-enhanced encoding efficiency via stimulus-specific information. • New form of stochastic resonance based on the measure of encoding efficiency. • Analyzing neural encoding schemes based on suprathreshold stochastic resonance in detail.

  12. Revisions to some parameters used in stochastic-method simulations of ground motion

    Science.gov (United States)

    Boore, David; Thompson, Eric M.

    2015-01-01

    The stochastic method of ground‐motion simulation specifies the amplitude spectrum as a function of magnitude (M) and distance (R). The manner in which the amplitude spectrum varies with M and R depends on physical‐based parameters that are often constrained by recorded motions for a particular region (e.g., stress parameter, geometrical spreading, quality factor, and crustal amplifications), which we refer to as the seismological model. The remaining ingredient for the stochastic method is the ground‐motion duration. Although the duration obviously affects the character of the ground motion in the time domain, it also significantly affects the response of a single‐degree‐of‐freedom oscillator. Recently published updates to the stochastic method include a new generalized double‐corner‐frequency source model, a new finite‐fault correction, a new parameterization of duration, and a new duration model for active crustal regions. In this article, we augment these updates with a new crustal amplification model and a new duration model for stable continental regions. Random‐vibration theory (RVT) provides a computationally efficient method to compute the peak oscillator response directly from the ground‐motion amplitude spectrum and duration. Because the correction factor used to account for the nonstationarity of the ground motion depends on the ground‐motion amplitude spectrum and duration, we also present new RVT correction factors for both active and stable regions.

  13. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows a reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).

  14. Stochastic simulations of normal aging and Werner's syndrome.

    KAUST Repository

    Qi, Qi

    2014-04-26

    Human cells typically contain 23 pairs of chromosomes. Telomeres are repetitive sequences of DNA located at the ends of chromosomes. During cell replication, a number of basepairs are lost from the end of the chromosome and this shortening restricts the number of divisions that a cell can complete before it becomes senescent, or non-replicative. In this paper, we use Monte Carlo simulations to form a stochastic model of telomere shortening to investigate how telomere shortening affects normal aging. Using this model, we study various hypotheses for the way in which shortening occurs by comparing their impact on aging at the chromosome and cell levels. We consider different types of length-dependent loss and replication probabilities to describe these processes. After analyzing a simple model for a population of independent chromosomes, we simulate a population of cells in which each cell has 46 chromosomes and the shortest telomere governs the replicative potential of the cell. We generalize these simulations to Werner's syndrome, a condition in which large sections of DNA are removed during cell division and, amongst other conditions, results in rapid aging. Since the mechanisms governing the loss of additional basepairs are not known, we use our model to simulate a variety of possible forms for the rate at which additional telomeres are lost per replication and several expressions for how the probability of cell division depends on telomere length. As well as the evolution of the mean telomere length, we consider the standard deviation and the shape of the distribution. We compare our results with a variety of data from the literature, covering both experimental data and previous models. We find good agreement for the evolution of telomere length when plotted against population doubling.
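
    A heavily simplified sketch of the core loop of such a model: each division removes a fixed plus a random number of basepairs from a single telomere, and the cell stops dividing below a threshold length. Lengths, losses and the threshold below are illustrative, and the paper's model additionally tracks 46 chromosomes per cell with the shortest telomere governing division.

    ```python
    # Toy telomere-shortening Monte Carlo: count divisions until the
    # telomere falls below the senescence threshold. Parameters illustrative.
    import random

    def divisions_until_senescence(l0=10000, loss=100, l_min=2000, seed=5):
        random.seed(seed)
        length, n_div = l0, 0
        while length > l_min:
            length -= loss + random.randint(0, 50)  # fixed + variable loss
            n_div += 1
        return n_div

    print(divisions_until_senescence())   # on the order of the Hayflick limit
    ```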

  15. Stochastic Rotation Dynamics simulations of wetting multi-phase flows

    Science.gov (United States)

    Hiller, Thomas; Sanchez de La Lama, Marta; Brinkmann, Martin

    2016-06-01

    Multi-color Stochastic Rotation Dynamics (SRDmc) has been introduced by Inoue et al. [1,2] as a particle based simulation method to study the flow of emulsion droplets in non-wetting microchannels. In this work, we extend the multi-color method to also account for different wetting conditions. This is achieved by assigning the color information not only to fluid particles but also to virtual wall particles that are required to enforce proper no-slip boundary conditions. To extend the scope of the original SRDmc algorithm to e.g. immiscible two-phase flow with viscosity contrast we implement an angular momentum conserving scheme (SRD+mc). We perform extensive benchmark simulations to show that a mono-phase SRDmc fluid exhibits bulk properties identical to a standard SRD fluid and that SRDmc fluids are applicable to a wide range of immiscible two-phase flows. To quantify the adhesion of a SRD+mc fluid in contact to the walls we measure the apparent contact angle from sessile droplets in mechanical equilibrium. For a further verification of our wettability implementation we compare the dewetting of a liquid film from a wetting stripe to experimental and numerical studies of interfacial morphologies on chemically structured surfaces.

  16. Real option valuation of power transmission investments by stochastic simulation

    International Nuclear Information System (INIS)

    Pringles, Rolando; Olsina, Fernando; Garcés, Francisco

    2015-01-01

    Network expansions in power markets usually lead to investment decisions subject to substantial irreversibility and uncertainty. Hence, investors need to value the flexibility to change decisions as uncertainty unfolds progressively. Real option analysis is an advanced valuation technique that enables planners to take advantage of market opportunities while preventing or mitigating losses if future conditions evolve unfavorably. In the past, many approaches for valuing real options have been developed. However, applying these methods to value transmission projects is often inappropriate, as revenue cash flows are path-dependent and affected by a myriad of uncertain variables. In this work, a valuation technique based on stochastic simulation and recursive dynamic programming, called Least-Squares Monte Carlo, is applied to properly value the deferral option in a transmission investment. The effects of the option's maturity, the initial outlay and the capital cost upon the value of the postponement option are investigated. Finally, sensitivity analysis determines optimal decision regions in which to execute, postpone or reject the investment projects. - Highlights: • A modern investment appraisal method is applied to value power transmission projects. • The value of the option to postpone the decision to invest in transmission projects is assessed. • Simulation methods are best suited for valuing real options in transmission investments
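
    A compact Least-Squares Monte Carlo sketch of a deferral option: the project value follows a geometric Brownian motion, and each year the continuation value is regressed on a quadratic in the simulated value to decide between investing now (payoff V - I) and waiting. All figures (V0, I, drift, volatility, horizon) are illustrative, not the paper's transmission data.

    ```python
    # Longstaff-Schwartz style valuation of an option to defer investment.
    import numpy as np

    rng = np.random.default_rng(0)
    n_paths, n_years, r = 20000, 5, 0.05
    V0, I, mu, sigma, dt = 100.0, 100.0, 0.02, 0.25, 1.0

    V = np.empty((n_years + 1, n_paths))          # simulated project values
    V[0] = V0
    for t in range(n_years):
        z = rng.standard_normal(n_paths)
        V[t + 1] = V[t] * np.exp((mu - 0.5 * sigma**2) * dt
                                 + sigma * np.sqrt(dt) * z)

    cash = np.maximum(V[-1] - I, 0.0)             # value at the final decision
    for t in range(n_years - 1, 0, -1):           # backward induction
        cash *= np.exp(-r * dt)
        itm = V[t] > I                            # regress on in-the-money paths
        X = np.vander(V[t][itm], 3)               # quadratic basis [V^2, V, 1]
        beta = np.linalg.lstsq(X, cash[itm], rcond=None)[0]
        cont = X @ beta                           # estimated continuation value
        exercise = (V[t][itm] - I) > cont
        cash[itm] = np.where(exercise, V[t][itm] - I, cash[itm])

    print(np.exp(-r * dt) * cash.mean())          # deferral option value today
    ```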

  17. Stochastic simulation modeling to determine time to detect Bovine Viral Diarrhea antibodies in bulk tank milk

    DEFF Research Database (Denmark)

    Foddai, Alessandro; Enøe, Claes; Krogh, Kaspar

    2014-01-01

    A stochastic simulation model was developed to estimate the time from introduction of Bovine Viral Diarrhea Virus (BVDV) in a herd to detection of antibodies in bulk tank milk (BTM) samples using three ELISAs. We assumed that antibodies could be detected after a fixed threshold prevalence of seroconverted milking cows was reached in the herd. Different thresholds were set for each ELISA, according to previous studies. For each test, antibody detection was simulated in small (70 cows), medium (150 cows) and large (320 cows) herds. The assays included were: (1) the Danish blocking ELISA, (2) ... The most efficient ELISA could detect antibodies in the BTM of a large herd 280 days (95% prediction interval: 218; 568) after a transiently infected (TI) milking cow had been introduced into the herd. The estimated time to detection after introduction of one PI calf was 111 days (44; 605 ...

  18. Concordance measures and second order stochastic dominance-portfolio efficiency analysis

    Czech Academy of Sciences Publication Activity Database

    Kopa, Miloš; Tichý, T.

    2012-01-01

    Roč. 15, č. 4 (2012), s. 110-120 ISSN 1212-3609 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords : dependency * concordance * portfolio selection * second order stochastic dominance Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.633, year: 2012 http://library.utia.cas.cz/separaty/2013/E/kopa-concordance measures and second order stochastic dominance- portfolio efficiency analysis.pdf

  19. Efficient simulation of EUV pellicles

    Science.gov (United States)

    Evanschitzky, P.; Erdmann, A.

    2017-10-01

    The paper presents a new simulation model for the efficient simulation of EUV pellicles. Different pellicle stacks, pellicle deformations and particles on the pellicle can be considered. The model is based on properly designed pupil filters representing all pellicle and particle properties to be investigated. The filters are combined with an adapted image simulation. Due to the double pass of the EUV light through the pellicle, two different models for the pupil filter computation have been developed: a model for the forward light propagation from the source to the mask and a model for the backward light propagation from the mask to the entrance pupil of the system. Furthermore, for the accurate representation of particles on the pellicle, a model has been developed which is able to combine the different size dimensions of particles, of the entrance pupil and of the illumination source. Finally, some specific assumptions on the light propagation make the pellicle model independent of the illumination source and speed up the simulations significantly without introducing appreciable error. Typically, the consideration of a pellicle increases the overall image simulation time only by a few seconds on a standard personal computer. Furthermore, a simulation study on the printing impact of pellicles on lithographic performance data of a high NA anamorphic EUV system is presented. Typical illumination conditions, a typical mask stack and different mask line features are considered in the study. The general impact of a pellicle as well as the impact of pellicle transmission variations, of pellicle deformations and of particles on the pellicle on typical lithographic performance criteria is investigated.

  20. Stochastic simulation and Monte-Carlo methods; Simulation stochastique et methodes de Monte-Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Graham, C. [Centre National de la Recherche Scientifique (CNRS), 91 - Gif-sur-Yvette (France); Ecole Polytechnique, 91 - Palaiseau (France); Talay, D. [Institut National de Recherche en Informatique et en Automatique (INRIA), 78 - Le Chesnay (France); Ecole Polytechnique, 91 - Palaiseau (France)

    2011-07-01

    This book presents numerical probabilistic simulation methods together with their convergence rates. It combines mathematical precision and numerical development, each proposed method belonging to a precise theoretical context developed in a rigorous and self-contained manner. After recalling the law of large numbers and the basics of probabilistic simulation, the authors introduce martingales and their main properties. Then, they develop a chapter on non-asymptotic estimates of Monte Carlo method errors. This chapter recalls the central limit theorem and quantifies its convergence rate, and introduces the Log-Sobolev and concentration inequalities, which have been studied intensively in recent years. The chapter ends with some variance reduction techniques. In order to demonstrate in a rigorous way the simulation results for stochastic processes, the authors introduce the basic notions of probability and of stochastic calculus, in particular the essentials of Ito calculus, adapted to each numerical method proposed. They successively study the construction and important properties of the Poisson process, of jump and deterministic Markov processes (linked to transport equations), and of the solutions of stochastic differential equations. Numerical methods are then developed and the convergence rates of the algorithms are rigorously demonstrated. In passing, the authors describe the basics of the probabilistic interpretation of parabolic partial differential equations. Non-trivial applications to real applied problems are also developed. (J.S.)
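
    As a small example of the variance reduction techniques the chapter ends with, antithetic variates pair each standard normal draw z with -z; for a monotone integrand the paired estimator keeps the mean and cuts the variance. The integrand f(z) = exp(z) below is a generic textbook choice, with exact mean exp(0.5).

    ```python
    # Antithetic variates for E[f(Z)], Z ~ N(0,1), f(z) = exp(z):
    # averaging f(z) and f(-z) preserves the mean and reduces the variance.
    import numpy as np

    rng = np.random.default_rng(2)
    z = rng.standard_normal(100000)
    plain = np.exp(z)
    anti = 0.5 * (np.exp(z) + np.exp(-z))     # antithetic pairing
    print(plain.mean(), plain.var())          # mean ~ exp(0.5) = 1.6487
    print(anti.mean(), anti.var())            # same mean, smaller variance
    ```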

  1. Simulation of nuclear plant operation into a stochastic energy production model

    International Nuclear Information System (INIS)

    Pacheco, R.L.

    1983-04-01

    A simulation model of nuclear plant operation is developed to fit into a stochastic energy production model. In order to improve the stochastic model used, and also to reduce the computational time burden introduced by aggregating the nuclear plant operation model, a study of tail truncation of the unsupplied-demand distribution function has been performed. (E.G.) [pt

  2. Transport in two-dimensional scattering stochastic media: Simulations and models

    International Nuclear Information System (INIS)

    Haran, O.; Shvarts, D.; Thieberger, R.

    1999-01-01

    Classical monoenergetic transport of neutral particles in a binary, scattering, two-dimensional stochastic medium is discussed. The work focuses on the effective representation of the stochastic medium, as obtained by averaging over an ensemble of random realizations of the medium. Results of transport simulations in two-dimensional stochastic media are presented and compared against results from several models. Problems for which this work is relevant range from transport through cracked or porous concrete shields and through the boiling coolant of a nuclear reactor, to transport through stochastic stellar atmospheres

  3. Economic Reforms and Cost Efficiency of Coffee Farmers in Central Kenya: A Stochastic-Translog Approach

    NARCIS (Netherlands)

    Karanja, A.M.; Kuyvenhoven, A.; Moll, H.A.J.

    2007-01-01

    Work reported in this paper analyses the cost efficiency levels of small-holder coffee farmers in four districts in Central Province, Kenya. The level of efficiency is analysed using a stochastic cost frontier model based on household cross-sectional data collected in 1999 and 2000. The 200 surveyed

  4. Stochastic differential equations and numerical simulation for pedestrians

    Energy Technology Data Exchange (ETDEWEB)

    Garrison, J.C.

    1993-07-27

    The mathematical foundation of the Ito interpretation of stochastic ordinary and partial differential equations is briefly explained. This provides the basis for a review of simple difference approximations to stochastic differential equations. An example arising in the theory of optical switching is discussed.
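
    The simplest of the difference approximations reviewed is the Euler-Maruyama scheme; the sketch below applies it to the Ito SDE dX = -a X dt + b dW with illustrative coefficients (the optical-switching example mentioned in the abstract is not reproduced here).

    ```python
    # Euler-Maruyama for dX = -a*X dt + b dW: at each step add the drift
    # times h and the diffusion times a Brownian increment ~ N(0, h).
    import numpy as np

    def euler_maruyama(a=1.0, b=0.3, x0=1.0, h=1e-3, n=5000, seed=4):
        rng = np.random.default_rng(seed)
        x = np.empty(n + 1)
        x[0] = x0
        for k in range(n):
            dw = rng.standard_normal() * np.sqrt(h)   # Brownian increment
            x[k + 1] = x[k] - a * x[k] * h + b * dw
        return x

    print(euler_maruyama()[-1])
    ```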

  5. Stochastic Order Redshift Technique (SORT): a simple, efficient and robust method to improve cosmological redshift measurements

    Science.gov (United States)

    Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.

    2018-01-01

    We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. Particularly, SORT redshifts (z_sort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurement of the two-point correlation function on scales ≳ 4 h^-1 Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed at exploiting the full potential of large extragalactic photometric surveys.
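
    In its simplest form, the SORT step for one sub-volume amounts to a rank-preserving reassignment: draw as many redshifts from the reference dN/dz as there are uncertain measurements, sort both sets, and match them rank by rank. The toy sketch below uses a made-up clustered dN/dz, not survey data, and omits the selection-function correction and the overlapping sub-volumes.

    ```python
    # Rank-preserving reassignment: the i-th smallest uncertain redshift
    # receives the i-th smallest draw from the reference dN/dz.
    import numpy as np

    rng = np.random.default_rng(6)

    z_uncertain = rng.normal(0.5, 0.1, 1000)        # noisy survey redshifts
    z_reference = rng.choice([0.35, 0.5, 0.65],     # toy clustered dN/dz
                             p=[0.3, 0.5, 0.2], size=1000)
    z_reference += rng.normal(0.0, 0.01, 1000)

    order = np.argsort(z_uncertain)
    z_sort = np.empty_like(z_uncertain)
    z_sort[order] = np.sort(z_reference)            # preserves the rank order
    print(z_sort[:5])
    ```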

  6. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Saifullah

    2002-07-01

    This dissertation deals with (1) experimental measurements on petrophysical, reservoir engineering and morphological properties of Ekofisk chalk, (2) numerical simulation of core flood experiments to analyze and improve relative permeability data, (3) stochastic reconstruction of chalk samples from limited morphological information, (4) extraction of pore space parameters from the reconstructed samples, development of a network model using pore space information, and computation of petrophysical and reservoir engineering properties from the network model, and (5) development of 2D and 3D idealized fractured reservoir models and verification of the applicability of several widely used conventional upscaling techniques in fractured reservoir simulation. Experiments have been conducted on eight Ekofisk chalk samples, and porosity, absolute permeability, formation factor, and oil-water relative permeability, capillary pressure and resistivity index are measured under laboratory conditions. Mercury porosimetry data and backscatter scanning electron microscope images have also been acquired for the samples. A numerical simulation technique involving history matching of the production profiles is employed to improve the relative permeability curves and to analyze hysteresis of the Ekofisk chalk samples. The technique was found to be a powerful tool to supplement the uncertainties in experimental measurements. Porosity and correlation statistics obtained from backscatter scanning electron microscope images are used to reconstruct microstructures of chalk and particulate media. The reconstruction technique involves a simulated annealing algorithm, which can be constrained by an arbitrary number of morphological parameters. This flexibility of the algorithm is exploited to successfully reconstruct particulate media and chalk samples using more than one correlation function. A technique based on conditional simulated annealing has been introduced for exact reproduction of vuggy

  7. Database of Nucleon-Nucleon Scattering Cross Sections by Stochastic Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — A database of nucleon-nucleon elastic differential and total cross sections will be generated by stochastic simulation of the quantum Liouville equation in the...

  8. Cyto-Sim: a formal language model and stochastic simulator of membrane-enclosed biochemical processes.

    Science.gov (United States)

    Sedwards, Sean; Mazza, Tommaso

    2007-10-15

    Compartments and membranes are the basis of cell topology and more than 30% of the human genome codes for membrane proteins. While it is possible to represent compartments and membrane proteins in a nominal way with many mathematical formalisms used in systems biology, few, if any, explicitly model the topology of the membranes themselves. Discrete stochastic simulation potentially offers the most accurate representation of cell dynamics. Since the details of every molecular interaction in a pathway are often not known, the relationship between chemical species is not necessarily best described at the lowest level, i.e. by mass action. Simulation is a form of computer-aided analysis, relying on human interpretation to derive meaning. To improve efficiency and gain meaning in an automatic way, it is necessary to have a formalism based on a model which has decidable properties. We present Cyto-Sim, a stochastic simulator of membrane-enclosed hierarchies of biochemical processes, where the membranes comprise an inner, outer and integral layer. The underlying model is based on formal language theory and has been shown to have decidable properties (Cavaliere and Sedwards, 2006), allowing formal analysis in addition to simulation. The simulator provides variable levels of abstraction via arbitrary chemical kinetics which link to ordinary differential equations. In addition to its compact native syntax, Cyto-Sim currently supports models described as Petri nets, can import all versions of SBML and can export SBML and MATLAB m-files. Cyto-Sim is available free, either as an applet or a stand-alone Java program via the web page (http://www.cosbi.eu/Rpty_Soft_CytoSim.php). Other versions can be made available upon request.

  9. Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations

    Science.gov (United States)

    Ehlert, Kurt; Loewe, Laurence

    2014-11-01

    To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected "hubs" such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present "Lazy Updating," an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use cases, with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
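
    The core bookkeeping can be sketched in a few lines: reactions depending on a hub keep using a snapshot of the hub's count, and a full propensity update is triggered only once the count drifts past a relative threshold. The class, the 5% threshold and the ATP figures below are assumptions for illustration, not the paper's implementation.

    ```python
    # Conceptual Lazy Updating: postpone propensity recomputation for
    # hub-dependent reactions until the hub count drifts past a threshold.
    class LazyHub:
        def __init__(self, count, threshold=0.05):
            self.count = count
            self.snapshot = count      # hub value last used in propensities
            self.threshold = threshold

        def change(self, delta):
            self.count += delta
            drift = abs(self.count - self.snapshot) / max(self.snapshot, 1)
            return drift > self.threshold   # True -> recompute propensities

    atp = LazyHub(100000)                   # hypothetical ATP pool
    for _ in range(3000):
        if atp.change(-2):                  # each event consumes 2 ATP
            atp.snapshot = atp.count
            print("recompute dependent propensities at", atp.count)
    ```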

  10. Project Evaluation and Cash Flow Forecasting by Stochastic Simulation

    Directory of Open Access Journals (Sweden)

    Odd A. Asbjørnsen

    1983-10-01

    The net present value of a discounted cash flow is used to evaluate projects. It is shown that the Laplace transform of the cash flow time function is particularly useful when the cash flow profiles may be approximately described by ordinary linear differential equations in time. However, real cash flows are stochastic variables due to the stochastic nature of the disturbances during production.
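
    The connection being exploited is that the NPV of a continuous cash flow c(t) at discount rate r is exactly the Laplace transform of c evaluated at r: NPV = integral of c(t) e^{-rt} dt = C(r). A quick numerical check for an exponentially declining cash flow, with illustrative figures:

    ```python
    # For c(t) = c0*exp(-a*t), the Laplace transform gives NPV = c0/(r + a);
    # compare against direct numerical integration of the discounted flow.
    import numpy as np

    c0, a, r = 10.0, 0.1, 0.08
    t = np.linspace(0.0, 200.0, 200001)
    npv_numeric = np.trapz(c0 * np.exp(-a * t) * np.exp(-r * t), t)
    npv_closed = c0 / (r + a)
    print(npv_numeric, npv_closed)    # both ~ 55.56
    ```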

  11. An efficient Lagrangian stochastic model of vertical dispersion in the convective boundary layer

    Science.gov (United States)

    Franzese, Pasquale; Luhar, Ashok K.; Borgas, Michael S.

    We consider the one-dimensional case of vertical dispersion in the convective boundary layer (CBL) assuming that the turbulence field is stationary and horizontally homogeneous. The dispersion process is simulated by following Lagrangian trajectories of many independent tracer particles in the turbulent flow field, leading to a prediction of the mean concentration. The particle acceleration is determined using a stochastic differential equation, assuming that the joint evolution of the particle velocity and position is a Markov process. The equation consists of a deterministic term and a random term. While the formulation is standard, attention has been focused in recent years on various ways of calculating the deterministic term using the well-mixed condition incorporating the Fokker-Planck equation. Here we propose a simple parameterisation for the deterministic acceleration term by approximating it as a quadratic function of velocity. Such a function is shown to represent well the acceleration under moderate velocity skewness conditions observed in the CBL. The coefficients in the quadratic form are determined in terms of given turbulence statistics by directly integrating the Fokker-Planck equation. An advantage of this approach is that, unlike in existing Lagrangian stochastic models for the CBL, the use of the turbulence statistics up to the fourth order can be made without assuming any predefined form for the probability distribution function (PDF) of the velocity. The main strength of the model, however, lies in its simplicity and computational efficiency. The dispersion results obtained from the new model are compared with existing laboratory data as well as with those obtained from a more complex Lagrangian model in which the deterministic acceleration term is based on a bi-Gaussian velocity PDF. The comparison shows that the new model performs well.
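
    A generic sketch of the resulting particle update, with the deterministic acceleration approximated as a quadratic in velocity, a(w) = c0 + c1 w + c2 w^2; the paper determines c0, c1, c2 from turbulence statistics via the Fokker-Planck equation, whereas here they are free inputs (the values below give the Gaussian, zero-skewness limit c2 = 0, for which unit-variance velocities remain well mixed).

    ```python
    # Lagrangian particle update with a quadratic drift in velocity; the
    # diffusion amplitude b is chosen so that N(0,1) velocities are
    # stationary in the Gaussian limit (c = (0, -1, 0), b = sqrt(2)).
    import numpy as np

    def langevin_particles(c=(0.0, -1.0, 0.0), b=np.sqrt(2.0), dt=1e-3,
                           n_steps=5000, n_particles=10000, seed=8):
        rng = np.random.default_rng(seed)
        w = rng.standard_normal(n_particles)     # vertical velocities
        z = np.zeros(n_particles)                # particle heights
        c0, c1, c2 = c
        for _ in range(n_steps):
            a = c0 + c1 * w + c2 * w**2          # quadratic deterministic term
            w = w + a * dt + b * np.sqrt(dt) * rng.standard_normal(n_particles)
            z = z + w * dt                       # advance positions
        return z, w

    z, w = langevin_particles()
    print(w.var(), z.std())                      # velocity variance stays ~ 1
    ```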

  12. A Simulation-and-Regression Approach for Stochastic Dynamic Programs with Endogenous State Variables

    DEFF Research Database (Denmark)

    Denault, Michel; Simonato, Jean-Guy; Stentoft, Lars

    2013-01-01

    We investigate the optimum control of a stochastic system, in the presence of both exogenous (control-independent) stochastic state variables and endogenous (control-dependent) state variables. Our solution approach relies on simulations and regressions with respect to the state variables, but also grafts the endogenous state variable into the simulation paths. That is, unlike most other simulation approaches found in the literature, no discretization of the endogenous variable is required. The approach is meant to handle several stochastic variables, offers a high level of flexibility in their modeling, and should be at its best in non-time-homogeneous cases, when the optimal policy structure changes with time. We provide numerical results for a dam-based hydropower application, where the exogenous variable is the stochastic spot price of power, and the endogenous variable is the water level.
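
    The simulation-and-regression machinery can be illustrated with a plain Longstaff-Schwartz optimal-stopping example. This is a simplified relative of the approach, with only an exogenous state; the paper's grafting of an endogenous state (such as the water level) into the paths is not reproduced here, and all parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(42)
    npaths, nsteps, dt = 20000, 50, 1.0 / 50
    S0, K, r, sigma = 100.0, 100.0, 0.06, 0.2

    # simulate exogenous GBM price paths
    Z = rng.normal(size=(npaths, nsteps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * Z, axis=1))

    payoff = lambda s: np.maximum(K - s, 0.0)   # American put payoff
    V = payoff(S[:, -1])                        # value at maturity
    for t in range(nsteps - 2, -1, -1):
        V *= np.exp(-r * dt)                    # discount one step back
        itm = payoff(S[:, t]) > 0               # regress on in-the-money paths only
        X = S[itm, t]
        coef = np.polyfit(X, V[itm], 2)         # continuation value ~ quadratic in state
        cont = np.polyval(coef, X)
        exercise = payoff(X) > cont             # exercise where immediate payoff wins
        idx = np.where(itm)[0][exercise]
        V[idx] = payoff(S[idx, t])

    price = np.exp(-r * dt) * V.mean()          # discount the first step
    print(round(price, 3))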

  13. Productive efficiency of tea industry: A stochastic frontier approach ...

    African Journals Online (AJOL)

    In an economy where resources are scarce and opportunities for new technology are lacking, studies of this kind can show the possibility of raising productivity by improving the industry's efficiency. This study attempts to measure the technical efficiency of the tea-producing industry in Bangladesh using panel data ...

  14. An application of almost marginal conditional stochastic dominance (AMCSD) on forming efficient portfolios

    Science.gov (United States)

    Slamet, Isnandar; Mardiana Putri Carissa, Siska; Pratiwi, Hasih

    2017-10-01

    Investors always seek an efficient portfolio, that is, a portfolio with maximum return for a given risk or minimum risk for a given return. The almost marginal conditional stochastic dominance (AMCSD) criterion can be used to form such a portfolio. The aim of this research is to apply the AMCSD criterion to form an efficient portfolio of bank shares listed in the LQ-45. The criterion is used when there are areas that do not meet the marginal conditional stochastic dominance (MCSD) criteria; in other words, it is based on the ratio of the area that violates the MCSD criteria to the total area (violating and non-violating). Based on data for bank stocks listed in the LQ-45, there are 38 efficient portfolios among the 420 portfolios of 4 stocks, and 315 efficient portfolios among the 1710 portfolios of 3 stocks.
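
    The area-ratio idea can be sketched for plain second-order stochastic dominance (SSD) on empirical returns. The marginal conditional variant used in the paper conditions on the portfolio context and is omitted here, and the return samples are synthetic placeholders.

    import numpy as np

    rng = np.random.default_rng(7)
    ret_a = rng.normal(0.010, 0.04, 1000)   # candidate asset A
    ret_b = rng.normal(0.008, 0.05, 1000)   # candidate asset B

    grid = np.linspace(min(ret_a.min(), ret_b.min()),
                       max(ret_a.max(), ret_b.max()), 2000)
    Fa = np.searchsorted(np.sort(ret_a), grid, side="right") / ret_a.size
    Fb = np.searchsorted(np.sort(ret_b), grid, side="right") / ret_b.size

    dx = grid[1] - grid[0]
    diff = Fb - Fa                      # >= 0 everywhere would favour A
    D = np.cumsum(diff) * dx            # SSD of A over B requires D(x) >= 0 for all x

    violation = np.sum(np.maximum(-diff, 0.0)) * dx   # area where A is worse
    total = np.sum(np.abs(diff)) * dx
    eps = violation / total             # "almost" dominance: small violation ratio
    print(f"SSD holds exactly: {bool((D >= -1e-12).all())}, violation ratio eps = {eps:.4f}")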

  15. A Framework to Analyze the Performance of Load Balancing Schemes for Ensembles of Stochastic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Tae-Hyuk [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computer Science and Mathematics Division; Sandu, Adrian [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science; Watson, Layne T. [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science and Mathematics, Aerospace and Ocean Engineering; Shaffer, Clifford A. [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science; Cao, Yang [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science; Baumann, William T. [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Electrical and Computer Engineering

    2015-08-01

    Ensembles of simulations are employed to estimate the statistics of possible future states of a system, and are widely used in important applications such as climate change and biological modeling. Ensembles of runs can naturally be executed in parallel. However, when the CPU times of individual simulations vary considerably, a simple strategy of assigning an equal number of tasks per processor can lead to serious work imbalances and low parallel efficiency. This paper presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms for ensembles of simulations where many tasks are mapped onto each processor and the individual compute times vary considerably among tasks. Four load balancing strategies are discussed: most-dividing, all-redistribution, random-polling, and neighbor-redistribution. Simulation results with a stochastic budding yeast cell cycle model are consistent with the theoretical analysis. Because global rebalancing raises scalability concerns, it is especially significant that a global decrease in load imbalance is provable for the local rebalancing algorithms. The overall simulation time is reduced by up to 25%, and the total processor idle time by up to 85%.
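
    A toy experiment in the same spirit compares a static equal split against a simple dynamic work-pulling scheme on heavy-tailed task times. This is not one of the paper's four strategies, just a minimal illustration of why dynamic balancing pays off when run times vary widely.

    import numpy as np

    rng = np.random.default_rng(3)
    ntasks, nprocs = 1024, 16
    times = rng.lognormal(mean=0.0, sigma=1.0, size=ntasks)  # heavy-tailed run times

    # static: tasks split into equal-size blocks up front
    blocks = np.array_split(times, nprocs)
    static_makespan = max(b.sum() for b in blocks)

    # dynamic: each idle processor pulls the next task from a shared queue
    finish = np.zeros(nprocs)
    for t in times:
        p = np.argmin(finish)       # the processor that frees up first
        finish[p] += t
    dynamic_makespan = finish.max()

    ideal = times.sum() / nprocs
    print(f"static  makespan: {static_makespan:.1f}  (ideal {ideal:.1f})")
    print(f"dynamic makespan: {dynamic_makespan:.1f}")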

  16. Unexpected sites of efficient stochastic acceleration in the inner heliosheath

    Directory of Open Access Journals (Sweden)

    S. V. Chalov

    2007-03-01

    Until recently, it was generally believed that the solar wind termination shock (TS) is the favourite site for accelerating ions from keV to MeV energies by means of Fermi-1 processes. When Voyager 1 crossed the TS at the end of 2004, however, its measurements showed that fluxes of anomalous cosmic rays kept increasing with time beyond the shock passage. This obviously called for an acceleration site further downstream of the shock in the heliosheath which had not been identified before. In this paper we thus investigate the process of energy diffusion due to wave-particle interactions (Fermi-2) operating on pick-up ions which are convected downstream of the TS with the subsonic solar wind. We investigate the continuous effect of stochastic acceleration processes suffered by pick-up ions interacting with heliosheath turbulence while they are slowly convected with the subsonic solar wind towards the heliotail. As we show, the inner heliosheath region, with an extent of about 100 AU around the solar wind stagnation point, is specifically favourable for the energy processing of pick-up ions by Fermi-2 processes up to MeV energies. In addition, we claim that this region is the origin of the multiply-charged anomalous cosmic ray particles that have been registered in recent times.

  17. An efficient scenario-based stochastic programming framework for multi-objective optimal micro-grid operation

    International Nuclear Information System (INIS)

    Niknam, Taher; Azizipanah-Abarghooee, Rasoul; Narimani, Mohammad Rasoul

    2012-01-01

    Highlights: ► Proposes a stochastic model for optimal energy management. ► Considers uncertainties in the forecasted load demand. ► Considers uncertainties in the forecasted output power of wind and photovoltaic units. ► Considers uncertainties in the forecasted market price. ► Presents an improved multi-objective teaching–learning-based optimization. -- Abstract: This paper proposes a stochastic model for optimal energy management with the goal of cost and emission minimization. In this model, the uncertainties in the forecasted load demand, the available output power of wind and photovoltaic units, and the market price are modeled by scenario-based stochastic programming. In the presented method, scenarios are generated by a roulette wheel mechanism based on the probability distribution functions of the input random variables. Through this method, the inherent stochastic nature of the problem is captured and the problem is decomposed into an equivalent deterministic one. An improved multi-objective teaching–learning-based optimization is implemented to yield the best expected Pareto optimal front. In the proposed stochastic optimization method, a novel self-adaptive probabilistic modification strategy is offered to improve the performance of the presented algorithm. A set of non-dominated solutions is stored in a repository during the simulation process, and the size of the repository is controlled by a fuzzy-based clustering technique. The best expected compromise solution stored in the repository is selected via a niching mechanism in such a way that solutions are encouraged to seek the lesser explored regions. The proposed framework is applied to a typical grid-connected microgrid in order to verify its efficiency and feasibility.
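
    The roulette-wheel step can be sketched as follows, with an arbitrary seven-interval discretization standing in for the forecast-error distributions of load, wind, photovoltaic output and price:

    import numpy as np

    rng = np.random.default_rng(11)

    # per-unit deviation levels and their probabilities (placeholder PDF,
    # e.g. a discretized normal forecast-error distribution)
    levels = np.array([-0.15, -0.10, -0.05, 0.0, 0.05, 0.10, 0.15])
    probs  = np.array([0.02, 0.08, 0.20, 0.40, 0.20, 0.08, 0.02])
    cum = np.cumsum(probs)                  # the "roulette wheel"

    def spin(n):
        """Draw n interval indices by roulette-wheel selection."""
        u = rng.random(n)
        return np.searchsorted(cum, u)

    nscen = 1000
    inputs = ["load", "wind", "pv", "price"]
    scenarios = {name: levels[spin(nscen)] for name in inputs}
    # each scenario is equally probable by construction (1/nscen), and the
    # stochastic problem decomposes into nscen deterministic ones
    print({k: v[:5] for k, v in scenarios.items()})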

  18. Stochastic modelling to evaluate the economic efficiency of treatment of chronic subclinical mastitis

    NARCIS (Netherlands)

    Steeneveld, W.; Hogeveen, H.; Borne, van den B.H.P.; Swinkels, J.M.

    2006-01-01

    Treatment of subclinical mastitis is traditionally not common practice. However, some veterinarians regard treatment of some types of subclinical mastitis as effective. The goal of this research was to develop a stochastic Monte Carlo simulation model to support decisions around treatment of chronic subclinical mastitis.

  19. Simulating efficiently the evolution of DNA sequences.

    Science.gov (United States)

    Schöniger, M; von Haeseler, A

    1995-02-01

    Two menu-driven FORTRAN programs are described that simulate the evolution of DNA sequences in accordance with a user-specified model. This general stochastic model allows for an arbitrary stationary nucleotide composition and any transition-transversion bias during the process of base substitution. In addition, the user may define any hypothetical model tree according to which a family of sequences evolves. The programs suggest the computationally most inexpensive approach to generate nucleotide substitutions. Either reproducible or non-repeatable simulations, depending on the method of initializing the pseudo-random number generator, can be performed. The corresponding options are offered by the interface menu.
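
    A minimal sketch of nucleotide substitution with a transition-transversion bias, using the Kimura two-parameter model (the programs described here are FORTRAN and additionally support arbitrary stationary base compositions and user-defined model trees; the branch length and rates below are arbitrary):

    import numpy as np

    BASES = np.array(list("ACGT"))
    ts_map = np.array([2, 3, 0, 1])   # transitions: A<->G, C<->T
    tv_a   = np.array([1, 0, 1, 0])   # first transversion target per base
    tv_b   = np.array([3, 2, 3, 2])   # second transversion target per base

    def evolve(seq_idx, t, alpha=1.0, beta=0.5, rng=None):
        """Evolve an index-coded sequence along a branch of length t (K80 model)."""
        if rng is None:
            rng = np.random.default_rng(5)
        e4b = np.exp(-4 * beta * t)
        e2ab = np.exp(-2 * (alpha + beta) * t)
        p_same = 0.25 + 0.25 * e4b + 0.5 * e2ab
        p_ts   = 0.25 + 0.25 * e4b - 0.5 * e2ab        # transition probability
        p_tv   = 0.25 - 0.25 * e4b                     # each transversion probability
        u = rng.random(seq_idx.size)
        out = seq_idx.copy()
        ts = (u >= p_same) & (u < p_same + p_ts)
        t1 = (u >= p_same + p_ts) & (u < p_same + p_ts + p_tv)
        t2 = u >= p_same + p_ts + p_tv
        out[ts] = ts_map[seq_idx[ts]]
        out[t1] = tv_a[seq_idx[t1]]
        out[t2] = tv_b[seq_idx[t2]]
        return out

    root = np.random.default_rng(1).integers(0, 4, 60)   # random ancestral sequence
    child = evolve(root, t=0.3)
    print("".join(BASES[root]))
    print("".join(BASES[child]))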

  20. Technical Efficiency in the Chilean Agribusiness Sector - a Stochastic Meta-Frontier Approach

    OpenAIRE

    Larkner, Sebastian; Brenes Muñoz, Thelma; Aedo, Edinson Rivera; Brümmer, Bernhard

    2013-01-01

    The Chilean economy is strongly export-oriented, which is also true for the Chilean agribusiness industry. This paper investigates the technical efficiency of the Chilean food processing industry between 2001 and 2007. We use a dataset from the 2,471 of firms in food processing industry. The observations are from the ‘Annual National Industrial Survey’. A stochastic meta-frontier approach is used in order to analyse the drivers of technical efficiency. We include variables capturing the effec...

  1. Measuring efficiency of governmental hospitals in Palestine using stochastic frontier analysis

    OpenAIRE

    Hamidi, Samer

    2016-01-01

    Background The Palestinian government has been under increasing pressure to improve provision of health services while seeking to effectively employ its scare resources. Governmental hospitals remain the leading costly units as they consume about 60?% of governmental health budget. A clearer understanding of the technical efficiency of hospitals is crucial to shape future health policy reforms. In this paper, we used stochastic frontier analysis to measure technical efficiency of governmental...

  2. A primer on stochastic epidemic models: Formulation, numerical simulation, and analysis

    Directory of Open Access Journals (Sweden)

    Linda J.S. Allen

    2017-05-01

    Some mathematical methods for formulation and numerical simulation of stochastic epidemic models are presented. Specifically, models are formulated for continuous-time Markov chains and stochastic differential equations. Some well-known examples are used for illustration such as an SIR epidemic model and a host-vector malaria model. Analytical methods for approximating the probability of a disease outbreak are also discussed. Keywords: Branching process, Continuous-time Markov chain, Minor outbreak, Stochastic differential equation, 2000 MSC: 60H10, 60J28, 92D30
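
    For the continuous-time Markov chain formulation, a minimal Gillespie direct-method simulation of the SIR model looks as follows (parameter values are illustrative placeholders):

    import random

    def sir_ssa(S, I, R, beta, gamma, t_end, seed=0):
        rng = random.Random(seed)
        N = S + I + R
        t, path = 0.0, [(0.0, S, I, R)]
        while I > 0 and t < t_end:
            a_inf = beta * S * I / N      # infection propensity
            a_rec = gamma * I             # recovery propensity
            a0 = a_inf + a_rec
            t += rng.expovariate(a0)
            if rng.random() * a0 < a_inf:
                S, I = S - 1, I + 1
            else:
                I, R = I - 1, R + 1
            path.append((t, S, I, R))
        return path

    path = sir_ssa(S=990, I=10, R=0, beta=0.3, gamma=0.1, t_end=200.0)
    print("final state:", path[-1])   # minor outbreak or major epidemic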

  3. Productive efficiency of tea industry: A stochastic frontier approach

    African Journals Online (AJOL)

    2010-06-21

    ... efficiency improvement, thereby reducing the cost of production. ... Fixed area under tea is used in this study. Labor (L): the number of employees directly or indirectly engaged in production is used as the labor input. ... Direct effects of area, labor, and square (second-order) terms.

  4. Stochastic simulation of acoustic communication in turbulent shallow water

    DEFF Research Database (Denmark)

    Bjerrum-Niese, Christian; Lutzen, R.

    2000-01-01

    This paper presents a stochastic model of a turbulent shallow-water acoustic channel. The model utilizes a Monte Carlo realization method to predict signal transmission conditions. The main outputs from the model are statistical descriptions of the signal-to-multipath ratio (SMR) and signal fading.

  5. Some simulation aspects, from molecular systems to stochastic geometries of pebble bed reactors

    International Nuclear Information System (INIS)

    Mazzolo, A.

    2009-06-01

    After a brief presentation of his teaching and supervising activities, the author gives an overview of his research activities: investigation of atoms under high intensity magnetic field (investigation of the electronic structure under these fields), studies of theoretical and numerical electrochemistry (simulation coupling molecular dynamics and quantum calculations, comprehensive simulations of molecular dynamics), and studies relating stochastic geometry and neutron science

  6. Spatially explicit and stochastic simulation of forest landscape fire disturbance and succession

    Science.gov (United States)

    Hong S. He; David J. Mladenoff

    1999-01-01

    Understanding disturbance and recovery of forest landscapes is a challenge because of complex interactions over a range of temporal and spatial scales. Landscape simulation models offer an approach to studying such systems at broad scales. Fire can be simulated spatially using mechanistic or stochastic approaches. We describe the fire module in a spatially explicit,...

  7. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    Science.gov (United States)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
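
    The core construction can be sketched in a few lines: expand the output of a model in probabilists' Hermite polynomials of a standard normal germ and fit the coefficients by least-squares regression on collocation samples. The one-parameter "model" g below is an arbitrary stand-in for a hydrological simulator:

    import numpy as np
    from numpy.polynomial.hermite_e import hermevander

    rng = np.random.default_rng(2)

    def g(xi):                          # placeholder model: nonlinear in xi
        return np.exp(0.3 * xi) + 0.1 * xi**2

    order = 5
    xi = rng.normal(size=200)           # collocation samples of the germ
    A = hermevander(xi, order)          # design matrix of He_0 .. He_5
    coef, *_ = np.linalg.lstsq(A, g(xi), rcond=None)

    # the surrogate is now a cheap polynomial proxy for the model:
    xi_test = rng.normal(size=100000)
    surrogate = hermevander(xi_test, order) @ coef
    print("mean  :", surrogate.mean(), "vs", g(xi_test).mean())
    print("stddev:", surrogate.std(),  "vs", g(xi_test).std())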

  8. Efficient simulator training : beyond fidelity

    NARCIS (Netherlands)

    Emmerik, M.L. van; Rooij, J.C.G.M. van

    1999-01-01

    In the present paper it is argued that research on training simulators pays too little attention to the didactic aspects of training. Instead, much emphasis is laid on fidelity, which is a major cost driver in simulation. It is forgotten, however, that it is actually the training programme that largely determines the effectiveness of training.

  9. Stochastic linear multistep methods for the simulation of chemical kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Barrio, Manuel, E-mail: mbarrio@infor.uva.es [Departamento de Informática, University of Valladolid, Valladolid (Spain); Burrage, Kevin [Department of Computer Science, University of Oxford, Oxford (United Kingdom); School of Mathematical Sciences, Queensland University of Technology, Brisbane (Australia); Burrage, Pamela [School of Mathematical Sciences, Queensland University of Technology, Brisbane (Australia)

    2015-02-14

    In this paper, we introduce the Stochastic Adams-Bashforth (SAB) and Stochastic Adams-Moulton (SAM) methods as an extension of the τ-leaping framework to past information. Using the Θ-trapezoidal τ-leap method of weak order two as a starting procedure, we show that the k-step SAB method with k ≥ 3 is order three in the mean and correlation, while a predictor-corrector implementation of the SAM method is weak order three in the mean but only order one in the correlation. These convergence results have been derived analytically for linear problems and successfully tested numerically for both linear and non-linear systems. A series of additional examples have been implemented in order to demonstrate the efficacy of this approach.

  10. Symplectic structure of quantum phase and stochastic simulation of qubits

    International Nuclear Information System (INIS)

    Rybakov, Yu.P.; Kamalov, T.F.

    2009-01-01

    Starting from the projective interpretation of the Hilbert space, a special stochastic representation of the wave function in Quantum Mechanics (QM), based on the soliton realization of extended particles, is considered with the aim of modeling quantum states on a classical computer. The entangled-soliton construction was earlier introduced in the nonlinear spinor field model for calculating the Einstein-Podolsky-Rosen (EPR) spin correlation for spin-1/2 particles in the singlet state; another example is now studied. The latter concerns entangled envelope solitons in a Kerr dielectric with cubic nonlinearity, where two-soliton configurations are used for modeling the entangled states of photons. Finally, the concept of stochastic qubits is used for quantum computing modeling.

  11. Deterministic and stochastic population-level simulations of an artificial lac operon genetic network

    Directory of Open Access Journals (Sweden)

    Zygourakis Kyriacos

    2011-07-01

    Full Text Available Abstract Background The lac operon genetic switch is considered as a paradigm of genetic regulation. This system has a positive feedback loop due to the LacY permease boosting its own production by the facilitated transport of inducer into the cell and the subsequent de-repression of the lac operon genes. Previously, we have investigated the effect of stochasticity in an artificial lac operon network at the single cell level by comparing corresponding deterministic and stochastic kinetic models. Results This work focuses on the dynamics of cell populations by incorporating the above kinetic scheme into two Monte Carlo (MC simulation frameworks. The first MC framework assumes stochastic reaction occurrence, accounts for stochastic DNA duplication, division and partitioning and tracks all daughter cells to obtain the statistics of the entire cell population. In order to better understand how stochastic effects shape cell population distributions, we develop a second framework that assumes deterministic reaction dynamics. By comparing the predictions of the two frameworks, we conclude that stochasticity can create or destroy bimodality, and may enhance phenotypic heterogeneity. Conclusions Our results show how various sources of stochasticity act in synergy with the positive feedback architecture, thereby shaping the behavior at the cell population level. Further, the insights obtained from the present study allow us to construct simpler and less computationally intensive models that can closely approximate the dynamics of heterogeneous cell populations.

  12. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    KAUST Repository

    Navarro, María

    2016-12-26

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
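
    A sampling estimator for a first-order Sobol index can be sketched for an ordinary deterministic function of two uncertain parameters; the paper's method generalizes this decomposition so that the stochastic reaction channels themselves also count as variability sources. The response function below is a placeholder, not a reaction network.

    import numpy as np

    rng = np.random.default_rng(12)

    def model(k1, k2):              # placeholder response surface
        return np.exp(k1) + 2.0 * k2 + 0.5 * k1 * k2

    n = 200000
    A1, A2 = rng.normal(size=n), rng.normal(size=n)
    B1, B2 = rng.normal(size=n), rng.normal(size=n)

    yA = model(A1, A2)
    # freeze parameter 1 at its A-sample values, resample everything else
    yC1 = model(A1, B2)
    var = yA.var()
    # first-order Sobol index of parameter 1 (classical sampling estimator)
    S1 = (np.mean(yA * yC1) - yA.mean() ** 2) / var
    print(f"S1 ~ {S1:.3f} of the variance is explained by parameter 1 alone")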

  13. Nonequilibrium Enhances Adaptation Efficiency of Stochastic Biochemical Systems.

    Directory of Open Access Journals (Sweden)

    Chen Jia

    Adaptation is a crucial biological function possessed by many sensory systems. Early work has shown that some influential equilibrium models can achieve accurate adaptation. However, recent studies indicate that there are close relationships between adaptation and nonequilibrium. In this paper, we provide an explanation of these two seemingly contradictory results based on Markov models with relatively simple networks. We show that as the nonequilibrium driving becomes stronger, the system under consideration will undergo a phase transition along a fixed direction: from non-adaptation to simple adaptation then to oscillatory adaptation, while the transition in the opposite direction is forbidden. This indicates that although adaptation may be observed in equilibrium systems, it tends to occur in systems far away from equilibrium. In addition, we find that nonequilibrium will improve the performance of adaptation by enhancing the adaptation efficiency. All these results provide a deeper insight into the connection between adaptation and nonequilibrium. Finally, we use a more complicated network model of bacterial chemotaxis to validate the main results of this paper.

  14. US residential energy demand and energy efficiency: A stochastic demand frontier approach

    International Nuclear Information System (INIS)

    Filippini, Massimo; Hunt, Lester C.

    2012-01-01

    This paper estimates a US frontier residential aggregate energy demand function using panel data for 48 ‘states’ over the period 1995 to 2007 using stochastic frontier analysis (SFA). Utilizing an econometric energy demand model, the (in)efficiency of each state is modeled and it is argued that this represents a measure of the inefficient use of residential energy in each state (i.e. ‘waste energy’). This underlying efficiency for the US is therefore observed for each state as well as the relative efficiency across the states. Moreover, the analysis suggests that energy intensity is not necessarily a good indicator of energy efficiency, whereas by controlling for a range of economic and other factors, the measure of energy efficiency obtained via this approach is. This is a novel approach to model residential energy demand and efficiency and it is arguably particularly relevant given current US energy policy discussions related to energy efficiency.

  15. A higher-order numerical framework for stochastic simulation of chemical reaction systems.

    KAUST Repository

    Székely, Tamás

    2012-07-15

    BACKGROUND: In this paper, we present a framework for improving the accuracy of fixed-step methods for Monte Carlo simulation of discrete stochastic chemical kinetics. Stochasticity is ubiquitous in many areas of cell biology, for example in gene regulation, biochemical cascades and cell-cell interaction. However most discrete stochastic simulation techniques are slow. We apply Richardson extrapolation to the moments of three fixed-step methods, the Euler, midpoint and θ-trapezoidal τ-leap methods, to demonstrate the power of stochastic extrapolation. The extrapolation framework can increase the order of convergence of any fixed-step discrete stochastic solver and is very easy to implement; the only condition for its use is knowledge of the appropriate terms of the global error expansion of the solver in terms of its stepsize. In practical terms, a higher-order method with a larger stepsize can achieve the same level of accuracy as a lower-order method with a smaller one, potentially reducing the computational time of the system. RESULTS: By obtaining a global error expansion for a general weak first-order method, we prove that extrapolation can increase the weak order of convergence for the moments of the Euler and the midpoint τ-leap methods, from one to two. This is supported by numerical simulations of several chemical systems of biological importance using the Euler, midpoint and θ-trapezoidal τ-leap methods. In almost all cases, extrapolation results in an improvement of accuracy. As in the case of ordinary and stochastic differential equations, extrapolation can be repeated to obtain even higher-order approximations. CONCLUSIONS: Extrapolation is a general framework for increasing the order of accuracy of any fixed-step stochastic solver. This enables the simulation of complicated systems in less time, allowing for more realistic biochemical problems to be solved.
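
    The extrapolation step itself is simple: run the weak first-order Euler tau-leap at step sizes h and h/2 and combine the estimated means as 2*m(h/2) - m(h), cancelling the leading O(h) error term. A sketch on a pure death process, whose exact mean is known (the model and parameters are illustrative, not from the paper):

    import numpy as np

    rng = np.random.default_rng(9)
    c, x0, T = 1.0, 400, 1.0

    def euler_tau_leap_mean(h, nruns=20000):
        nsteps = int(round(T / h))
        x = np.full(nruns, x0, dtype=np.int64)
        for _ in range(nsteps):
            k = rng.poisson(c * x * h)        # number of deaths in the leap
            x = np.maximum(x - k, 0)          # crude guard against negative counts
        return x.mean()

    exact = x0 * np.exp(-c * T)
    m_h  = euler_tau_leap_mean(0.1)
    m_h2 = euler_tau_leap_mean(0.05)
    extrap = 2.0 * m_h2 - m_h                 # Richardson extrapolation of the mean
    print(f"exact {exact:.2f} | h {m_h:.2f} | h/2 {m_h2:.2f} | extrapolated {extrap:.2f}")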

  16. DEA models equivalent to general Nth order stochastic dominance efficiency tests

    Czech Academy of Sciences Publication Activity Database

    Branda, Martin; Kopa, Miloš

    2016-01-01

    Roč. 44, č. 2 (2016), s. 285-289 ISSN 0167-6377 R&D Projects: GA ČR GA13-25911S; GA ČR GA15-00735S Grant - others:GA ČR(CZ) GA15-02938S Institutional support: RVO:67985556 Keywords : Nth order stochastic dominance efficiency * Data envelopment analysis * Convex NSD efficiency * NSD portfolio efficiency Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.657, year: 2016 http://library.utia.cas.cz/separaty/2016/E/branda-0458120.pdf

  17. Fat versus Thin Threading Approach on GPUs: Application to Stochastic Simulation of Chemical Reactions

    KAUST Repository

    Klingbeil, Guido

    2012-02-01

    We explore two different threading approaches on a graphics processing unit (GPU) exploiting two different characteristics of the current GPU architecture. The fat thread approach tries to minimize data access time by relying on shared memory and registers potentially sacrificing parallelism. The thin thread approach maximizes parallelism and tries to hide access latencies. We apply these two approaches to the parallel stochastic simulation of chemical reaction systems using the stochastic simulation algorithm (SSA) by Gillespie [14]. In these cases, the proposed thin thread approach shows comparable performance while eliminating the limitation of the reaction system's size. © 2006 IEEE.

  18. An efficient two stage stochastic optimal energy and reserve management in a microgrid

    International Nuclear Information System (INIS)

    Mohan, Vivek; Singh, Jai Govind; Ongsakul, Weerakorn

    2015-01-01

    Highlights: • A two stage stochastic optimal energy management for a microgrid is proposed. • It can consider all possible sources and levels of nodal power uncertainties. • In the first stage, energy and reserve dispatch for possible uncertainties are estimated. • The second stage fetches and sends dispatch set points directly from the database. • The estimated bounds for the dispatches are more precise and cost effective. - Abstract: In this paper, an efficient two stage stochastic optimal energy and reserve management approach is proposed for a microgrid. In the first stage, the optimal power schedule is determined based on the load, wind and solar power forecasts. The possible uncertainties in the forecasts are expressed as perturbations in nodal power injections, and the corresponding optimal spinning reserves are estimated using sensitivity analysis. Using this information system, the actual spinning reserve for the discrepancy between the measured and forecasted data is directly dispatched in stage 2, utilizing the remaining capacity of demand response, grid purchase and other non-renewable distributed energy resources (DERs). A stochastic perturbed optimal power flow (OPF) based on affine arithmetic (AA) and stochastic weight tradeoff particle swarm optimization (SWT-PSO) is proposed and investigated on the CIGRE LV benchmark microgrid. The approach is found to be better in terms of operational planning, real-time computation and bounds of power flow and cost variables.

  19. Simulation of conditional diffusions via forward-reverse stochastic representations

    KAUST Repository

    Bayer, Christian

    2015-01-07

    We derive stochastic representations for the finite dimensional distributions of a multidimensional diffusion on a fixed time interval,conditioned on the terminal state. The conditioning can be with respect to a fixed measurement point or more generally with respect to some subset. The representations rely on a reverse process connected with the given (forward) diffusion as introduced by Milstein, Schoenmakers and Spokoiny in the context of density estimation. The corresponding Monte Carlo estimators have essentially root-N accuracy, and hence they do not suffer from the curse of dimensionality. We also present an application in statistics, in the context of the EM algorithm.

  20. Numerical simulations of piecewise deterministic Markov processes with an application to the stochastic Hodgkin-Huxley model

    Science.gov (United States)

    Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan

    2016-12-01

    The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of the H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.

  1. Simulated Stochastic Approximation Annealing for Global Optimization With a Square-Root Cooling Schedule

    KAUST Repository

    Liang, Faming

    2014-04-03

    Simulated annealing has been widely used in the solution of optimization problems. As is well known, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used; however, the logarithmic schedule is so slow as to be computationally impractical. This article proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, for example, a square-root cooling schedule, while guaranteeing that the global optima are reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors. Supplementary materials for this article are available online.
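
    A plain simulated-annealing loop with the square-root schedule T_k = T0/sqrt(k) is sketched below on a one-dimensional multimodal test function. Note that this shows only the schedule itself; the article's algorithm additionally couples annealing with stochastic approximation to retain convergence guarantees under such fast cooling.

    import math
    import random

    def f(x):                        # multimodal objective with minimum near x = -0.5
        return 0.1 * x * x + math.sin(3.0 * x)

    rng = random.Random(0)
    x, fx = 5.0, f(5.0)
    best_x, best_f = x, fx
    T0 = 5.0

    for k in range(1, 200001):
        T = T0 / math.sqrt(k)        # square-root cooling schedule
        y = x + rng.gauss(0.0, 0.5)  # random-walk proposal
        fy = f(y)
        # Metropolis acceptance at temperature T
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx

    print(f"best x = {best_x:.4f}, f = {best_f:.4f}")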

  2. A constrained approach to multiscale stochastic simulation of chemically reacting systems

    KAUST Repository

    Cotter, Simon L.

    2011-01-01

    Stochastic simulation of coupled chemical reactions is often computationally intensive, especially if a chemical system contains reactions occurring on different time scales. In this paper, we introduce a multiscale methodology suitable to address this problem, assuming that the evolution of the slow species in the system is well approximated by a Langevin process. It is based on the conditional stochastic simulation algorithm (CSSA) which samples from the conditional distribution of the suitably defined fast variables, given values for the slow variables. In the constrained multiscale algorithm (CMA) a single realization of the CSSA is then used for each value of the slow variable to approximate the effective drift and diffusion terms, in a similar manner to the constrained mean-force computations in other applications such as molecular dynamics. We then show how using the ensuing Fokker-Planck equation approximation, we can in turn approximate average switching times in stochastic chemical systems. © 2011 American Institute of Physics.

  3. Application of Stochastic Unsaturated Flow Theory, Numerical Simulations, and Comparisons to Field Observations

    DEFF Research Database (Denmark)

    Jensen, Karsten Høgh; Mantoglou, Aristotelis

    1992-01-01

    A stochastic unsaturated flow theory and a numerical simulation model have been coupled in order to estimate the large-scale mean behavior of an unsaturated flow system in a spatially variable soil. On the basis of the theoretical developments of Mantoglou and Gelhar (1987a, b, c), the mean unsaturated flow equation representing the mean system behavior is solved using a finite difference numerical solution technique. The effective parameters are evaluated from the stochastic theory formulas before entering them into the numerical solution for each iteration. The stochastic model is applied to a field site in Denmark, where information is available on the spatial variability of soil parameters and variables. Numerical simulations have been carried out, and predictions of the mean behavior and the variance of the capillary tension head and the soil moisture content have been compared to field observations.

  4. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist.

    Directory of Open Access Journals (Sweden)

    Brian Drawert

    2016-12-01

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy-to-use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.

  5. Numerical simulation of stochastic point kinetic equation in the dynamical system of nuclear reactor

    International Nuclear Information System (INIS)

    Saha Ray, S.

    2012-01-01

    Highlights: ► In this paper stochastic neutron point kinetic equations have been analyzed. ► The Euler–Maruyama method and the strong order-1.5 Taylor method have been discussed. ► These methods are applied to the solution of stochastic point kinetic equations. ► Comparisons between the results of these methods and others are presented in tables. ► Graphs for neutron and precursor sample paths are also presented. -- Abstract: In the present paper, numerical approximation methods for efficiently calculating the solution of stochastic point kinetic equations in nuclear reactor dynamics are investigated. A system of Itô stochastic differential equations is analyzed to model the neutron density and the delayed neutron precursors in a point nuclear reactor. The resulting system of Itô stochastic differential equations is solved over each time step. The methods are verified by considering different initial conditions, experimental data and constant reactivities. The computational results indicate that the methods are simple and suitable for solving stochastic point kinetic equations. In this article, a numerical investigation is made in order to observe the random oscillations in neutron and precursor population dynamics in subcritical and critical reactors.
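
    The simpler of the two schemes, Euler-Maruyama, is sketched below for a generic scalar Itô SDE dX = a(X) dt + b(X) dW. The linear drift and diffusion are placeholders, not the actual point-kinetics system; the strong order-1.5 Taylor scheme would add correction terms involving derivatives of the coefficients.

    import numpy as np

    rng = np.random.default_rng(4)
    lam, sig = -0.8, 0.3            # dX = lam*X dt + sig*X dW (placeholder SDE)
    x0, T, nsteps, npaths = 1.0, 2.0, 400, 10000
    dt = T / nsteps

    x = np.full(npaths, x0)
    for _ in range(nsteps):
        dW = rng.normal(0.0, np.sqrt(dt), npaths)
        x = x + lam * x * dt + sig * x * dW   # Euler-Maruyama update

    print("sample mean:", x.mean(), " exact mean:", x0 * np.exp(lam * T))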

  6. Operational Efficiency Forecasting Model of an Existing Underground Mine Using Grey System Theory and Stochastic Diffusion Processes

    Directory of Open Access Journals (Sweden)

    Svetlana Strbac Savic

    2015-01-01

    Forecasting the operational efficiency of an existing underground mine plays an important role in strategic planning of production. The Degree of Operating Leverage (DOL) is used to express the operational efficiency of production. The forecasting model should be able to cover a common time horizon, taking into account the characteristics of the input variables that directly affect the value of DOL; changes in the magnitude of any input variable change the value of DOL. To establish the relationship describing the way of changing, we applied multivariable grey modeling. The established time-sequence multivariable response formula is also used to forecast the future values of the operating leverage. Operational efficiency of production is often associated with diverse sources of uncertainty, and incorporating these uncertainties into the multivariable forecasting model enables a mining company to survive in today’s competitive environment. Simulation of a mean reversion process and geometric Brownian motion is used to describe the stochastic diffusion nature of metal price, as a key element of revenues, and production costs, respectively. By simulating the forecasting model, we imitate its action in order to measure its response to different inputs. The final result of the simulation process is the expected value of DOL for every year of the defined time horizon.
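
    The two stochastic inputs can be sketched as follows, with exact one-step discretizations for both the geometric Brownian motion (metal price) and a mean-reverting Ornstein-Uhlenbeck-type process (production cost). All parameter values are placeholders.

    import numpy as np

    rng = np.random.default_rng(8)
    years, npaths = 10, 5000

    # metal price: GBM, dP = mu*P dt + sigma*P dW (exact lognormal step, dt = 1 year)
    mu, sigma, P0 = 0.03, 0.25, 2000.0
    Z = rng.normal(size=(npaths, years))
    P = P0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) + sigma * Z, axis=1))

    # unit cost: mean reversion to c_bar at speed kappa (exact OU step, dt = 1 year)
    kappa, c_bar, eta, C0 = 0.5, 900.0, 80.0, 1000.0
    C = np.empty((npaths, years))
    prev = np.full(npaths, C0)
    a = np.exp(-kappa)
    sd = eta * np.sqrt((1 - a**2) / (2 * kappa))
    for t in range(years):
        prev = c_bar + (prev - c_bar) * a + sd * rng.normal(size=npaths)
        C[:, t] = prev

    margin = P - C     # feeds the revenue/cost terms of the DOL calculation
    print("year-10 expected price / cost:", P[:, -1].mean(), C[:, -1].mean())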

  7. Multivariate stochastic simulation with subjective multivariate normal distributions

    Science.gov (United States)

    P. J. Ince; J. Buongiorno

    1991-01-01

    In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal distributions.
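
    The standard construction for correlated multivariate normal draws factors the covariance with a Cholesky decomposition and transforms independent standard normals; the means, standard deviations and correlations below are hypothetical:

    import numpy as np

    rng = np.random.default_rng(6)
    mean = np.array([100.0, 50.0, 10.0])      # e.g. price, volume, cost
    sd   = np.array([15.0, 8.0, 2.0])
    corr = np.array([[ 1.0,  0.6, -0.3],
                     [ 0.6,  1.0, -0.1],
                     [-0.3, -0.1,  1.0]])
    cov = np.outer(sd, sd) * corr

    L = np.linalg.cholesky(cov)               # cov = L @ L.T
    z = rng.normal(size=(100000, 3))          # independent N(0,1) draws
    samples = mean + z @ L.T                  # correlated draws

    print(np.round(np.corrcoef(samples, rowvar=False), 2))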

  8. XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations

    Science.gov (United States)

    Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.

    2013-01-01

    XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code.
    Program summary:
    Program title: XMDS2
    Catalogue identifier: AENK_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 2
    No. of lines in distributed program, including test data, etc.: 872490
    No. of bytes in distributed program, including test data, etc.: 45522370
    Distribution format: tar.gz
    Programming language: Python and C++
    Computer: Any computer with a Unix-like system, a C++ compiler and Python
    Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux
    RAM: Problem dependent (roughly 50 bytes per grid point)
    Classification: 4.3, 6.5
    External routines: Problem dependent. Uses FFTW3 Fourier transforms (only for FFT-based spectral methods), dSFMT random number generation (only for stochastic problems), MPI message-passing interface (only for distributed problems), HDF5, GNU Scientific Library (only for Bessel-based spectral methods) and a BLAS implementation (only for non-FFT-based spectral methods)
    Nature of problem: General coupled initial-value stochastic partial differential equations
    Solution method: Spectral method

  9. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Directory of Open Access Journals (Sweden)

    Danilo ePezo

    2014-11-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of high channel numbers. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties – such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Dangerfield et al., 2012; Linaro et al., 2011; Huang et al., 2013a; Orio and Soudry, 2012; Schmandt and Galán, 2012; Goldwyn et al., 2011; Güler, 2013), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: the original Hodgkin and Huxley model, a model with faster sodium channels, and a multi-compartmental model inspired by granular cells. We conclude that for low channel numbers (usually below 1000 per simulated compartment) one should use MC – which is both the most accurate and fastest method. For higher channel numbers, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modelling may be the best method for detailed multicompartment neuron models – in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels.

  10. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Science.gov (United States)

    Pezo, Danilo; Soudry, Daniel; Orio, Patricio

    2014-01-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired by granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914

  11. Stochastic Dominance and Omega Ratio: Measures to Examine Market Efficiency, Arbitrage Opportunity, and Anomaly

    Directory of Open Access Journals (Sweden)

    Xu Guo

    2017-10-01

    Both stochastic dominance and the Omega ratio can be used to examine whether the market is efficient, whether there is any arbitrage opportunity in the market, and whether there is any anomaly in the market. In this paper, we first study the relationship between stochastic dominance and the Omega ratio. We find that second-order stochastic dominance (SD) and/or second-order risk-seeking SD (RSD) alone for any two prospects is not sufficient to imply Omega ratio dominance in the sense that the Omega ratio of one asset is always greater than that of the other. We extend the theory of risk measures by proving that the preference of second-order SD implies the preference of the corresponding Omega ratios only when the return threshold is less than the mean of the higher-return asset. On the other hand, the preference of second-order RSD implies the preference of the corresponding Omega ratios only when the return threshold is larger than the mean of the lower-return asset. Nonetheless, first-order SD does imply Omega ratio dominance. Thereafter, we apply the theory developed in this paper to examine the relationship between property size and property investment in the Hong Kong real estate market. We conclude that the Hong Kong real estate market is not efficient and that there are expected arbitrage opportunities and anomalies in the Hong Kong real estate market. Our findings are useful for investors and policy makers in real estate.

  12. Efficient stochastic EMC/EMI analysis using HDMR-generated surrogate models

    KAUST Repository

    Yücel, Abdulkadir C.

    2011-08-01

    Stochastic methods have been used extensively to quantify effects due to uncertainty in system parameters (e.g. material, geometrical, and electrical constants) and/or excitation on observables pertinent to electromagnetic compatibility and interference (EMC/EMI) analysis (e.g. voltages across mission-critical circuit elements) [1]. In recent years, stochastic collocation (SC) methods, especially those leveraging generalized polynomial chaos (gPC) expansions, have received significant attention [2, 3]. SC-gPC methods probe surrogate models (i.e. compact polynomial input-output representations) to statistically characterize observables. They are nonintrusive, that is they use existing deterministic simulators, and often cost only a fraction of direct Monte-Carlo (MC) methods. Unfortunately, SC-gPC-generated surrogate models often lack accuracy (i) when the number of uncertain/random system variables is large and/or (ii) when the observables exhibit rapid variations. © 2011 IEEE.

  13. Monte-Carlo simulation of a stochastic differential equation

    Science.gov (United States)

    Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG

    2017-12-01

    For solving higher dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered to be more effective than other algorithms, such as the finite element method or the finite difference method. The inhomogeneity of the diffusion coefficient strongly limits the use of different numerical techniques. For better convergence, methods of higher order have been put forward to allow MC codes to work with larger step sizes. The main focus of this work is to look for operators that can produce converging results for large step sizes. As a first step, our comparative analysis is applied to a general stochastic problem. Subsequently, our formulation is applied to the problem of pitch-angle scattering resulting from Coulomb collisions of charged particles in toroidal devices.

  14. Modeling Group Perceptions Using Stochastic Simulation: Scaling Issues in the Multiplicative AHP

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; van den Honert, Robin; Salling, Kim Bang

    2016-01-01

    This paper proposes a new decision support approach for applying stochastic simulation to the multiplicative analytic hierarchy process (AHP) in order to deal with issues concerning the scale parameter. The paper suggests a new approach that captures the influence from the scale parameter by making...

  15. Simulation modeling and optimization of competitive electricity markets and stochastic fluid systems

    NARCIS (Netherlands)

    Ozdemir, O.

    2013-01-01

    The second part of the thesis focuses on a different stochastic problem of finding the optimal hedging points in a manufacturing flow control system. Our simulation-based method allows solving large scale systems that are considered very difficult to solve by current standards in the literature.

  16. Adaptive finite element method assisted by stochastic simulation of chemical systems

    Czech Academy of Sciences Publication Activity Database

    Cotter, S.L.; Vejchodský, Tomáš; Erban, R.

    2013-01-01

    Roč. 35, č. 1 (2013), B107-B131 ISSN 1064-8275 R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional support: RVO:67985840 Keywords : chemical Fokker-Planck * adaptive meshes * stochastic simulation algorithm Subject RIV: BA - General Mathematics Impact factor: 1.940, year: 2013 http://epubs.siam.org/doi/abs/10.1137/120877374

  17. Assessing the potential value for an automated dairy cattle body condition scoring system through stochastic simulation

    NARCIS (Netherlands)

    Bewley, J.M.; Boehlje, M.D.; Gray, A.W.; Hogeveen, H.; Kenyon, S.J.; Eicher, S.D.; Schutz, M.M.

    2010-01-01

    Purpose – The purpose of this paper is to develop a dynamic, stochastic, mechanistic simulation model of a dairy business to evaluate the cost and benefit streams coinciding with technology investments. The model was constructed to embody the biological and economical complexities of a dairy farm

  18. Introducing Stochastic Simulation of Chemical Reactions Using the Gillespie Algorithm and MATLAB: Revisited and Augmented

    Science.gov (United States)

    Argoti, A.; Fan, L. T.; Cruz, J.; Chou, S. T.

    2008-01-01

    The stochastic simulation of chemical reactions, specifically, a simple reversible chemical reaction obeying the first-order, i.e., linear, rate law, has been presented by Martinez-Urreaga and his collaborators in this journal. The current contribution is intended to complement and augment their work in two aspects. First, the simple reversible…

  19. Stochastic simulation of multiscale complex systems with PISKaS: A rule-based approach.

    Science.gov (United States)

    Perez-Acle, Tomas; Fuenzalida, Ignacio; Martin, Alberto J M; Santibañez, Rodrigo; Avaria, Rodrigo; Bernardin, Alejandro; Bustos, Alvaro M; Garrido, Daniel; Jonathan Dushoff; Liu, James H

    2017-11-21

    Computational simulation is a widely employed methodology to study the dynamic behavior of complex systems. Although common approaches are based either on ordinary differential equations or stochastic differential equations, these techniques make several assumptions which, when it comes to biological processes, could often lead to unrealistic models. Among others, model approaches based on differential equations entangle kinetics and causality, fail when complexity increases, separate knowledge from models, and assume that the average behavior of the population encompasses any individual deviation. To overcome these limitations, simulations based on the Stochastic Simulation Algorithm (SSA) appear as a suitable approach to model complex biological systems. In this work, we review three different models executed in PISKaS: a rule-based framework to produce multiscale stochastic simulations of complex systems. These models span multiple time and spatial scales, ranging from gene regulation up to game theory. In the first example, we describe a model of the core regulatory network of gene expression in Escherichia coli, highlighting the continuous model improvement capacities of PISKaS. The second example describes a hypothetical outbreak of the Ebola virus occurring in a compartmentalized environment resembling cities and highways. Finally, in the last example, we illustrate a stochastic model for the prisoner's dilemma, a common approach from the social sciences describing complex interactions involving trust within human populations. As a whole, these models demonstrate the capabilities of PISKaS, providing fertile scenarios in which to explore the dynamics of complex systems. Copyright © 2017. Published by Elsevier Inc.

  20. A micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations

    DEFF Research Database (Denmark)

    Debrabant, Kristian; Samaey, Giovanni; Zieliński, Przemysław

    2017-01-01

    We present and analyse a micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations with separation between the (fast) time-scale of individual trajectories and the (slow) time-scale of the macroscopic function of interest. The algorithm combines short...

  1. A stochastic model simulating the spatiotemporal dynamics of yellow rust on wheat

    DEFF Research Database (Denmark)

    Lett, C.; Østergård, Hanne

    2000-01-01

    A stochastic model of the spatiotemporal dynamics of plant disease epidemics in monocultures is described and applied to the simulation of yellow rust on wheat (Puccinia striiformis f. sp. tritici). The most sensitive parameters of the model are latent period, daily multiplication factor...

  2. Parallel discrete-event simulation of FCFS stochastic queueing networks

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments), which has proven itself to be effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. It is shown how lookahead can be computed for FCFS queueing network simulations; performance data are given that demonstrate the method's effectiveness under moderate to heavy loads; and performance tradeoffs between the quality of lookahead and the cost of computing it are discussed.
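
    The appointment idea lends itself to a compact sketch: by pre-sampling its next service time, an FCFS node can promise downstream neighbours a lower bound on its next departure. The class below is a hypothetical single-node illustration in Python (names invented here), not Nicol's original implementation.

      import random
      from collections import deque

      class FCFSNode:
          """FCFS server that pre-samples service times so it can promise
          neighbours a lower bound ('appointment') on its next departure."""
          def __init__(self, mean_service, seed=0):
              self.rng = random.Random(seed)
              self.mean = mean_service
              self.free_at = 0.0      # completion time of the last accepted job
              self.pending = deque()  # departure times of jobs still in the node
              self.next_service = self.rng.expovariate(1.0 / self.mean)

          def appointment(self, now):
              """No departure can occur before this time."""
              if self.pending:                 # earliest committed departure
                  return self.pending[0]
              return now + self.next_service   # idle: any arrival still needs service

          def accept(self, arrival_time):
              """FCFS: once the pre-sampled service time is fixed, the departure
              time of an arriving job is fully determined."""
              depart = max(self.free_at, arrival_time) + self.next_service
              self.free_at = depart
              self.pending.append(depart)
              self.next_service = self.rng.expovariate(1.0 / self.mean)
              return depart

    A processor can send appointment(now) to its neighbours, letting them safely simulate up to that time without risk of a straggler event arriving in their past.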

  3. Stochastic modeling and simulation of reaction-diffusion system with Hill function dynamics.

    Science.gov (United States)

    Chen, Minghan; Li, Fei; Wang, Shuo; Cao, Young

    2017-03-14

    Stochastic simulation of reaction-diffusion systems presents great challenges for spatiotemporal biological modeling and simulation. One widely used framework for stochastic simulation of reaction-diffusion systems is reaction diffusion master equation (RDME). Previous studies have discovered that for the RDME, when discretization size approaches zero, reaction time for bimolecular reactions in high dimensional domains tends to infinity. In this paper, we demonstrate that in the 1D domain, highly nonlinear reaction dynamics given by Hill function may also have dramatic change when discretization size is smaller than a critical value. Moreover, we discuss methods to avoid this problem: smoothing over space, fixed length smoothing over space and a hybrid method. Our analysis reveals that the switch-like Hill dynamics reduces to a linear function of discretization size when the discretization size is small enough. The three proposed methods could correctly (under certain precision) simulate Hill function dynamics in the microscopic RDME system.

  4. Role of computational efficiency in process simulation

    Directory of Open Access Journals (Sweden)

    Kurt Strand

    1989-07-01

    Full Text Available It is demonstrated how efficient numerical algorithms may be combined to yield a powerful environment for analysing and simulating dynamic systems. The importance of using efficient numerical algorithms is emphasized and demonstrated through examples from the petrochemical industry.

  5. STOCHASTIC SIMULATION FOR BUFFELGRASS (Cenchrus ciliaris L.) PASTURES IN MARIN, N. L., MEXICO

    OpenAIRE

    José Romualdo Martínez-López; Erasmo Gutierrez-Ornelas; Miguel Angel Barrera-Silva; Rafael Retes-López

    2014-01-01

    A stochastic simulation model was constructed to determine the response of the net primary production of buffelgrass (Cenchrus ciliaris L.) and its dry matter intake by cattle in Marín, NL, México. Buffelgrass is very important for the extensive livestock industry in the arid and semiarid areas of northeastern Mexico. The objective of this experiment was to evaluate the behavior of the model by comparing its results with those reported in the literature. The model simulates the monthly production of...

  6. Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals.

    Science.gov (United States)

    Yu, Lianchun; Liu, Liwei

    2014-03-01

    The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the neuron population, as well as the number of ion channels in each neuron, that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of the random gating of ion channels. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of neural systems when energy use is constrained.
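
    The reliability side of this size tradeoff can be illustrated with a toy two-state (closed/open) channel ensemble; this is a deliberate caricature of Hodgkin-Huxley gating, with illustrative rate constants rather than the paper's model.

      import numpy as np

      rng = np.random.default_rng(2)

      def open_fraction_std(n_channels, alpha=0.5, beta=0.5, dt=0.01, steps=20_000):
          """Two-state (closed <-> open) channel ensemble; returns the standard
          deviation of the open fraction over time."""
          n_open = n_channels // 2          # start at the equilibrium fraction
          frac = np.empty(steps)
          for i in range(steps):
              opened = rng.binomial(n_channels - n_open, 1.0 - np.exp(-alpha * dt))
              closed = rng.binomial(n_open, 1.0 - np.exp(-beta * dt))
              n_open += opened - closed
              frac[i] = n_open / n_channels
          return frac.std()

      for n in (100, 1_000, 10_000):
          print(n, open_fraction_std(n))   # channel noise shrinks like 1/sqrt(N)

    More channels mean less gating noise but a higher metabolic cost, which is the tension behind the optimal sizes reported above.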

  7. Stochastic Simulation of Hourly Average Wind Speed in Umudike ...

    African Journals Online (AJOL)

    Ten years of hourly average wind speed data were used to build a seasonal autoregressive integrated moving average (SARIMA) model. The model was used to simulate hourly average wind speed and recommend possible uses at Umudike, South eastern Nigeria. Results showed that the simulated wind behaviour was ...
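
    A hedged sketch of this workflow with statsmodels' SARIMAX is given below; the synthetic series and the (1,0,1)x(1,0,1,24) orders are stand-ins, since the paper's Umudike data and fitted orders are not reproduced here.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      # Synthetic stand-in for hourly wind speed: a diurnal cycle plus AR(1)
      # noise, one year long.
      rng = np.random.default_rng(1)
      n = 365 * 24
      diurnal = 2.0 * np.sin(2 * np.pi * np.arange(n) / 24)
      noise = np.zeros(n)
      for i in range(1, n):
          noise[i] = 0.8 * noise[i - 1] + rng.normal(scale=0.5)
      idx = pd.date_range("2005-01-01", periods=n, freq="h")
      wind = pd.Series(4.0 + diurnal + noise, index=idx)

      # Illustrative SARIMA(1,0,1)x(1,0,1,24) with a 24-hour season.
      res = SARIMAX(wind, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24)).fit(disp=False)
      simulated = res.simulate(nsimulations=24 * 7)  # one simulated week
      print(simulated.describe())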

  8. A stochastic quasi Newton method for molecular simulations

    NARCIS (Netherlands)

    Chau, Chun Dong

    2010-01-01

    In this thesis the Langevin equation with a space-dependent alternative mobility matrix has been considered. Simulations of a complex molecular system with many different length and time scales based on the fundamental equations of motion take a very long simulation time before capturing the

  9. A stochastic simulation approach for production scheduling and ...

    African Journals Online (AJOL)

    The present paper aims to develop a simulation tool for tile manufacturing companies. The paper shows how a simulation approach can be useful to support management decisions related to production scheduling and investment planning. In particular, the aim is to demonstrate the importance of an information system in tile ...

  10. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, that is, the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered a Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market, for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group, as compared with other groups in the DSE market, for both distributions in the time-invariant situation.
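
    For reference, the textbook Cobb-Douglas stochastic frontier with the two inefficiency distributions mentioned above can be written as follows (a standard specification consistent with the abstract, not the authors' exact estimated model):

      \ln y_{it} = \beta_0 + \sum_{k} \beta_k \ln x_{kit} + v_{it} - u_{it},
      \qquad v_{it} \sim N(0, \sigma_v^2), \qquad u_{it} \ge 0,

    where the inefficiency term u_{it} is assigned either a half-normal distribution, |N(0, \sigma_u^2)|, or a truncated-normal distribution, N^{+}(\mu, \sigma_u^2), and technical efficiency is recovered as TE_{it} = \exp(-u_{it}).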

  11. Measuring the Efficiency of a Hospital based on the Econometric Stochastic Frontier Analysis (SFA) Method.

    Science.gov (United States)

    Rezaei, Satar; Zandian, Hamed; Baniasadi, Akram; Moghadam, Telma Zahirian; Delavari, Somayeh; Delavari, Sajad

    2016-02-01

    Hospitals are the most expensive health services providers in the world. Therefore, the evaluation of their performance can be used to reduce costs. The aim of this study was to determine the efficiency of the hospitals at the Kurdistan University of Medical Sciences using stochastic frontier analysis (SFA). This was a cross-sectional and retrospective study that assessed the performance of Kurdistan teaching hospitals (n = 12) between 2007 and 2013. The stochastic frontier analysis method was used to achieve this aim. The numbers of active beds, nurses, physicians, and other staff members were considered as input variables, while inpatient admissions were considered as the output. The data were analyzed using Frontier 4.1 software. The mean technical efficiency of the hospitals we studied was 0.67. The results of the Cobb-Douglas production function showed that the maximum elasticity was related to the active beds and that the elasticity of nurses was negative. Also, the return to scale was increasing. The results of this study indicated that the performances of the hospitals were not appropriate in terms of technical efficiency. In addition, there was capacity to enhance the output of the hospitals by about 33%, compared with the most efficient hospitals studied. It is suggested that the effect of various factors, such as the quality of health care and patients' satisfaction, be considered in future studies assessing hospitals' performances.

  12. Economic Risk and Efficiency Assessment of Fisheries in Abu-Dhabi, United Arab Emirates (UAE: A Stochastic Approach

    Directory of Open Access Journals (Sweden)

    Eihab Fathelrahman

    2014-06-01

    Full Text Available The fishing industry in Abu-Dhabi, United Arab Emirates (UAE), plays an important role in diversifying food sources in order to enhance national food security. The fishing industry is facing an increasing risk that may impact the sustainability (i.e., quantity and quality) of the fish caught and consumed in the UAE. Therefore, the main objective of this study is to analyze common Abu-Dhabi fishing management alternatives using various stochastic dominance techniques (i.e., first/second degree stochastic dominance, stochastic dominance with respect to a function, and stochastic efficiency with respect to a function) to assess the risk facing UAE fishermen. The techniques represent a risk assessment continuum, which can provide a ranking of management alternatives to improve decision-making outcomes and help maintain long-term UAE fishing sustainability. Data for the stochastic dominance analyses were obtained from a cross-sectional survey conducted through face-to-face interviews of Abu-Dhabi, UAE, fishermen. Analysis of fishing methods, trap sizes and trap numbers using stochastic efficiency with respect to a function (SERF) showed that efficient practices were not the same for risk-neutral fishermen as for risk-averse fishermen. Overall, the stochastic dominance results illustrated the importance of considering both attitude towards risk and economic inefficiencies in managing UAE fishery practices and designing successful fishery policies, as well as improving decision-making at the fishermen level.

  13. GillespieSSA: Implementing the Gillespie Stochastic Simulation Algorithm in R

    Directory of Open Access Journals (Sweden)

    Mario Pineda-Krch

    2008-02-01

    Full Text Available The deterministic dynamics of populations in continuous time are traditionally described using coupled, first-order ordinary differential equations. While this approach is accurate for large systems, it is often inadequate for small systems where key species may be present in small numbers or where key reactions occur at a low rate. The Gillespie stochastic simulation algorithm (SSA) is a procedure for generating time-evolution trajectories of finite populations in continuous time and has become the standard algorithm for these types of stochastic models. This article presents a simple-to-use and flexible framework for implementing the SSA using the high-level statistical computing language R and the package GillespieSSA. Using three ecological models as examples (logistic growth, the Rosenzweig-MacArthur predator-prey model, and the Kermack-McKendrick SIRS metapopulation model), this paper shows how a deterministic model can be formulated as a finite-population stochastic model within the framework of SSA theory and how it can be implemented in R. Simulations of the stochastic models are performed using four different SSA Monte Carlo methods: one exact method (Gillespie's direct method) and three approximate methods (the explicit, binomial, and optimized tau-leap methods). Comparison of simulation results confirms that while the time-evolution trajectories obtained from the different SSA methods are indistinguishable, the approximate methods are up to four orders of magnitude faster than the exact method.
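
    To make the exact-versus-approximate contrast concrete, here is a short Python (rather than R) sketch of Poisson tau-leaping for the logistic growth model; it illustrates the class of approximate methods benchmarked in the paper, not the GillespieSSA package API, and uses a fixed (non-adaptive) leap with illustrative parameters.

      import numpy as np

      def tau_leap_logistic(N0, b, d, K, tau, t_end, rng=None):
          """Fixed-step Poisson tau-leaping for stochastic logistic growth:
          birth propensity b*N, death propensity d*N + (b - d)*N**2/K."""
          rng = np.random.default_rng() if rng is None else rng
          N, t, traj = N0, 0.0, [(0.0, N0)]
          while t < t_end and N > 0:
              births = rng.poisson(b * N * tau)
              deaths = rng.poisson((d * N + (b - d) * N * N / K) * tau)
              N = max(N + births - deaths, 0)   # leap: fire all events at once
              t += tau
              traj.append((t, N))
          return traj

      traj = tau_leap_logistic(N0=50, b=2.0, d=1.0, K=1000, tau=0.01, t_end=20.0)
      print(traj[-1])  # the population fluctuates around K = 1000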

  14. Decadal Prediction and Stochastic Simulation of Hydroclimate Over Monsoonal Asia

    Energy Technology Data Exchange (ETDEWEB)

    Ghil, Michael [Univ. of California, Los Angeles, CA (United States); Robertson, Andrew W. [IRI, Chicago, IL (United States); Cook, Edward R. [LDEO Tree Ring Lab., New York, NY (United States); D’Arrigo, Rosanne [LDEO Tree Ring Lab., New York, NY (United States); Lall, Upmanu [Columbia Water Center, New York, NY (United States); Smyth, Padhraic J. [Univ. of California, Irvine, CA (United States)

    2015-01-18

    We developed further our advanced methods of time series analysis and empirical model reduction (EMR) and applied them to climatic time series relevant to hydroclimate over Monsoonal Asia. The EMR methodology was both generalized further and laid on a rigorous mathematical basis via multilayered stochastic models (MSMs). We identified easily testable conditions that imply the existence of a global random attractor for MSMs and allow for non-polynomial predictors. This existence, in turn, guarantees the numerical stability of the MSMs so obtained. We showed that, in the presence of low-frequency variability (LFV), EMR prediction can be improved further by including information from selected times in the system’s past. This prediction method, dubbed Past-Noise Forecasting (PNF), was successfully applied to the Madden-Julian Oscillation (MJO). Our time series analysis and forecasting methods, based on singular-spectrum analysis (SSA) and its enhancements, were applied to several multi-centennial proxy records provided by the Lamont team. These included the Palmer Drought Severity Index (PDSI) for 1300–2005 from the Monsoonal Asia Drought Atlas (MADA), and a 300-member ensemble of pseudo-reconstructions of Indus River discharge for 1702–2005. The latter was shown to exhibit a robust 27-yr low-frequency mode, which helped multi-decadal retroactive forecasts with no look-ahead over this 300-year interval.

  15. Stochastic Processes and Queueing Theory used in Cloud Computer Performance Simulations

    Directory of Open Access Journals (Sweden)

    Florin-Catalin ENACHE

    2015-10-01

    Full Text Available The cloud business has grown exponentially over the last five years. Capacity managers need to concentrate on a practical way to simulate the random demands a cloud infrastructure could face, even though there are not many mathematical tools to simulate such demands. This paper presents an introduction to the most important stochastic processes and queueing theory concepts used for modeling computer performance. Moreover, it shows the cases where such concepts are applicable and where they are not, using clear programming examples of how to simulate a queue, and of how to use and validate a simulation when there are no mathematical concepts to back it up.
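
    In that spirit, a minimal queue simulation that can be validated against theory is sketched below: waiting times of an M/M/1 FCFS queue via the Lindley recursion, checked against the known mean wait rho/(mu*(1 - rho)); the parameters are illustrative.

      import random

      def mm1_waits(lam, mu, n, seed=42):
          """Waiting times in an M/M/1 FCFS queue via the Lindley recursion:
          W[k+1] = max(0, W[k] + S[k] - A[k+1])."""
          rng = random.Random(seed)
          w, waits = 0.0, []
          for _ in range(n):
              waits.append(w)
              w = max(0.0, w + rng.expovariate(mu) - rng.expovariate(lam))
          return waits

      lam, mu = 0.8, 1.0
      waits = mm1_waits(lam, mu, n=200_000)
      print(sum(waits) / len(waits))   # simulated mean waiting time
      print(lam / (mu * (mu - lam)))   # theory: rho/(mu*(1-rho)) = 4.0

    Agreement between the two printed values is exactly the kind of validation the paper argues for before trusting a simulation with no closed-form backup.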

  16. Evaluating Economic Alternatives for Wood Energy Supply Based on Stochastic Simulation

    Directory of Open Access Journals (Sweden)

    Ulises Flores Hernández

    2018-04-01

    Full Text Available Productive forests, as a major source of biomass, represent an important pre-requisite for the development of a bio-economy. In this respect, assessments of biomass availability, efficiency of forest management, forest operations, and economic feasibility are essential. This is certainly the case for Mexico, a country with an increasing energy demand and a considerable potential for sustainable forest utilization. Hence, this paper focuses on analyzing economic alternatives for the Mexican bioenergy supply based on the costs and revenues of utilizing woody biomass residues. With a regional spatial approach, harvesting and transportation costs of utilizing selected biomass residues were stochastically calculated using Monte Carlo simulations. A sensitivity analysis of percentage variation of the most probable estimate in relation to the parameters price and cost for one alternative using net future analysis was conducted. Based on the results for the northern region, a 10% reduction of the transportation cost would reduce overall supply cost, resulting in a total revenue of 13.69 USD/m3 and 0.75 USD/m3 for harvesting residues and non-extracted stand residues, respectively. For the central south region, it is estimated that a contribution of 16.53 USD/m3 from 2013 and a total revenue of 33.00 USD/m3 in 2030 from sawmill residues will improve the value chain. The given approach and outputs provide the basis for the decision-making process regarding forest utilization towards energy generation based on economic indicators.
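
    The flavour of such a Monte Carlo cost-revenue calculation is sketched below; the triangular price and cost ranges are invented placeholders, not the study's input data.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 100_000  # Monte Carlo draws

      # Hypothetical USD/m3 figures with triangular spreads (placeholders).
      price = rng.triangular(25.0, 30.0, 35.0, n)          # residue market price
      harvest_cost = rng.triangular(8.0, 10.0, 13.0, n)    # harvesting cost
      transport_cost = rng.triangular(5.0, 7.0, 10.0, n)   # transportation cost

      revenue = price - harvest_cost - transport_cost
      print(f"mean net revenue   : {revenue.mean():6.2f} USD/m3")
      print(f"5th-95th percentile: {np.percentile(revenue, [5, 95])}")

      # Sensitivity in the spirit of the abstract: a 10% transport-cost cut
      # shifts the whole revenue distribution upward.
      revenue_low_t = price - harvest_cost - 0.9 * transport_cost
      print(f"with -10% transport: {revenue_low_t.mean():6.2f} USD/m3")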

  17. D-leaping: Accelerating stochastic simulation algorithms for reactions with delays

    International Nuclear Information System (INIS)

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2009-01-01

    We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.

  18. An Efficient Forward-Reverse EM Algorithm for Statistical Inference in Stochastic Reaction Networks

    KAUST Repository

    Bayer, Christian

    2016-01-06

    In this work [1], we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce an efficient two-phase algorithm in which the first phase is deterministic and provides a starting point for the second phase, a Monte Carlo EM algorithm.

  19. A fire management simulation model using stochastic arrival times

    Science.gov (United States)

    Eric L. Smith

    1987-01-01

    Fire management simulation models are used to predict the impact of changes in the fire management program on fire outcomes. As with all models, the goal is to abstract reality without seriously distorting relationships between variables of interest. One important variable of fire organization performance is the length of time it takes to get suppression units to the...

  20. Simulation and Inference for Stochastic Differential Equations With R Examples

    CERN Document Server

    Iacus, Stefano M

    2008-01-01

    Organized into four chapters, this book presents several classes of processes used in mathematics, computational biology, finance and the social sciences. Dealing with simulation schemes, it focuses on parametric estimation techniques. It also contains topics like nonparametric estimation, model identification and change point estimation

  1. Differentials in technical efficiency among smallholder cassava farmers in Central Madagascar: A Cobb Douglas stochastic frontier production approach

    OpenAIRE

    B.C. Okoye; A. Abass; B. Bachwenkizi; G. Asumugha; B. Alenkhe; R. Ranaivoson; R. Randrianarivelo; N. Rabemanantsoa; I. Ralimanana

    2016-01-01

    This study employed the Cobb-Douglas stochastic frontier production function to measure the level of technical efficiency among smallholder cassava farmers in Central Madagascar. A multi-stage random sampling technique was used to select 180 cassava farmers in the region and from this sample, input-output data were obtained using the cost route approach. The parameters of the stochastic frontier production function were estimated using the maximum likelihood method. The results of the analysi...

  2. A Cobb Douglas stochastic frontier model on measuring domestic bank efficiency in Malaysia.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The banking system plays an important role in the economic development of any country. Domestic banks, which are the main components of the banking system, have to be efficient; otherwise, they may create obstacles in the process of development of any economy. This study examines the technical efficiency of the Malaysian domestic banks listed in the Kuala Lumpur Stock Exchange (KLSE) market over the period 2005-2010. A parametric approach, the Stochastic Frontier Approach (SFA), is used in this analysis. The findings show that Malaysian domestic banks have exhibited an average overall efficiency of 94 percent, implying that the sample banks wasted an average of 6 percent of their inputs. Among the banks, RHBCAP is found to be highly efficient with a score of 0.986, and PBBANK is noted to have the lowest efficiency with a score of 0.918. The results also show that the level of efficiency increased during the period of study, and that the technical efficiency effect fluctuated considerably over time.

  3. Simulation-optimization framework for multi-site multi-season hybrid stochastic streamflow modeling

    Science.gov (United States)

    Srivastav, Roshan; Srinivasan, K.; Sudheer, K. P.

    2016-11-01

    A simulation-optimization (S-O) framework is developed for the hybrid stochastic modeling of multi-site multi-season streamflows. The multi-objective optimization model formulated is the driver and the multi-site, multi-season hybrid matched block bootstrap model (MHMABB) is the simulation engine within this framework. The multi-site multi-season simulation model is the extension of the existing single-site multi-season simulation model. A robust and efficient evolutionary search based technique, namely the non-dominated sorting based genetic algorithm (NSGA-II), is employed as the solution technique for the multi-objective optimization within the S-O framework. The objective functions employed are related to the preservation of the multi-site critical deficit run sum and the constraints introduced are concerned with the hybrid model parameter space, and the preservation of certain statistics (such as inter-annual dependence and/or skewness of aggregated annual flows). The efficacy of the proposed S-O framework is brought out through a case example from the Colorado River basin. The proposed multi-site multi-season model AMHMABB (whose parameters are obtained from the proposed S-O framework) preserves the temporal as well as the spatial statistics of the historical flows. Also, the other multi-site deficit run characteristics, namely the number of runs, the maximum run length, the mean run sum and the mean run length, are well preserved by the AMHMABB model. Overall, the proposed AMHMABB model is able to show better streamflow modeling performance when compared with the simulation based SMHMABB model, plausibly due to the significant role played by: (i) the objective functions related to the preservation of multi-site critical deficit run sum; (ii) the huge hybrid model parameter space available for the evolutionary search and (iii) the constraint on the preservation of the inter-annual dependence. Split-sample validation results indicate that the AMHMABB model is

  4. Neural network stochastic simulation applied for quantifying uncertainties

    Directory of Open Access Journals (Sweden)

    N Foudil-Bey

    2016-09-01

    Full Text Available Generally, geostatistical simulation methods are used to generate several realizations of physical properties in the sub-surface; these methods are based on variogram analysis and limited to measuring the correlation between variables at two locations only. In this paper, we propose a simulation of properties based on supervised neural network training on the existing drilling data set. The major advantage is that this method does not require a preliminary geostatistical study and takes several points into account. As a result, geological information and diverse geophysical data can be combined easily. To do this, we used a neural network with a feed-forward multi-layer perceptron architecture; we then used the back-propagation algorithm with the conjugate gradient technique to minimize the error of the network output. The learning process can create links between different variables; this relationship can be used for interpolation of the properties on the one hand, or to generate several possible distributions of the physical properties on the other hand, by changing each time a random value of the input neurons that is kept constant during the learning period. This method was tested on real data to simulate multiple realizations of the density and the magnetic susceptibility in three dimensions at the mining camp of Val d'Or, Québec (Canada).

  5. Application of users’ light-switch stochastic models to dynamic energy simulation

    DEFF Research Database (Denmark)

    Camisassi, V.; Fabi, V.; Andersen, Rune Korsholm

    2015-01-01

    The design of an innovative building should include an estimation of the building's overall energy flows. These are principally related to six main influencing factors (IEA-ECB Annex 53): climate, building envelope and equipment, operation and maintenance, occupant behaviour, and indoor environment conditions. Consequently, energy-related occupant behaviour should be taken into account by energy simulation software. Previous research (Bourgeois et al. 2006, Buso 2012, Fabi 2012) has already revealed differences in terms of energy loads between considering occupants' behaviour as stochastic processes rather than deterministic inputs, due to the uncertain nature of human behaviour. In this paper, new stochastic models of users' interaction with artificial lighting systems are developed and implemented in the energy simulation software IDA ICE. They were developed from field measurements in an office building in Prague...

  6. Accurate reaction-diffusion operator splitting on tetrahedral meshes for parallel stochastic molecular simulations

    International Nuclear Information System (INIS)

    Hepburn, I.; De Schutter, E.; Chen, W.

    2016-01-01

    Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space either in a discrete time or discrete space framework, which has led to the development of parallel methods that can take advantage of the power of modern supercomputers in recent years. We systematically test suggested components of stochastic reaction-diffusion operator splitting in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy with minimal performance cost. We test a range of models in small-scale MPI simulations from simple diffusion models to realistic biological models and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification.

  7. Efficient Galerkin solution of stochastic fractional differential equations using second kind Chebyshev wavelets

    Directory of Open Access Journals (Sweden)

    Fakhrodin Mohammadi

    2017-10-01

    Full Text Available Stochastic fractional differential equations (SFDEs) have been used for modeling many physical problems in the fields of turbulence, heterogeneous flows and materials, viscoelasticity and electromagnetic theory. In this paper, an efficient wavelet Galerkin method based on the second kind Chebyshev wavelets is proposed for the approximate solution of SFDEs. In this approach, operational matrices of the second kind Chebyshev wavelets are used for reducing SFDEs to a linear system of algebraic equations that can be solved easily. Convergence and error analysis of the proposed method is considered. Some numerical examples are performed to confirm the applicability and efficiency of the proposed method.

  8. Geo-electrical data fusion by stochastic co-conditioning simulations for delineating groundwater protection zones

    OpenAIRE

    Dassargues, Alain

    2006-01-01

    In hydrogeology, advances in the delimitation of protection zones are made by the use of stochastic simulations integrating all available data. In practice, due to the few available measurements of the main parameters (hard data), it is very useful to integrate several secondary properties of the media as indirect data (soft data) to reduce the uncertainty of the results. In aquifers, most of the solute spreading is governed by the hydraulic conductivity (K) spatial variability, which is gene...

  9. Overview of the TurbSim Stochastic Inflow Turbulence Simulator: Version 1.10

    Energy Technology Data Exchange (ETDEWEB)

    Kelley, N. D.; Jonkman, B. J.

    2006-09-01

    The TurbSim stochastic inflow turbulence code was developed to provide a numerical simulation of a full-field flow that contains coherent turbulence structures reflecting the proper spatiotemporal turbulent velocity field relationships seen in instabilities associated with nocturnal boundary layer flows. This report provides the user with an overview of how the TurbSim code has been developed and some of the theory behind that development.

  10. Stochastic effects in real and simulated charged particle beams

    Directory of Open Access Journals (Sweden)

    Jürgen Struckmeier

    2000-03-01

    Full Text Available The Vlasov equation embodies the smooth field approximation of the self-consistent equation of motion for charged particle beams. This framework is fundamentally altered if we include the fluctuating forces that originate from the actual charge granularity. We thereby perform the transition from a reversible description to a statistical mechanics description covering also the irreversible aspects of beam dynamics. Taking into account contributions from fluctuating forces is mandatory if we want to describe effects such as intrabeam scattering or temperature balancing within beams. Furthermore, the appearance of “discreteness errors” in computer simulations of beams can be modeled as “exact” beam dynamics that are being modified by fluctuating “error forces.” It will be shown that the related emittance increase depends on two distinct quantities: the magnitude of the fluctuating forces embodied in a friction coefficient, γ, and the correlation time dependent average temperature anisotropy. These analytical results are verified by various computer simulations.

  11. A stochastic simulation framework for the prediction of strategic noise mapping and occupational noise exposure using the random walk approach.

    Directory of Open Access Journals (Sweden)

    Lim Ming Han

    Full Text Available Strategic noise mapping provides important information for noise impact assessment and noise abatement. However, producing reliable strategic noise mapping in a dynamic, complex working environment is difficult. This study proposes the implementation of the random walk approach as a new stochastic technique to simulate noise mapping and to predict the noise exposure level in a workplace. A stochastic simulation framework and software, namely RW-eNMS, were developed to facilitate the random walk approach in noise mapping prediction. This framework considers the randomness and complexity of machinery operation and noise emission levels. Also, it assesses the impact of noise on the workers and the surrounding environment. For data validation, three case studies were conducted to check the accuracy of the prediction data and to determine the efficiency and effectiveness of this approach. The results showed high prediction accuracy, with most absolute differences below 2 dBA; the predicted noise doses also fell mostly within the measured range. Therefore, the random walk approach was effective in dealing with environmental noise. It could predict strategic noise mapping to facilitate noise monitoring and noise control in workplaces.
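
    As a rough illustration of the random-walk idea only (not the RW-eNMS formulation itself), the sketch below releases walkers from hypothetical machine positions and turns normalized visit counts into a relative exposure surface.

      import numpy as np

      rng = np.random.default_rng(0)
      nx = ny = 50
      visits = np.zeros((nx, ny))
      steps = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])
      sources = [(10, 10), (35, 40)]   # hypothetical machine positions

      # Accumulate visit counts of random walkers released from each source;
      # the normalized counts give a crude relative exposure map.
      for sx, sy in sources:
          for _ in range(5_000):
              x, y = sx, sy
              for _ in range(100):
                  dx, dy = steps[rng.integers(4)]
                  x = min(max(x + dx, 0), nx - 1)   # reflect at the walls
                  y = min(max(y + dy, 0), ny - 1)
                  visits[x, y] += 1

      relative_dB = 10 * np.log10(visits / visits.max() + 1e-12)
      print(relative_dB[10, 10], relative_dB[25, 25])  # loud near a source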

  12. Efficient simulation of a tandem Jackson network

    NARCIS (Netherlands)

    Kroese, Dirk; Nicola, V.F.

    2002-01-01

    The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds

  13. Non-intrusive uncertainty quantification of computational fluid dynamics simulations: notes on the accuracy and efficiency

    Science.gov (United States)

    Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher

    2017-11-01

    Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.

  14. Stochastic analysis and simulation of hydrometeorological processes for optimizing hybrid renewable energy systems

    Science.gov (United States)

    Tsekouras, Georgios; Ioannou, Christos; Efstratiadis, Andreas; Koutsoyiannis, Demetris

    2013-04-01

    The drawbacks of conventional energy sources, including their negative environmental impacts, emphasize the need to integrate renewable energy sources into the energy balance. However, the renewable sources strongly depend on time-varying and uncertain hydrometeorological processes, including wind speed, sunshine duration and solar radiation. To study the design and management of hybrid energy systems we investigate the stochastic properties of these natural processes, including possible long-term persistence. We use wind speed and sunshine duration time series retrieved from a European database of daily records and we estimate representative values of the Hurst coefficient for both variables. We conduct simultaneous generation of synthetic time series of wind speed and sunshine duration on yearly, monthly and daily scales. To this end, we use the Castalia software system, which performs multivariate stochastic simulation. Using these time series as input, we perform stochastic simulation of an autonomous hypothetical hybrid renewable energy system and optimize its performance using genetic algorithms. For the system design we optimize the sizing of the system in order to satisfy the energy demand with high reliability while also minimizing the cost. While the simulation scale is daily, a simple method allows utilizing the sub-daily distribution of the produced wind power. Various scenarios are assumed in order to examine the influence of input parameters, such as the Hurst coefficient, and design parameters, such as the photovoltaic panel angle.

  15. Stochastic four-way coupling of gas-solid flows for Large Eddy Simulations

    Science.gov (United States)

    Curran, Thomas; Denner, Fabian; van Wachem, Berend

    2017-11-01

    The interaction of solid particles with turbulence has for long been a topic of interest for predicting the behavior of industrially relevant flows. For the turbulent fluid phase, Large Eddy Simulation (LES) methods are widely used for their low computational cost, leaving only the sub-grid scales (SGS) of turbulence to be modelled. Although LES has seen great success in predicting the behavior of turbulent single-phase flows, the development of LES for turbulent gas-solid flows is still in its infancy. This contribution aims at constructing a model to describe the four-way coupling of particles in an LES framework, by considering the role particles play in the transport of turbulent kinetic energy across the scales. Firstly, a stochastic model reconstructing the sub-grid velocities for the particle tracking is presented. Secondly, to solve particle-particle interaction, most models involve a deterministic treatment of the collisions. We finally introduce a stochastic model for estimating the collision probability. All results are validated against fully resolved DNS-DPS simulations. The final goal of this contribution is to propose a global stochastic method adapted to two-phase LES simulation where the number of particles considered can be significantly increased. Financial support from PetroBras is gratefully acknowledged.

  16. Lagrangian stochastic modelling in Large-Eddy Simulation of turbulent particle-laden flows

    Science.gov (United States)

    Chibbaro, Sergio; Innocenti, Alessio; Marchioli, Cristian

    2017-11-01

    Large-Eddy Simulation (LES) in Eulerian-Lagrangian studies of particle-laden flows is one of the most promising and viable approaches when Direct Numerical Simulation (DNS) is not affordable. However applicability of LES to particle-laden flows is limited by the modeling of the Sub-Grid Scale (SGS) turbulence effects on particle dynamics. These effects may be taken into account through a stochastic SGS model for the Equations of Particle Motion (EPM) that extends the Velocity Filtered Density Function method originally developed for reactive flows, to two-phase flows. The underlying filtered density function is simulated through a Lagrangian Monte Carlo procedure, where a set of Stochastic Differential Equations (SDE) is solved along the trajectory of a particle. The resulting Lagrangian stochastic model has been tested for the reference case of turbulent channel flow. Tests with inertial particles have been performed focusing on particle preferential concentration and segregation in the near-wall region: upon comparison with DNS-based statistics, our results show improved accuracy with respect to LES with no SGS model in the EPM for different Stokes numbers. Furthermore, statistics of the particle velocity recover well DNS levels.

  17. A stochastic frontier analysis of technical efficiency of fish cage culture in Peninsular Malaysia.

    Science.gov (United States)

    Islam, Gazi Md Nurul; Tai, Shzee Yew; Kusairi, Mohd Noh

    2016-01-01

    Cage culture plays an important role in achieving higher output and generating more export earnings in Malaysia. However, the costs of fingerlings, feed and labour have increased substantially for cage culture in the coastal areas of Peninsular Malaysia. This paper uses farm-level data gathered from Manjung, Perak and Kota Tinggi, Johor to investigate the technical efficiency of brackish water fish cage culture using the stochastic frontier approach. The technical efficiency was estimated and, specifically, the factors affecting technical inefficiency of the fish cage culture system in Malaysia were investigated. On average, 37 percent of the sampled fish cage farms are technically efficient. The results suggest that very high degrees of technical inefficiency exist among the cage culturists. This implies that great potential exists to increase fish production through improved efficiency in cage culture management in Peninsular Malaysia. The results indicate that farmers obtained grouper fingerlings from other neighboring countries due to the scarcity of fingerlings from wild sources. Feeding grouper (Epinephelus fuscoguttatus) requires relatively higher costs than seabass (Lates calcarifer) production in cage farms in the study areas. Initiatives to undertake extension programmes at the farm level are needed to help cage culturists utilize their resources more efficiently in order to substantially enhance their fish production.

  18. An efficient forward–reverse expectation-maximization algorithm for statistical inference in stochastic reaction networks

    KAUST Repository

    Bayer, Christian

    2016-02-20

    © 2016 Taylor & Francis Group, LLC. ABSTRACT: In this work, we present an extension of the forward–reverse representation introduced by Bayer and Schoenmakers (Annals of Applied Probability, 24(5):1994–2032, 2014) to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, that is, SRNs conditional on their values in the extremes of given time intervals. We then employ this SRN bridge-generation technique to the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equations approximation; then, during phase II, we apply the Monte Carlo version of the expectation-maximization algorithm to the phase I output. By selecting a set of overdispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.

  19. Simulating local measurements on a quantum many-body system with stochastic matrix product states

    DEFF Research Database (Denmark)

    Gammelmark, Søren; Mølmer, Klaus

    2010-01-01

    We demonstrate how to simulate both discrete and continuous stochastic evolutions of a quantum many-body system subject to measurements using matrix product states. A particular, but generally applicable, measurement model is analyzed and a simple representation in terms of matrix product operators...... is found. The technique is exemplified by numerical simulations of the antiferromagnetic Heisenberg spin-chain model subject to various instances of the measurement model. In particular, we focus on local measurements with small support and nonlocal measurements, which induce long-range correlations....

  20. An Efficient Simulation Method for Rare Events

    KAUST Repository

    Rached, Nadhir B.

    2015-01-07

    Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.
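
    A minimal sketch of the importance-sampling idea behind this class of estimators is given below; it uses plain exponential twisting for i.i.d. Exp(1) variables as a stand-in, not the paper's hazard rate twisting rule or its parameter-selection scheme.

      import numpy as np

      def pr_sum_exceeds(n, gamma, n_samples=100_000, seed=3):
          """Estimate P(X1 + ... + Xn > gamma) for i.i.d. Exp(1) variables by
          exponential twisting: sample Xi ~ Exp(1 - theta) and reweight by the
          likelihood ratio exp(-theta*S) / (1 - theta)**n."""
          rng = np.random.default_rng(seed)
          theta = 1.0 - n / gamma          # tilts the mean of the sum onto gamma
          assert 0.0 < theta < 1.0, "requires gamma > n (a rare event)"
          x = rng.exponential(1.0 / (1.0 - theta), size=(n_samples, n))
          s = x.sum(axis=1)
          lr = np.exp(-theta * s) / (1.0 - theta) ** n
          return np.mean((s > gamma) * lr)

      # Crude MC would need billions of samples to see this event even once;
      # the exact tail, for checking, is scipy.stats.gamma.sf(40.0, 10).
      print(pr_sum_exceeds(n=10, gamma=40.0))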

  1. Stochastic Modeling of Overtime Occupancy and Its Application in Building Energy Simulation and Calibration

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Kaiyu; Yan, Da; Hong, Tianzhen; Guo, Siyue

    2014-02-28

    Overtime is a common phenomenon around the world. Overtime drives both internal heat gains from occupants, lighting and plug-loads, and HVAC operation during overtime periods. Overtime leads to longer occupancy hours and extended operation of building services systems beyond normal working hours, thus overtime impacts total building energy use. Current literature lacks methods to model overtime occupancy because overtime is stochastic in nature and varies by individual occupants and by time. To address this gap in the literature, this study aims to develop a new stochastic model based on the statistical analysis of measured overtime occupancy data from an office building. A binomial distribution is used to represent the total number of occupants working overtime, while an exponential distribution is used to represent the duration of overtime periods. The overtime model is used to generate overtime occupancy schedules as an input to the energy model of a second office building. The measured and simulated cooling energy use during the overtime period is compared in order to validate the overtime model. A hybrid approach to energy model calibration is proposed and tested, which combines ASHRAE Guideline 14 for the calibration of the energy model during normal working hours, and a proposed KS test for the calibration of the energy model during overtime. The developed stochastic overtime model and the hybrid calibration approach can be used in building energy simulations to improve the accuracy of results, and better understand the characteristics of overtime in office buildings.
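
    The two fitted distributions translate directly into a schedule generator; the sketch below uses invented parameter values, not the ones estimated from the measured office data.

      import numpy as np

      def overtime_day(n_occupants, p_overtime, mean_duration_h, rng):
          """One day's overtime, following the two distributions in the paper:
          Binomial for how many occupants stay, exponential for how long each
          stays.  The parameter values used here are illustrative."""
          n_stay = rng.binomial(n_occupants, p_overtime)
          return rng.exponential(mean_duration_h, size=n_stay)

      rng = np.random.default_rng(11)
      days = [overtime_day(80, 0.25, 1.5, rng) for _ in range(250)]
      person_hours = [d.sum() for d in days]
      print(f"mean overtime person-hours per day: {np.mean(person_hours):.1f}")
      # Sanity check: 80 occupants * 0.25 * 1.5 h = 30 person-hours per day.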

  2. Monte Carlo simulation of induction time and metastable zone width; stochastic or deterministic?

    Science.gov (United States)

    Kubota, Noriaki

    2018-03-01

    The induction time and metastable zone width (MSZW) measured for small samples (say 1 mL or less) both scatter widely. Thus, these two are observed as stochastic quantities. In contrast, for large samples (say 1000 mL or more), the induction time and MSZW are observed as deterministic quantities. The reason for such experimental differences is investigated with Monte Carlo simulation. In the simulation, the time (under isothermal conditions) and supercooling (under polythermal conditions) at which a first single crystal is detected are defined as the induction time t and the MSZW ΔT for small samples, respectively. The number of crystals just at the moment of t and ΔT is unity. A first crystal emerges at random due to the intrinsic nature of nucleation; accordingly, t and ΔT become stochastic. For large samples, the time and supercooling at which the number density of crystals N/V reaches a detector sensitivity (N/V)det are defined as t and ΔT for isothermal and polythermal conditions, respectively. The points of t and ΔT are those at which a large number of crystals have accumulated. Consequently, t and ΔT become deterministic according to the law of large numbers. Whether t and ΔT are stochastic or deterministic in actual experiments should not be attributed to a change in nucleation mechanisms at the molecular level. It could be just a problem caused by differences in the experimental definition of t and ΔT.
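
    The small-sample/large-sample contrast can be reproduced with a few lines of Monte Carlo, treating nucleation as a Poisson process with an illustrative rate J: the first-event time is exponential, while the time to reach a detector-sensitivity count is a Gamma variable that concentrates for large volumes.

      import numpy as np

      rng = np.random.default_rng(5)
      J, n_runs = 10.0, 1000   # illustrative nucleation rate per (volume*time)

      # Small sample: t = time of the first nucleation event ~ Exp(J*V),
      # so repeated runs scatter widely (coefficient of variation = 1).
      V_small = 0.001
      t_small = rng.exponential(1.0 / (J * V_small), size=n_runs)

      # Large sample: t = time for N/V to reach the detector sensitivity;
      # the k-th Poisson event time is Gamma(k), which concentrates by the
      # law of large numbers, so t is effectively deterministic.
      V_large, nv_det = 1000.0, 5.0
      k = int(nv_det * V_large)
      t_large = rng.gamma(shape=k, scale=1.0 / (J * V_large), size=n_runs)

      print(f"small: mean {t_small.mean():.1f}, CV {t_small.std() / t_small.mean():.3f}")
      print(f"large: mean {t_large.mean():.3f}, CV {t_large.std() / t_large.mean():.4f}")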

  3. Long-time analytic approximation of large stochastic oscillators: Simulation, analysis and inference.

    Directory of Open Access Journals (Sweden)

    Giorgos Minas

    2017-07-01

    Full Text Available In order to analyse large complex stochastic dynamical models such as those studied in systems biology, there is currently a great need both for analytical tools and for algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called phase-corrected LNA (pcLNA), overcomes the main limitations of the standard Linear Noise Approximation (LNA) to remain uniformly accurate for long times, while maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate but much faster than leaping algorithms and algorithms for integration of diffusion equations. Stochastic versions of published models of the circadian clock and NF-κB system are used to illustrate our results.

  4. Anisotropic Stochastic Vortex Structure Method for Simulating Particle Collision in Turbulent Shear Flows

    Science.gov (United States)

    Dizaji, Farzad; Marshall, Jeffrey; Grant, John; Jin, Xing

    2017-11-01

    Accounting for the effect of subgrid-scale turbulence on interacting particles remains a challenge when using Reynolds-Averaged Navier Stokes (RANS) or Large Eddy Simulation (LES) approaches for simulation of turbulent particulate flows. The standard stochastic Lagrangian method for introducing turbulence into particulate flow computations is not effective when the particles interact via collisions, contact electrification, etc., since this method is not intended to accurately model relative motion between particles. We have recently developed the stochastic vortex structure (SVS) method and demonstrated its use for accurate simulation of particle collision in homogeneous turbulence; the current work presents an extension of the SVS method to turbulent shear flows. The SVS method simulates subgrid-scale turbulence using a set of randomly-positioned, finite-length vortices to generate a synthetic fluctuating velocity field. It has been shown to accurately reproduce the turbulence inertial-range spectrum and the probability density functions for the velocity and acceleration fields. In order to extend SVS to turbulent shear flows, a new inversion method has been developed to orient the vortices in order to generate a specified Reynolds stress field. The extended SVS method is validated in the present study with comparison to direct numerical simulations for a planar turbulent jet flow. This research was supported by the U.S. National Science Foundation under Grant CBET-1332472.

  5. Technical Efficiency of Teaching Hospitals in Iran: The Use of Stochastic Frontier Analysis, 1999–2011

    Directory of Open Access Journals (Sweden)

    Reza Goudarzi

    2014-07-01

    Full Text Available Background Hospitals are highly resource-dependent settings, which spend a large proportion of healthcare financial resources. The analysis of hospital efficiency can provide insight into how scarce resources are used to create health values. This study examines the Technical Efficiency (TE) of 12 teaching hospitals affiliated with Tehran University of Medical Sciences (TUMS) between 1999 and 2011. Methods The Stochastic Frontier Analysis (SFA) method was applied to estimate the efficiency of TUMS hospitals. A best function, referred to as output and input parameters, was calculated for the hospitals. Numbers of medical doctors, nurses, and other personnel, active beds, and outpatient admissions were considered as the input variables and the number of inpatient admissions as the output variable. Results The mean level of TE was 59% (ranging from 22 to 81%). During the study period the efficiency increased from 61 to 71%. Outpatient admissions, other personnel and medical doctors significantly and positively affected the production (P < 0.05). Concerning the Constant Return to Scale (CRS), an optimal production scale was found, implying that the productions of the hospitals were approximately constant. Conclusion The findings of this study show a remarkable waste of resources in the TUMS hospitals during the decade considered. This warrants policy-makers and top management in TUMS to consider steps to improve the financial management of the university hospitals.

  6. Examining the determinants of efficiency using a latent class stochastic frontier model

    Directory of Open Access Journals (Sweden)

    Michael Danquah

    2015-12-01

    Full Text Available In this study, we combine the latent class stochastic frontier model with the complex time decay model to form a single-stage approach that accounts for unobserved technological differences to estimate efficiency and the determinants of efficiency. In this way, we contribute to the literature by estimating “pure” efficiency and determinants of productive units based on the class structure. An application of this proposed model is presented using data on the Ghanaian banking system. Our results show that inefficiency effects on the productive unit are specific to the class structure of the productive unit, and therefore assuming a common technology for all productive units, as is done in the popular Battese and Coelli model used extensively in the literature, may be misleading. The study therefore provides useful empirical evidence on the importance of accounting for unobserved technological differences across productive units. A policy based on the identified classes of the productive unit enables more accurate and effectual measures to address efficiency challenges within the banking industry, thereby promoting financial sector development and economic growth.

  7. Efficient Output Solution for Nonlinear Stochastic Optimal Control Problem with Model-Reality Differences

    Directory of Open Access Journals (Sweden)

    Sie Long Kek

    2015-01-01

    Full Text Available A computational approach is proposed for solving the discrete time nonlinear stochastic optimal control problem. Our aim is to obtain the optimal output solution of the original optimal control problem through solving the simplified model-based optimal control problem iteratively. In our approach, the adjusted parameters are introduced into the model used such that the differences between the real system and the model used can be computed. Particularly, system optimization and parameter estimation are integrated interactively. On the other hand, the output is measured from the real plant and is fed back into the parameter estimation problem to establish a matching scheme. During the calculation procedure, the iterative solution is updated in order to approximate the true optimal solution of the original optimal control problem despite model-reality differences. For illustration, a wastewater treatment problem is studied and the results show the efficiency of the approach proposed.

  8. THE IMPACT OF COMPETITIVENESS ON TRADE EFFICIENCY: THE ASIAN EXPERIENCE BY USING THE STOCHASTIC FRONTIER GRAVITY MODEL

    Directory of Open Access Journals (Sweden)

    Memduh Alper Demir

    2017-12-01

    Full Text Available The purpose of this study is to examine the bilateral machinery and transport equipment trade efficiency of fourteen selected Asian countries by applying a stochastic frontier gravity model. These countries have the top machinery and transport equipment trade (both export and import) volumes in Asia. The model we use includes variables such as income, market size of trading partners, distance, common culture, common border, common language and global economic crisis, similar to earlier studies using stochastic frontier gravity models. Our work, however, includes an additional variable, the normalized revealed comparative advantage (NRCA) index. The NRCA index is comparable across commodity, country and time. Thus, the NRCA index is calculated and then included in our stochastic frontier gravity model to see the impact of competitiveness (here measured by the NRCA index) on the efficiency of trade.
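
    A stochastic frontier gravity model of the kind used here is typically written as follows (a generic statement of the specification; the study's exact covariate set may differ):

        \ln T_{ij} = \beta_0 + \beta_1 \ln Y_i + \beta_2 \ln Y_j + \beta_3 \ln D_{ij}
                     + \boldsymbol{\delta}^\top \mathbf{z}_{ij} + v_{ij} - u_{ij},

    where T_{ij} is bilateral trade, Y_i and Y_j are incomes, D_{ij} is distance, z_{ij} collects the dummy variables (border, language, crisis, ...), v_{ij} is noise and u_{ij} >= 0 is trade inefficiency, so that trade efficiency is TE_{ij} = exp(-u_{ij}).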

  9. Analysis and Numerical Simulations of a Stochastic SEIQR Epidemic System with Quarantine-Adjusted Incidence and Imperfect Vaccination

    Directory of Open Access Journals (Sweden)

    Fei Li

    2018-01-01

    Full Text Available This paper considers a high-dimensional stochastic SEIQR (susceptible-exposed-infected-quarantined-recovered) epidemic model with quarantine-adjusted incidence and imperfect vaccination. The main aim of this study is to investigate stochastic effects on the SEIQR epidemic model and obtain its thresholds. We first obtain the sufficient condition for extinction of the disease in the stochastic system. Then, by using the theory of Hasminskii and Lyapunov analysis methods, we show that the stochastic system has a unique stationary distribution with an ergodic property, which means the infectious disease is prevalent. This implies that the stochastic disturbance is conducive to epidemic disease control. Finally, numerical simulations are carried out to illustrate our theoretical results.

  10. On stochastic error and computational efficiency of the Markov Chain Monte Carlo method

    KAUST Repository

    Li, Jun

    2014-01-01

    In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibria quantities are estimated by ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance with the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for the correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
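
    The interplay between sample size and sampling interval is easy to reproduce numerically. The sketch below uses an AR(1) chain as a stand-in for correlated MCMC output and a batch-means estimate of the variance of the ensemble average; it illustrates the rules discussed in the record rather than reproducing the paper's derivations:

        import numpy as np

        def variance_vs_interval(rho=0.95, n_total=10**6, intervals=(1, 10, 100)):
            rng = np.random.default_rng(0)
            # AR(1) surrogate for a correlated MCMC chain
            x = np.empty(n_total)
            x[0] = 0.0
            eps = rng.normal(0.0, np.sqrt(1.0 - rho**2), n_total)
            for t in range(1, n_total):
                x[t] = rho * x[t - 1] + eps[t]
            for k in intervals:
                sub = x[::k]                    # thinning: larger interval, fewer samples
                nb = 100                        # batch-means estimate of Var(mean)
                batches = sub[: len(sub) // nb * nb].reshape(nb, -1).mean(axis=1)
                print(f"interval={k:4d}  samples={len(sub):7d}  "
                      f"Var(mean) ~ {batches.var(ddof=1) / nb:.2e}")

        variance_vs_interval()

    For a fixed chain length (fixed CPU time), increasing the interval shrinks the sample size while leaving the variance of the average nearly unchanged, as long as the original interval is small relative to the correlation time.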

  11. Simulation and Statistical Inference of Stochastic Reaction Networks with Applications to Epidemic Models

    KAUST Repository

    Moraes, Alvaro

    2015-01-01

    Epidemics have shaped, sometimes more than wars and natural disasters, demographic aspects of human populations around the world, their health habits and their economies. Ebola and the Middle East Respiratory Syndrome (MERS) are clear and current examples of potential hazards at planetary scale. During the spread of an epidemic disease, there are phenomena, like the sudden extinction of the epidemic, that cannot be captured by deterministic models. As a consequence, stochastic models have been proposed during the last decades. A typical forward problem in the stochastic setting could be the approximation of the expected number of infected individuals found in one month from now. On the other hand, a typical inverse problem could be, given a discretely observed set of epidemiological data, to infer the transmission rate of the epidemic or its basic reproduction number. Markovian epidemic models are stochastic models belonging to a wide class of pure jump processes known as Stochastic Reaction Networks (SRNs), which are intended to describe the time evolution of interacting particle systems where one particle interacts with the others through a finite set of reaction channels. SRNs have been mainly developed to model biochemical reactions but they also have applications in neural networks, virus kinetics, and dynamics of social networks, among others. This PhD thesis is focused on novel fast simulation algorithms and statistical inference methods for SRNs. Our novel Multi-level Monte Carlo (MLMC) hybrid simulation algorithms provide accurate estimates of expected values of a given observable of SRNs at a prescribed final time. They are designed to control the global approximation error up to a user-selected accuracy and up to a certain confidence level, and with near optimal computational work. We also present novel dual-weighted residual expansions for fast estimation of weak and strong errors arising from the MLMC methodology. Regarding the statistical inference

  12. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Song Hyun Kim

    2015-08-01

    Full Text Available The chord length sampling method is used in Monte Carlo simulations to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, from the local packing fraction results, the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
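
    The basic sampling relations behind chord length sampling are compact enough to show directly. For mono-sized spheres of radius R at packing fraction p, the mean matrix chord is commonly taken as 4R(1-p)/(3p) and the chord through a sphere has density f(l) = l/(2R^2) on [0, 2R]; the sketch below uses these standard infinite-medium relations for illustration and does not include the boundary-effect correction proposed in the record:

        import numpy as np

        def cls_chords(radius, packing_fraction, n=100_000, seed=1):
            """Sample matrix path lengths and sphere chords (infinite-medium CLS)."""
            rng = np.random.default_rng(seed)
            lam = 4.0 * radius * (1.0 - packing_fraction) / (3.0 * packing_fraction)
            matrix_paths = rng.exponential(lam, n)                 # distance to next sphere
            sphere_chords = 2.0 * radius * np.sqrt(rng.random(n))  # inverse-CDF sampling
            return matrix_paths, sphere_chords

        m, s = cls_chords(radius=0.05, packing_fraction=0.30)
        print(m.mean(), s.mean())   # mean sphere chord tends to 4R/3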

  13. Allocative efficiency of smallholder common bean producers in Uganda: A stochastic frontier and Tobit model approach

    Directory of Open Access Journals (Sweden)

    Sibiko, K.W.

    2013-06-01

    Full Text Available The study evaluated the allocative efficiency levels of common bean farms in Eastern Uganda and the factors influencing the allocative efficiency of these farms. To achieve this objective, a sample of 480 households was randomly selected in Busia, Mbale, Budaka and Tororo districts in Eastern Uganda. Data were collected using a personally administered structured questionnaire focused on household decision makers, and a stochastic frontier model and a two-limit Tobit regression model were employed in the analysis. It was established that the mean allocative efficiency was 29.37% and that it was significantly influenced by farm size, off-farm income, asset value and distance to the market. The study therefore suggests the need for policies to discourage land fragmentation and to promote road and market infrastructure development in rural areas. It also reveals the need to train farmers in entrepreneurial skills so that they can invest their farm profits into income-generating activities that will provide more farming capital.

  14. Estimating cost efficiency of Turkish commercial banks under unobserved heterogeneity with stochastic frontier models

    Directory of Open Access Journals (Sweden)

    Hakan Gunes

    2016-12-01

    Full Text Available This study aims to investigate the cost efficiency of Turkish commercial banks over the restructuring period of the Turkish banking system, which coincides with the 2008 global financial crisis and the 2010 European sovereign debt crisis. To this end, within the stochastic frontier framework, we employ a true fixed-effects model, where unobserved bank heterogeneity is integrated into the inefficiency distribution at the mean level. To select the cost function with the most appropriate inefficiency correlates, we first adopt a search algorithm and then utilize a model averaging approach to verify that our results are not exposed to model selection bias. Overall, our empirical results reveal that the cost efficiency of Turkish banks has improved over time, with the effects of the 2008 and 2010 crises remaining rather limited. Furthermore, not only the cost efficiency scores but also the impacts of the crises on those scores appear to vary with bank size and ownership structure, in accordance with much of the existing literature.

  15. Efficient Scheme for Chemical Flooding Simulation

    Directory of Open Access Journals (Sweden)

    Braconnier Benjamin

    2014-07-01

    Full Text Available In this paper, we investigate an efficient implicit scheme for the numerical simulation of the chemical enhanced oil recovery technique for oil fields. For the sake of brevity, we focus only on flows with polymer to describe the physical and numerical models. In this framework, we consider a black-oil model upgraded with polymer modeling. We assume the polymer is only transported in the water phase or adsorbed on the rock following a Langmuir isotherm. The polymer reduces the water phase mobility, which can drastically change the behavior of water-oil interfaces. We then propose a fractional-step technique to resolve the system implicitly. The first step is devoted to the resolution of the black-oil subsystem and the second to the polymer mass conservation. In this way, the Jacobian matrices coming from the implicit formulation have a moderate size and preserve solver efficiency. Nevertheless, the coupling between the black-oil subsystem and the polymer is not fully resolved. For efficiency and accuracy comparison, we propose an explicit scheme for the polymer for which large time steps are prohibited by its CFL (Courant-Friedrichs-Lewy) criterion and which consequently approximates the coupling accurately. Numerical experiments with polymer are simulated: a core flood, a 5-spot reservoir with surfactant and ions, and a 3D real case. Comparisons are performed between the explicit and implicit polymer schemes. They prove that our implicit polymer scheme is efficient, robust and resolves the coupled physics accurately. The development and the simulations have been performed with the software PumaFlow [PumaFlow (2013) Reference manual, release V600, Beicip Franlab].

  16. Parallel replica dynamics method for bistable stochastic reaction networks: Simulation and sensitivity analysis.

    Science.gov (United States)

    Wang, Ting; Plecháč, Petr

    2017-12-21

    Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.

  17. Estimating US dairy clinical disease costs with a stochastic simulation model.

    Science.gov (United States)

    Liang, D; Arnold, L M; Stowe, C J; Harmon, R J; Bewley, J M

    2017-02-01

    A farm-level stochastic model was used to estimate the costs of 7 common clinical diseases in the United States: mastitis, lameness, metritis, retained placenta, left-displaced abomasum, ketosis, and hypocalcemia. The total disease costs were divided into 7 categories: veterinary and treatment, producer labor, milk loss, discarded milk, culling cost, extended days open, and on-farm death. A Monte Carlo simulation with 5,000 iterations was applied to the model to account for inherent system variation. Four types of market prices (milk, feed, slaughter, and replacement cow) and 3 herd-performance factors (rolling herd average, product of heat detection rate and conception rate, and age at first calving) were modeled stochastically. Sensitivity analyses were conducted to study the relationship between total disease costs and selected stochastic factors. In general, the disease costs in multiparous cows were greater than in primiparous cows. Left-displaced abomasum had the greatest estimated total costs in all parities ($432.48 in primiparous cows and $639.51 in multiparous cows). Cost category contributions varied for different diseases and parities. Milk production loss and treatment cost were the 2 greatest cost categories. The effects of market prices were consistent across all diseases and parities; higher milk and replacement prices increased total costs, whereas greater feed and slaughter prices decreased disease costs. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. Parallel replica dynamics method for bistable stochastic reaction networks: Simulation and sensitivity analysis

    Science.gov (United States)

    Wang, Ting; Plecháč, Petr

    2017-12-01

    Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.

  19. A Simulated Annealing Methodology to Multiproduct Capacitated Facility Location with Stochastic Demand

    Directory of Open Access Journals (Sweden)

    Jin Qin

    2015-01-01

    Full Text Available A stochastic multiproduct capacitated facility location problem involving a single supplier and multiple customers is investigated. Due to the stochastic demands, a reasonable amount of safety stock must be kept in the facilities to achieve suitable service levels, which results in increased inventory cost. Based on the assumption that all stochastic demands are normally distributed, a nonlinear mixed-integer programming model is proposed, whose objective is to minimize the total cost, including transportation cost, inventory cost, operation cost, and setup cost. A combined simulated annealing (CSA) algorithm is presented to solve the model, in which the outer-layer subalgorithm optimizes the facility location decision and the inner-layer subalgorithm optimizes the demand allocation for the given facility location decision. The results obtained with this approach show that the CSA is a robust and practical approach for solving a multiple-product problem, generating suboptimal facility location decisions and inventory policies. We also found that transportation cost and demand deviation have the strongest influence on the optimal decision compared to the other factors.
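
    The two-layer structure of the CSA can be sketched as a generic template; neighbor, allocate_demand and total_cost are problem-specific placeholders, and the cooling schedule is an assumption rather than the authors' tuning:

        import math
        import random

        def combined_sa(x0, neighbor, allocate_demand, total_cost,
                        T0=1000.0, T_min=1e-3, cool=0.95, moves_per_T=50, seed=42):
            """Outer layer: facility locations; inner layer: demand allocation."""
            random.seed(seed)
            x = x0
            alloc = allocate_demand(x)          # inner-layer optimization
            cur = best = total_cost(x, alloc)
            best_x = x
            T = T0
            while T > T_min:
                for _ in range(moves_per_T):
                    y = neighbor(x)             # perturb the location decision
                    y_alloc = allocate_demand(y)
                    c = total_cost(y, y_alloc)
                    # Metropolis acceptance of worse solutions at temperature T
                    if c < cur or random.random() < math.exp((cur - c) / T):
                        x, alloc, cur = y, y_alloc, c
                        if c < best:
                            best, best_x = c, x
                T *= cool                       # geometric cooling
            return best_x, best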

  20. Efficient Multilevel and Multi-index Sampling Methods in Stochastic Differential Equations

    KAUST Repository

    Haji-Ali, Abdul Lateef

    2016-05-22

    of this thesis is the novel Multi-index Monte Carlo (MIMC) method, which is an extension of MLMC to high-dimensional problems with significant computational savings. Under reasonable assumptions on the weak and variance convergence, which are related to the mixed regularity of the underlying problem and the discretization method, the order of the computational complexity of MIMC is, at worst up to a logarithmic factor, independent of the dimensionality of the underlying parametric equation. We also apply the same multi-index methodology to another sampling method, namely the Stochastic Collocation method. Hence, the novel Multi-index Stochastic Collocation method is proposed and is shown to be more efficient in problems with sufficient mixed regularity than our novel MIMC method and other standard methods. Finally, MIMC is applied to approximate quantities of interest of stochastic particle systems in the mean-field limit, when the number of particles tends to infinity. To approximate these quantities of interest up to an error tolerance, TOL, MIMC has a computational complexity of O(TOL^-2 log(TOL)^2). This complexity is achieved by building a hierarchy based on two discretization parameters: the number of time steps in a Milstein scheme and the number of particles in the particle system. Moreover, we use a partitioning estimator to increase the correlation between two stochastic particle systems with different sizes. In comparison, the optimal computational complexity of MLMC in this case is O(TOL^-3) and the computational complexity of Monte Carlo is O(TOL^-4).
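
    The multilevel idea underlying both MLMC and MIMC rests on the telescoping identity (a textbook statement, included for orientation):

        \mathrm{E}[g_L] = \mathrm{E}[g_0] + \sum_{\ell=1}^{L} \mathrm{E}[g_\ell - g_{\ell-1}],

    where g_\ell is the observable computed on discretization level \ell; each correction term is estimated with its own Monte Carlo sample, and most of the work is pushed to the cheap coarse levels. MIMC generalizes the scalar level \ell to a multi-index (for example, time step and particle number in the mean-field application above) and uses first-order mixed differences of g.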

  1. Comparing stochastic differential equations and agent-based modelling and simulation for early-stage cancer.

    Science.gov (United States)

    Figueredo, Grazziela P; Siebers, Peer-Olaf; Owen, Markus R; Reps, Jenna; Aickelin, Uwe

    2014-01-01

    There is great potential to be explored in the use of agent-based modelling and simulation as an alternative paradigm to investigate early-stage cancer interactions with the immune system. It does not suffer from some limitations of ordinary differential equation models, such as the lack of stochasticity, the representation of aggregates rather than individual behaviours, and the absence of individual memory. In this paper we investigate the potential contribution of agent-based modelling and simulation when contrasted with stochastic versions of ODE models, using early-stage cancer examples. We seek answers to the following questions: (1) Does this new stochastic formulation produce similar results to the agent-based version? (2) Can these methods be used interchangeably? (3) Do agent-based model outcomes reveal any benefit when compared to the Gillespie results? To answer these research questions we investigate three well-established mathematical models describing interactions between tumour cells and immune elements. These case studies were re-conceptualised under an agent-based perspective and also converted to the Gillespie algorithm formulation. Our interest in this work, therefore, is to establish a methodological discussion regarding the usability of different simulation approaches, rather than to provide further biological insights into the investigated case studies. Our results show that it is possible to obtain equivalent models that implement the same mechanisms; however, the incapacity of the Gillespie algorithm to retain individual memory of past events affects the similarity of some results. Furthermore, the emergent behaviour of ABMS produces extra patterns of behaviour in the system, which were not obtained by the Gillespie algorithm.
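
    For readers unfamiliar with the Gillespie formulation referenced here, a minimal direct-method SSA looks as follows; the birth-death toy model is an illustrative assumption, not one of the paper's tumour-immune case studies:

        import numpy as np

        def gillespie_birth_death(x0=10, birth=1.0, death=0.1, t_end=100.0, seed=0):
            """Direct-method SSA: 0 -> X (rate: birth) and X -> 0 (rate: death*X)."""
            rng = np.random.default_rng(seed)
            t, x = 0.0, x0
            times, states = [t], [x]
            while t < t_end:
                props = np.array([birth, death * x])   # reaction propensities
                total = props.sum()
                if total == 0.0:
                    break                              # no reaction can fire
                t += rng.exponential(1.0 / total)      # exponential waiting time
                x += 1 if rng.random() < props[0] / total else -1
                times.append(t)
                states.append(x)
            return np.array(times), np.array(states)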

  2. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    Science.gov (United States)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula for multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing a multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of the high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
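
    The classical spectral representation on which the paper's dimension-reduction variants build simulates a zero-mean stationary Gaussian process as a superposition of cosines with random phases. Below is a one-dimensional sketch with an assumed target power spectral density; the multivariate, FFT-accelerated versions in the paper follow the same pattern:

        import numpy as np

        def srm_sample(psd, omega_max=10.0, N=1024, seed=0):
            """Shinozuka-Deodatis spectral representation (single process, two-sided PSD)."""
            rng = np.random.default_rng(seed)
            d_omega = omega_max / N
            omegas = (np.arange(N) + 0.5) * d_omega
            amps = np.sqrt(2.0 * psd(omegas) * d_omega)    # A_k = sqrt(2 S(w_k) dw)
            phases = rng.uniform(0.0, 2.0 * np.pi, N)      # independent random phases
            t = np.linspace(0.0, 2.0 * np.pi / d_omega, 4 * N, endpoint=False)
            u = np.sqrt(2.0) * np.sum(
                amps[:, None] * np.cos(np.outer(omegas, t) + phases[:, None]), axis=0)
            return t, u

        t, u = srm_sample(lambda w: np.exp(-w**2))   # assumed Gaussian-shaped PSD
        print(u.mean(), u.var())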

  3. Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix and Polymer Matrix Composite Structures

    Science.gov (United States)

    Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan J.; Walton, Owen J.; Arnold, Steven M.

    2016-01-01

    Stochastic-based, discrete-event progressive damage simulations of ceramic-matrix composite and polymer matrix composite material structures have been enabled through the development of a unique multiscale modeling tool. This effort involves coupling three independently developed software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis (FEA) program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC. Abaqus is used at the global scale to model the overall composite structure. An Abaqus user-defined material (UMAT) interface, referred to here as "FEAMAC/CARES," was developed that enables MAC/GMC and CARES/Life to operate seamlessly with the Abaqus FEA code. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events, which incrementally progress and lead to ultimate structural failure. This report describes the FEAMAC/CARES methodology and discusses examples that illustrate the performance of the tool. A comprehensive example problem, simulating the progressive damage of laminated ceramic matrix composites under various off-axis loading conditions and including a double notched tensile specimen geometry, is described in a separate report.

  4. Multiscale stochastic simulations for tensile testing of nanotube-based macroscopic cables.

    Science.gov (United States)

    Pugno, Nicola M; Bosia, Federico; Carpinteri, Alberto

    2008-08-01

    Thousands of multiscale stochastic simulations are carried out in order to perform the first in-silico tensile tests of carbon nanotube (CNT)-based macroscopic cables with varying length. The longest treated cable is the space-elevator megacable but more realistic shorter cables are also considered in this bottom-up investigation. Different sizes, shapes, and concentrations of defects are simulated, resulting in cable macrostrengths not larger than approximately 10 GPa, which is much smaller than the theoretical nanotube strength (approximately 100 GPa). No best-fit parameters are present in the multiscale simulations: the input at level 1 is directly estimated from nanotensile tests of CNTs, whereas its output is considered as the input for the level 2, and so on up to level 5, corresponding to the megacable. Thus, five hierarchical levels are used to span lengths from that of a single nanotube (approximately 100 nm) to that of the space-elevator megacable (approximately 100 Mm).

  5. FEAMAC-CARES Software Coupling Development Effort for CMC Stochastic-Strength-Based Damage Simulation

    Science.gov (United States)

    Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Walton, Owen

    2015-01-01

    Reported here is a coupling of two NASA-developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user-defined material). Here we describe the FEAMAC/CARES code, and an example problem (taken from the open literature) of a laminated CMC under off-axis loading is shown. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.

  6. Integrated logistic support studies using behavioral Monte Carlo simulation, supported by Generalized Stochastic Petri Nets

    International Nuclear Information System (INIS)

    Garnier, Robert; Chevalier, Marcel

    2000-01-01

    Studying large and complex industrial sites requires more and more accuracy in modeling. In particular, when considering spares, maintenance and repair/replacement processes, determining optimal Integrated Logistic Support policies requires a high-level modeling formalism, in order to make the model as close as possible to the real processes considered. Generally, numerical methods are used for this kind of study. In this paper, we propose an alternative way to determine optimal Integrated Logistic Support policies when dealing with large, complex and distributed multi-policy industrial sites. This method is based on the use of behavioral Monte Carlo simulation, supported by Generalized Stochastic Petri Nets. (author)

  7. Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composite

    Science.gov (United States)

    Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu

    2015-01-01

    Reported here is a coupling of two NASA-developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user-defined material). Here we describe the FEAMAC/CARES code, and an example problem (taken from the open literature) of a laminated CMC under off-axis loading is shown. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.

  8. FEAMAC/CARES Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composites

    Science.gov (United States)

    Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Bhatt, Ramakrishna

    2016-01-01

    Reported here is a coupling of two NASA-developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user-defined material). Here we describe the FEAMAC/CARES code, and an example problem (taken from the open literature) of a laminated CMC under off-axis loading is shown. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.

  9. Economic Efficiency of Small-Holder Cocoyam Farmers in Anambra State, Nigeria: A Translog Stochastic Frontier Cost Function Approach

    OpenAIRE

    Okoye, B.C; Onyenweaku, C.E; Asumugha, G.N

    2007-01-01

    This study employed a translog stochastic frontier cost function to measure the level of economic efficiency and its determinants in small-holder cocoyam production in Anambra State, Nigeria. A multi-stage random sampling technique was used to select 120 cocoyam farmers in the state in 2005, from whom input-output data and their prices were obtained using the cost-route approach. The parameters of the stochastic frontier cost function were estimated using the maximum likelihood method. The re...

  10. A stochastic simulation model for reliable PV system sizing providing for solar radiation fluctuations

    International Nuclear Information System (INIS)

    Kaplani, E.; Kaplanis, S.

    2012-01-01

    Highlights: ► Solar radiation data for European cities follow the Extreme Value or Weibull distribution. ► Simulation model for the sizing of SAPV systems based on energy balance and stochastic analysis. ► Simulation of PV Generator-Loads-Battery Storage System performance for all months. ► Minimum peak power and battery capacity required for reliable SAPV sizing for various European cities. ► Peak power and battery capacity reduced by more than 30% for operation at a 95% success rate. -- Abstract: The large fluctuations observed in the daily solar radiation profiles highly affect the reliability of PV system sizing. Increasing the reliability of the PV system requires higher installed peak power (Pm) and larger battery storage capacity (CL). This leads to increased costs, and makes PV technology less competitive. This research paper presents a new stochastic simulation model for stand-alone PV systems, developed to determine the minimum installed Pm and CL for the PV system to be energy independent. The stochastic simulation model developed makes use of knowledge acquired from an in-depth statistical analysis of the solar radiation data for the site, and simulates the energy delivered, the excess energy burnt, the load profiles and the state of charge of the battery system for the month the sizing is applied to, and the PV system performance for the entire year. The simulation model provides the user with values for the autonomy factor d, simulating PV performance in order to determine the minimum Pm and CL depending on the requirements of the application, i.e. operation with critical or non-critical loads. The model makes use of NASA’s Surface meteorology and Solar Energy database for the years 1990–2004 for various cities in Europe with a different climate. The results obtained with this new methodology indicate a substantial reduction in installed peak power and battery capacity, both for critical and non-critical operation, when compared to

  11. Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale

    Energy Technology Data Exchange (ETDEWEB)

    Zabaras, Nicolas J. [Cornell Univ., Ithaca, NY (United States)

    2016-11-08

    Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties, and understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models, and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.

  12. Testing the new stochastic neutronic code ANET in simulating safety important parameters

    International Nuclear Information System (INIS)

    Xenofontos, T.; Delipei, G.-K.; Savva, P.; Varvayanni, M.; Maillard, J.; Silva, J.; Catsaros, N.

    2017-01-01

    Highlights: • ANET is a new stochastic neutronics code. • Criticality calculations in both subcritical and critical nuclear systems of conventional design were conducted. • Simulations of thermal, lower epithermal and fast neutron fluence rates were performed. • Axial fission rate distributions in standard and MOX fuel pins were computed. - Abstract: ANET (Advanced Neutronics with Evolution and Thermal hydraulic feedback) is a Monte Carlo code under development for simulating both GEN II/III reactors and innovative nuclear reactor designs, based on the high-energy physics code GEANT3.21 of CERN. ANET is built through successive extensions of GEANT3.21's applicability, comprising the simulation of particle transport and interaction at low energies, the accessibility of user-provided libraries and tracking algorithms for energies below 20 MeV, and the simulation of elastic and inelastic collision, capture and fission. Successive testing applications performed throughout the ANET development have been utilized to verify the new code capabilities. In this context the reliability of ANET in simulating certain reactor parameters important to safety is examined here. More specifically, the reactor criticality as well as the neutron fluence and fission rates are benchmarked and validated. The Portuguese Research Reactor (RPI), after its conversion to low enrichment in U-235, and the OECD/NEA VENUS-2 MOX international benchmark were considered appropriate for the present study, the former providing criticality and neutron flux data and the latter reaction rates. Concerning criticality benchmarking, the subcritical Training Nuclear Reactor of the Aristotle University of Thessaloniki (TNR-AUTh) was also analyzed. The obtained results are compared with experimental data from the critical infrastructures and with computations performed by two different, well-established stochastic neutronics codes, i.e. TRIPOLI-4.8 and MCNP5. Satisfactory agreement

  13. Fast stochastic simulation of biochemical reaction systems by alternative formulations of the chemical Langevin equation

    KAUST Repository

    Mélykúti, Bence

    2010-01-01

    The Chemical Langevin Equation (CLE), which is a stochastic differential equation driven by a multidimensional Wiener process, acts as a bridge between the discrete stochastic simulation algorithm and the deterministic reaction rate equation when simulating (bio)chemical kinetics. The CLE model is valid in the regime where molecular populations are abundant enough to assume their concentrations change continuously, but stochastic fluctuations still play a major role. The contribution of this work is to observe and explore the fact that the CLE is not a single equation, but a parametric family of equations, all of which give the same finite-dimensional distribution of the variables. On the theoretical side, we prove that as many Wiener processes are sufficient to formulate the CLE as there are independent variables in the equation, which is just the rank of the stoichiometric matrix. On the practical side, we show that in the case where there are m1 pairs of reversible reactions and m2 irreversible reactions, there is another, simple formulation of the CLE with only m1 + m2 Wiener processes, whereas the standard approach uses 2m1 + m2. We demonstrate that there are considerable computational savings when using this latter formulation. Such transformations of the CLE do not cause a loss of accuracy and are therefore distinct from model reduction techniques. We illustrate our findings by considering alternative formulations of the CLE for a human ether-a-go-go related gene ion channel model and the Goldbeter-Koshland switch. © 2010 American Institute of Physics.
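
    For context, the standard form of the CLE for a network with stoichiometric vectors \nu_j, propensities a_j and state X(t) is (textbook statement):

        dX(t) = \sum_{j} \nu_j \, a_j(X(t)) \, dt + \sum_{j} \nu_j \sqrt{a_j(X(t))} \, dW_j(t),

    with one Wiener process W_j per reaction channel. The paper's observation is that the diffusion term can be rewritten with fewer independent Wiener processes, as few as the rank of the stoichiometric matrix, without changing the finite-dimensional distributions of X.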

  14. Towards a Framework for the Stochastic Modelling of Subgrid Scale Fluxes for Large Eddy Simulation

    Directory of Open Access Journals (Sweden)

    Thomas von Larcher

    2015-04-01

    Full Text Available We focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Our concept is based on the integral conservation laws for mass, momentum and energy of a flow field. We model the space-time structure of the flux correction terms to create a discrete formulation. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis of the modelling approach. The reconstruction capabilities of the modelling ansatz are tested against fully 3D turbulent channel flow data computed by direct numerical simulation and, in addition, against a turbulent Taylor-Green vortex flow showing a transition from a laminar to a turbulent flow state. The modelling approach for the LES closure differs in the two test cases: in the channel flow we consider an implicit LES ansatz, while in the Taylor-Green vortex flow it follows an explicit closure approach. We present here the outcome of our reconstruction tests and show specific results of the non-trivial time series data analysis. Starting from a generally stochastic ansatz, we found, surprisingly, that the deterministic model part already yields small residuals and is therefore good enough to fit the flux correction terms well. In the Taylor-Green vortex flow, we additionally found time-dependent features confirming that our modelling approach is capable of detecting changes in the temporal structure of the flow. The results encourage us to launch a more ambitious attempt at dynamic LES closure along these lines.

  15. Simulation of Higher-Order Electrical Circuits with Stochastic Parameters via SDEs

    Directory of Open Access Journals (Sweden)

    BRANCIK, L.

    2013-02-01

    Full Text Available The paper deals with a technique for the simulation of higher-order electrical circuits with randomly varying parameters. The principle consists in the utilization of the theory of stochastic differential equations (SDEs), namely the vector form of the ordinary SDEs. Random changes of both the excitation voltage and some parameters of passive circuit elements are considered, and the circuit responses are analyzed. The voltage and/or current responses are computed and represented in the form of sample means accompanied by their confidence intervals to provide reliable estimates. The method is applied to analyze responses of circuit models of optional orders, especially those consisting of a cascade connection of RLGC networks. To develop the model equations the state-variable method is used; afterwards a corresponding vector SDE is formulated and a stochastic Euler numerical method applied. To verify the results, the deterministic responses are also computed with the help of the PSpice simulator or the numerical inverse Laplace transform (NILT) procedure in MATLAB, while removing random terms from the circuit model.
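
    The stochastic Euler (Euler-Maruyama) step referenced above advances a vector SDE dx = (A x + b) dt + G dW by x_{n+1} = x_n + (A x_n + b) Δt + G ΔW_n with ΔW_n ~ N(0, Δt I). A toy sketch for a first-order RC stage with a randomly perturbed source, reporting the sample mean with a 95% confidence interval; component values and the noise level are illustrative assumptions, not taken from the paper:

        import numpy as np

        def euler_maruyama_rc(R=1e3, C=1e-6, v_src=5.0, sigma=0.2,
                              t_end=0.01, dt=1e-6, n_paths=1000, seed=0):
            """Capacitor voltage of a noisy RC stage: mean and 95% confidence interval."""
            rng = np.random.default_rng(seed)
            v = np.zeros(n_paths)                     # all sample paths start at 0 V
            a = -1.0 / (R * C)
            for _ in range(int(t_end / dt)):
                dW = rng.normal(0.0, np.sqrt(dt), n_paths)
                v += (a * v + v_src / (R * C)) * dt + (sigma / (R * C)) * dW
            m = v.mean()
            s = v.std(ddof=1) / np.sqrt(n_paths)      # standard error of the mean
            return m, (m - 1.96 * s, m + 1.96 * s)

        print(euler_maruyama_rc())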

  16. Meta-stochastic simulation of biochemical models for systems and synthetic biology.

    Science.gov (United States)

    Sanassy, Daven; Widera, Paweł; Krasnogor, Natalio

    2015-01-16

    Stochastic simulation algorithms (SSAs) are used to trace realistic trajectories of biochemical systems at low species concentrations. As the complexity of modeled biosystems increases, it is important to select the best performing SSA. Numerous improvements to SSAs have been introduced, but they each tend to apply only to a certain class of models. This makes it difficult for a systems or synthetic biologist to decide which algorithm to employ when confronted with a new model that requires simulation. In this paper, we demonstrate that it is possible to determine which algorithm is best suited to simulate a particular model and that this can be predicted prior to algorithm execution. We present a Web-based tool, ssapredict, that allows scientists to upload a biochemical model and obtain a prediction of the best performing SSA. Furthermore, ssapredict gives the user the option to download our high-performance simulator ngss, preconfigured to perform the simulation of the queried biochemical model with the predicted fastest algorithm as the simulation engine. The ssapredict Web application is available at http://ssapredict.ico2s.org. It is free software and its source code is distributed under the terms of the GNU Affero General Public License.

  17. Analysis of the CVT Efficiency by Simulation

    Directory of Open Access Journals (Sweden)

    Valerian Croitorescu

    2011-09-01

    Full Text Available All vehicle manufacturers desire an ideal vehicle that has the highest powertrain efficiency, the best safety factor and ease of maintenance while being environmentally friendly. These highly valued vehicle development characteristics are only reachable after countless research hours. One major powertrain component to be studied in relation to these demands is the Continuously Variable Transmission (CVT) that a Hybrid Electric Vehicle is equipped with. The CVT can increase the overall powertrain efficiency, offering a continuum of gear ratios between established minimum and maximum limits. This paper aims to determine the losses of a CVT operating in a HEV. Using simulation, the losses were computed and the fuel economy was analyzed. During various modes of operation, such as electric drive, regenerative braking, and engine charging to maintain the battery state of charge, the losses and their dependence on the control properties were analyzed. An accurate determination of the losses makes it possible to reduce them by using appropriate materials for components and fluids, more efficient manufacturing and usage solutions, and an innovative control strategy.

  18. Analytical vs. Simulation Solution Techniques for Pulse Problems in Non-linear Stochastic Dynamics

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.

    Advantages and disadvantages of available analytical and simulation techniques for pulse problems in non-linear stochastic dynamics are discussed. First, random pulse problems, both those which do and do not lead to Markov theory, are presented. Next, the analytical and analytically-numerical techniques suitable for Markov response problems, such as moments equation, Petrov-Galerkin and cell-to-cell mapping techniques, are briefly discussed. Usefulness of these techniques is limited by the fact that the effectiveness of each of them depends on the mean rate of impulses. Another limitation is the size of the problem, i.e. the number of state variables of the dynamical systems. In contrast, the application of the simulation techniques is not limited to Markov problems, nor is it dependent on the mean rate of impulses. Moreover, their use is straightforward for a large class of point processes, at least...

  19. Nonperiodic stochastic boundary conditions for molecular dynamics simulations of materials embedded into a continuum mechanics domain.

    Science.gov (United States)

    Rahimi, Mohammad; Karimi-Varzaneh, Hossein Ali; Böhm, Michael C; Müller-Plathe, Florian; Pfaller, Sebastian; Possart, Gunnar; Steinmann, Paul

    2011-04-21

    A scheme is described for performing molecular dynamics simulations on polymers under nonperiodic, stochastic boundary conditions. It has been designed to allow the later embedding of a particle domain treated by molecular dynamics into a continuum environment treated by finite elements. It combines, in the boundary region, harmonically restrained particles to confine the system with dissipative particle dynamics to dissipate energy and to thermostat the simulation. The equilibrium positions of the tethered particles, the so-called anchor points, are well suited for transmitting deformations, forces and force derivatives between the particle and continuum domains. In the present work the particle scheme is tested by comparing results for coarse-grained polystyrene melts under nonperiodic and regular periodic boundary conditions. Excellent agreement is found for thermodynamic, structural, and dynamic properties.

  20. Calibration of semi-stochastic procedure for simulating high-frequency ground motions

    Science.gov (United States)

    Seyhan, Emel; Stewart, Jonathan P.; Graves, Robert

    2013-01-01

    Broadband ground motion simulation procedures typically utilize physics-based modeling at low frequencies, coupled with semi-stochastic procedures at high frequencies. The high-frequency procedure considered here combines deterministic Fourier amplitude spectra (dependent on source, path, and site models) with random phase. Previous work showed that high-frequency intensity measures from this simulation methodology attenuate faster with distance and have lower intra-event dispersion than in empirical equations. We address these issues by increasing crustal damping (Q) to reduce distance attenuation bias and by introducing random site-to-site variations to Fourier amplitudes using a lognormal standard deviation ranging from 0.45 for Mw  100 km).

  1. Improving the bearing fault diagnosis efficiency by the adaptive stochastic resonance in a new nonlinear system

    Science.gov (United States)

    Liu, Xiaole; Liu, Houguang; Yang, Jianhua; Litak, Grzegorz; Cheng, Gang; Han, Shuai

    2017-11-01

    It is a challenging task to detect a weak characteristic signal in a noisy background. The stochastic resonance (SR) method has been widely adopted recently because it can not only reduce the noise, but also simultaneously enhance the weak feature information. However, the traditional bistable model for SR is not perfect. So, this paper presents a new model with a periodic potential to induce adaptive SR. In the new model, based on adaptive SR theory, the system parameters are simultaneously optimized by an improved artificial fish swarm algorithm. Meanwhile, the improved signal-to-noise ratio (ISNR) is set as the evaluation index: when the ISNR reaches a maximum, the output is optimal. In order to eliminate interference and obtain more useful information, the signals are preprocessed by a Hilbert transform and a high-pass filter before being input to the adaptive SR system. To verify the effectiveness of the proposed method, both numerical simulations and vibration signals of a rolling element bearing from a lab experiment are adopted. Both results indicate that the proposed adaptive SR model shows better performance in weak characteristic signal detection than traditional adaptive SR with the bistable model. Meanwhile, experimental signals from different working conditions are also processed by the new method. The results show that the proposed method could be widely applied.

  2. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variations or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been carried out. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is a crucial point, since even small indices may be important to estimate in order to achieve a more accurate distribution of input influences and a more reliable interpretation of the mathematical model results.
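
    A compact Monte Carlo estimator of first-order Sobol indices (a Saltelli-type pick-and-freeze scheme) is shown below on the Ishigami benchmark; this illustrates the quantity being computed, not the Unified Danish Eulerian Model study itself:

        import numpy as np

        def first_order_sobol(f, sampler, d, n=2**14, seed=0):
            """Estimate S_i = V(E[f|x_i]) / V(f) by pick-and-freeze sampling."""
            rng = np.random.default_rng(seed)
            A, B = sampler(rng, n, d), sampler(rng, n, d)
            fA, fB = f(A), f(B)
            var = np.concatenate([fA, fB]).var()
            S = np.empty(d)
            for i in range(d):
                ABi = A.copy()
                ABi[:, i] = B[:, i]        # freeze input i, resample the others
                S[i] = np.mean(fB * (f(ABi) - fA)) / var
            return S

        # Ishigami test function with uniform inputs on (-pi, pi)
        f = lambda X: (np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1])**2
                       + 0.1 * X[:, 2]**4 * np.sin(X[:, 0]))
        uniform = lambda rng, n, d: rng.uniform(-np.pi, np.pi, (n, d))
        print(first_order_sobol(f, uniform, d=3))   # approx [0.31, 0.44, 0.00]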

  3. The impact of agricultural extension on farmers’ technical efficiencies in Ethiopia: A stochastic production frontier approach

    Directory of Open Access Journals (Sweden)

    Kidanemariam G. Gebrehiwot

    2017-06-01

    Full Text Available Background: To address the structural food deficit and the top-down extension system that persisted for decades, the government of Ethiopia introduced a new extension system, called the Participatory Demonstration and Training Extension System, which serves more than 80% of the total population. As the program was streamlined to fit the different agro-climatic conditions of the country, the extension approach practiced in the Tigray region (the research area) was called the Integrated Household Extension Program. Aim: This article reports on research aimed at measuring the technical efficiency levels of extension participants and non-participants, and at measuring the impact of the extension service on technical efficiency. Setting: The research was conducted in the northern part of the country, where agriculture is the main source of livelihood. Moisture is the most critical factor in the production system. The landholding size averages 0.5 ha per household, compared to more than 3 ha 30 years ago, indicating the high population pressure in the area. Methods: A sample of 362 agricultural extension service participants and 369 non-participant farm households from the northern part of Ethiopia participated in the study. The stochastic production frontier technique was used to analyse the survey data and to compute farm-level technical efficiency. Results: The results showed an average level of technical efficiency of 48%, suggesting that substantial gains in output and/or decreases in cost can be attained with the existing technology. All the variables included in the model to explain efficiency were found significant and with the expected sign, except education and number of dependants. Conclusion: The research assessed the impact of a new, participatory extension service on farmers' productivity in a semi-arid zone, as compared with the conventional extension service and found in the literature areas with relatively better climatic

  4. Measuring efficiency of governmental hospitals in Palestine using stochastic frontier analysis.

    Science.gov (United States)

    Hamidi, Samer

    2016-01-01

    The Palestinian government has been under increasing pressure to improve the provision of health services while seeking to employ its scarce resources effectively. Governmental hospitals remain the leading cost units, as they consume about 60 % of the governmental health budget. A clearer understanding of the technical efficiency of hospitals is crucial to shape future health policy reforms. In this paper, we used stochastic frontier analysis to measure the technical efficiency of governmental hospitals, the first study of its kind nationally. We estimated the maximum likelihood random-effects, time-invariant efficiency model developed by Battese and Coelli (1988). Number of beds, number of doctors, number of nurses, and number of non-medical staff were used as the input variables, and the sum of the numbers of treated inpatients and outpatients was used as the output variable. Our dataset comprises balanced panel data on 22 governmental hospitals over a period of 6 years. Cobb-Douglas, translog, and multi-output distance functions were estimated using STATA 12. The average technical efficiency of the hospitals was approximately 55 %, ranging from 28 to 91 %. Doctors and nurses appear to be the most important factors in hospital production, as a 1 % increase in the number of doctors or nurses results in an increase in hospital production of 0.33 and 0.51 %, respectively. If hospitals increased all inputs by 1 %, their production would increase by 0.74 %; the hospitals' production process thus exhibits decreasing returns to scale. Despite continued investment in governmental hospitals, they have remained relatively inefficient. Using the existing amount of resources, the delivered outputs could be increased by 45 %, which provides insight into the mismanagement of available resources. To address hospital inefficiency, it is important to increase the numbers of doctors and nurses, while the number of non-medical staff should be reduced. Offering the option of early retirement, limit hiring, and transfer to

  5. Efficient simulation of intrinsic, extrinsic and external noise in biochemical systems.

    Science.gov (United States)

    Pischel, Dennis; Sundmacher, Kai; Flassig, Robert J

    2017-07-15

    Biological cells operate in a noisy regime influenced by intrinsic, extrinsic and external noise, which leads to large differences between individual cell states. Stochastic effects must be taken into account to characterize biochemical kinetics accurately. Since the exact solution of the chemical master equation, which governs the underlying stochastic process, cannot be derived for most biochemical systems, approximate methods are used to obtain a solution. In this study, a method to efficiently simulate the various sources of noise simultaneously is proposed and benchmarked on several examples. The method relies on the combination of the sigma point approach, to describe extrinsic and external variability, and the τ-leaping algorithm, to account for the stochasticity due to probabilistic reactions. The comparison of our method to extensive Monte Carlo calculations demonstrates an immense computational advantage at the price of an acceptable loss of accuracy. Additionally, the application to parameter optimization problems in stochastic biochemical reaction networks is shown, a task that is rarely attempted due to its huge computational burden. To give further insight, a MATLAB script is provided that applies the proposed method to a simple toy example of gene expression. MATLAB code is available at Bioinformatics online. flassig@mpi-magdeburg.mpg.de. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
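
    The τ-leaping ingredient is easy to state in isolation: over a step of length τ, each reaction channel j fires a Poisson(a_j(x) τ) number of times. Below is a bare-bones fixed-step sketch on an assumed two-channel gene expression toy model; the paper combines this with sigma points for extrinsic and external noise, which is not shown here:

        import numpy as np

        def tau_leap(x0, nu, propensities, tau=0.01, t_end=10.0, seed=0):
            """Fixed-step tau-leaping (no negativity control, illustration only)."""
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            traj = [x.copy()]
            for _ in range(int(t_end / tau)):
                k = rng.poisson(propensities(x) * tau)  # channel firings in [t, t+tau)
                x = np.maximum(x + nu @ k, 0.0)          # crude guard against negatives
                traj.append(x.copy())
            return np.array(traj)

        # Toy model: 0 -> mRNA (rate 2.0), mRNA -> 0 (rate 0.5 per molecule)
        nu = np.array([[1, -1]])                         # one species, two channels
        a = lambda x: np.array([2.0, 0.5 * x[0]])
        print(tau_leap([0.0], nu, a)[-1])                # stationary mean is 4 mRNA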

  6. Efficiency measurement using the econometric stochastic frontier analysis (SFA) method. Case study: hospitals of Kermanshah University of Medical Sciences

    Directory of Open Access Journals (Sweden)

    Reza Goudarzi

    2014-01-01

    Background: Full consideration of hospital performance and cost efficiency necessitates the application of economic analysis techniques. The aim of this study was to assess the efficiency of hospitals of Kermanshah University of Medical Sciences through the stochastic frontier analysis (SFA) method. Methods: The performance of Kermanshah hospitals (n=7) was assessed and analyzed by the SFA method during 2005-2011. Inpatient admission was considered as the output variable, while the numbers of medical doctors, nursing staff, other personnel, active beds, and outpatient admissions were considered as the input variables. Frontier 4.1 software was used to analyze the data. Results: Based on the results of performance evaluation using the Cobb-Douglas production function, the mean efficiency score of the hospitals in the SFA method was 0.63, so efficiency in these hospitals could be increased by up to 37 percent. Conclusion: Based on the results of the stochastic frontier analysis, downsizing the manpower in hospitals plays a major role in reducing hospital costs and improving performance. Finally, it is necessary to investigate the effect of factors such as quality of service and patient satisfaction on hospital performance.

  7. Verification of HYDRASTAR - A code for stochastic continuum simulation of groundwater flow

    International Nuclear Information System (INIS)

    Norman, S.

    1991-07-01

    HYDRASTAR is a code developed at Starprog AB for use in the SKB 91 performance assessment project, with the following principal functions: - Reads the actual conductivity measurements from a file created from the database GEOTAB. - Regularizes the measurements to a user-chosen calculation scale. - Generates three-dimensional unconditional realizations of the conductivity field by using a supplied model of the conductivity field as a stochastic function. - Conditions the simulated conductivity field on the actual regularized measurements. - Reads the boundary conditions from a regional deterministic NAMMU computation. - Calculates the hydraulic head field, Darcy velocity field, stream lines and water travel times by solving the stationary hydrology equation and the streamline equation, with the velocities calculated from Darcy's law. - Generates visualizations of the realizations if desired. - Calculates statistics such as semivariograms and expectation values of the output fields by repeating the above procedure in Monte Carlo iterations. When using computer codes for safety assessment purposes, validation and verification of the codes are important. This report therefore describes work performed with the goal of verifying parts of HYDRASTAR. The verification uses comparisons with two other solutions of related examples: A. Comparison with a so-called perturbation solution of the stochastic stationary hydrology equation, an analytical approximation valid in the case of small variability of the unconditional random conductivity field. B. Comparison with the HYDROCOIN (1988) case 2, a classical example of a hydrology problem with a deterministic conductivity field whose principal feature is the presence of narrow fracture zones with high conductivity. The compared outputs are the hydraulic head field and a number of stream lines originating from a

  8. A stochastic simulator of a blood product donation environment with demand spikes and supply shocks.

    Science.gov (United States)

    An, Ming-Wen; Reich, Nicholas G; Crawford, Stephen O; Brookmeyer, Ron; Louis, Thomas A; Nelson, Kenrad E

    2011-01-01

    The availability of an adequate blood supply is a critical public health need. An influenza epidemic or another crisis affecting population mobility could create a critical donor shortage, which could profoundly impact blood availability. We developed a simulation model for the blood supply environment in the United States to assess the likely impact on blood availability of factors such as an epidemic. We developed a simulator of a multi-state model with transitions among states. Weekly numbers of blood units donated and needed were generated by negative binomial stochastic processes. The simulator allows exploration of the blood system under certain conditions of supply and demand rates, and can be used for planning purposes to prepare for sudden changes in the public's health. The simulator incorporates three donor groups (first-time, sporadic, and regular), immigration and emigration, deferral period, and adjustment factors for recruitment. We illustrate possible uses of the simulator by specifying input values for an 8-week flu epidemic, resulting in a moderate supply shock and demand spike (for example, from postponed elective surgeries), and different recruitment strategies. The input values are based in part on data from a regional blood center of the American Red Cross during 1996-2005. Our results from these scenarios suggest that the key to alleviating deficit effects of a system shock may be appropriate timing and duration of recruitment efforts, in turn depending critically on anticipating shocks and rapidly implementing recruitment efforts.
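
    The supply/demand core of such a simulator is compact. The sketch below draws weekly donated and needed units from negative binomial distributions and imposes an assumed 8-week shock; all means, dispersions and shock multipliers are placeholders rather than calibrated values from the Red Cross data.

    ```python
    # Toy blood-inventory simulator: weekly supply and demand are negative
    # binomial; an epidemic window depresses supply and spikes demand.
    # Every rate and multiplier below is an assumed placeholder.
    import numpy as np

    rng = np.random.default_rng(2)

    def neg_binom(mean, disp):
        # parameterize numpy's negative binomial by mean and dispersion
        p = disp / (disp + mean)
        return rng.negative_binomial(disp, p)

    weeks, inventory = 52, 500.0
    shock = range(20, 28)                               # 8-week epidemic
    for w in range(weeks):
        supply_mu = 300 * (0.7 if w in shock else 1.0)  # supply shock
        demand_mu = 290 * (1.2 if w in shock else 1.0)  # demand spike
        inventory += neg_binom(supply_mu, 10) - neg_binom(demand_mu, 10)
        inventory = max(inventory, 0.0)                 # cannot go negative
    print("end-of-year inventory:", inventory)
    ```

    Recruitment strategies can then be explored by raising `supply_mu` in chosen weeks and comparing the resulting deficit durations.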

  9. STOCHASTIC SIMULATION FOR BUFFELGRASS (Cenchrus ciliaris L.) PASTURES IN MARIN, N. L., MEXICO

    Directory of Open Access Journals (Sweden)

    José Romualdo Martínez-López

    2014-04-01

    A stochastic simulation model was constructed to determine the response of the net primary production of buffelgrass (Cenchrus ciliaris L.) and its dry matter intake by cattle in Marín, NL, México. Buffelgrass is very important for the extensive livestock industry in arid and semiarid areas of northeastern Mexico. The objective of this experiment was to evaluate the behavior of the model by comparing its results with those reported in the literature. The model simulates the monthly production of dry matter of green grass, as well as its conversion to senescent and dry grass and eventually to mulch, depending on precipitation and temperature. The model also simulates cattle consumption of green and dry grass. The stocking rate used in the simulation was 2 hectares per animal unit. Annual production ranged from 4.5 to 10.2 t of dry matter per hectare for annual rainfall of 300 to 704 mm, respectively. The total annual intake required per animal unit was estimated at 3.6 t. Simulated net primary production coincides with reports in the literature, so the model was evaluated successfully.

  10. Efficient Turbulence Modeling for CFD Wake Simulations

    DEFF Research Database (Denmark)

    van der Laan, Paul

    … that can accurately and efficiently simulate wind turbine wakes. The linear k-ε eddy viscosity model (EVM) is a popular turbulence model in RANS; however, it underpredicts the velocity wake deficit and cannot predict the anisotropic Reynolds stresses in the wake. In the current work, nonlinear eddy viscosity models (NLEVM) are applied to wind turbine wakes. NLEVMs can model anisotropic turbulence through a nonlinear stress-strain relation, and they can improve the velocity deficit by the use of a variable eddy viscosity coefficient that delays the wake recovery. Unfortunately, all tested NLEVMs show numerically unstable behavior for fine grids, which inhibits a grid dependency study for numerical verification. Therefore, a simpler EVM is proposed, labeled the k-ε-fp EVM, that has a linear stress-strain relation but still has a variable eddy viscosity coefficient. The k-ε-fp EVM is numerically …

  11. Measuring energy efficiency under heterogeneous technologies using a latent class stochastic frontier approach: An application to Chinese energy economy

    International Nuclear Information System (INIS)

    Lin, Boqiang; Du, Kerui

    2014-01-01

    The importance of technology heterogeneity in estimating economy-wide energy efficiency has been emphasized by recent literature. Some studies use the metafrontier analysis approach to estimate energy efficiency. However, such studies need reliable a priori information to divide the sample observations properly, which makes unbiased estimation of energy efficiency difficult. Moreover, separately estimating group-specific frontiers might lose some information common across different groups. In order to overcome these weaknesses, this paper introduces a latent class stochastic frontier approach to measure energy efficiency under heterogeneous technologies. An application of the proposed model to the Chinese energy economy is presented. Results show that the overall energy efficiency of China's provinces is not high, with an average score of 0.632 during the period from 1997 to 2010. - Highlights: • We introduce a latent class stochastic frontier approach to measure energy efficiency. • Ignoring technological heterogeneity would cause biased estimates of energy efficiency. • An application of the proposed model to the Chinese energy economy is presented. • There is still a long way for China to develop an energy-efficient regime

  12. Stochastic model for simulating Souris River Basin precipitation, evapotranspiration, and natural streamflow

    Science.gov (United States)

    Kolars, Kelsey A.; Vecchia, Aldo V.; Ryberg, Karen R.

    2016-02-24

    The Souris River Basin is a 61,000-square-kilometer basin in the Provinces of Saskatchewan and Manitoba and the State of North Dakota. In May and June of 2011, record-setting rains were seen in the headwater areas of the basin. Emergency spillways of major reservoirs were discharging at full or nearly full capacity, and extensive flooding was seen in numerous downstream communities. To determine the probability of future extreme floods and droughts, the U.S. Geological Survey, in cooperation with the North Dakota State Water Commission, developed a stochastic model for simulating Souris River Basin precipitation, evapotranspiration, and natural (unregulated) streamflow. Simulations from the model can be used in future studies to simulate regulated streamflow, to design levees and other structures, and to complete economic cost/benefit analyses. Long-term climatic variability was analyzed using tree-ring chronologies to hindcast precipitation to the early 1700s and compare recent wet and dry conditions to earlier extreme conditions. The extended precipitation record was consistent with findings from the Devils Lake and Red River of the North Basins (southeast of the Souris River Basin), supporting the idea that regional climatic patterns have for many centuries consisted of alternating wet and dry climate states. A stochastic climate simulation model for precipitation, temperature, and potential evapotranspiration for the Souris River Basin was developed using recorded meteorological data and extended precipitation records provided through tree-ring analysis. A significant climate transition was seen around 1970, with 1912–69 representing a dry climate state and 1970–2011 representing a wet climate state. Although there were some distinct subpatterns within the basin, the predominant differences between the two states were higher spring through early fall precipitation and higher spring potential evapotranspiration for the wet compared to the dry state. A water

  13. A stochastic model for simulation of the economic consequences of bovine virus diarrhoea virus infection in a dairy herd

    DEFF Research Database (Denmark)

    Sørensen, J.T.; Enevoldsen, Carsten; Houe, H.

    1995-01-01

    A dynamic, stochastic model simulating the technical and economic consequences of bovine virus diarrhoea virus (BVDV) infections for a dairy cattle herd, for use on a personal computer, was developed. The production and state changes of the herd were simulated by state changes of the individual cows … modelling principle. The validation problem in relation to the model was discussed. A comparison between real and simulated data, using data from a published case report, was shown to illustrate how user acceptance can be obtained.

  14. Efficiency Loss of Mixed Equilibrium Associated with Altruistic Users and Logit-based Stochastic Users in Transportation Network

    Directory of Open Access Journals (Sweden)

    Xiao-Jun Yu

    2014-02-01

    The efficiency loss of mixed equilibrium associated with two categories of users is investigated in this paper. The first category of users are altruistic users (AU), who share the same altruism coefficient and try to minimize their own perceived cost, assumed to be a linear combination of a selfish component and an altruistic component. The second category of users are Logit-based stochastic users (LSU), who choose their route according to the Logit-based stochastic user equilibrium (SUE) principle. The variational inequality (VI) model is used to formulate the mixed route choice behaviours associated with AU and LSU. The efficiency loss caused by the two categories of users is analytically derived, and its relation to some network parameters is discussed. Numerical tests validate our analytical results, which include the results in the existing literature as special cases.

  15. Techno-economic simulation data based deterministic and stochastic for product engineering research and development BATAN

    International Nuclear Information System (INIS)

    Petrus Zacharias; Abdul Jami

    2010-01-01

    Research conducted by BATAN's researchers has resulted in a number of competences that can be used to produce goods and services for the industrial sector. However, it is difficult to convey and apply these R and D products to industry. Evaluation results show that each research result should be accompanied by a techno-economic analysis to establish the feasibility of a product for industry. Further analysis of the multi-product concept, in which one business produces several main products, will be carried out. For this purpose, a software package simulating techno-economic/economic feasibility using deterministic and stochastic data (Monte Carlo method) has been developed for multi-product cases, including side products. The programming language used is Visual Basic .NET 2003, with SQL as the database processing software. The software applies a sensitivity test to identify which investment criteria are sensitive for the prospective businesses. Performance (trial) tests have been conducted and the results meet the design requirements: investment feasibility and sensitivity are displayed both deterministically and stochastically, and the results can be interpreted readily to support business decisions. Validation has been performed using Microsoft Excel (for a single product). The trial test and validation results show that this package meets demands and is ready for use. (author)

  16. MarkoLAB: A simulator to study ionic channel's stochastic behavior.

    Science.gov (United States)

    da Silva, Robson Rodrigues; Goroso, Daniel Gustavo; Bers, Donald M; Puglisi, José Luis

    2017-08-01

    Mathematical models of the cardiac cell have started to include Markovian representations of the ionic channels instead of the traditional Hodgkin & Huxley formulations. There are many reasons for this: Markov models are not restricted to the idea of independent gates defining the channel, they allow more complex descriptions with specific transitions between open, closed or inactivated states, and, more importantly, those states can be closely related to the underlying channel structure and conformational changes. We used the LabVIEW® and MATLAB® programs to implement the simulator MarkoLAB, which allows a dynamical 3D representation of the Markovian model of the channel. Monte Carlo simulation was used to implement the stochastic transitions among states. The user can specify the voltage protocol by setting the holding potential, the step-to voltage and the duration of the stimuli. The most studied feature of a channel is the current flowing through it. This happens when the channel stays in the open state, but most of the time, as revealed by the low open probability values, the channel remains in the inactivated or closed states. By focusing only on when the channel enters or leaves the open state, we miss most of its activity. MarkoLAB proved to be quite useful for visualizing the whole behavior of the channel, not only when the channel produces a current. Such a dynamic representation provides more complete information about channel kinetics and will be a powerful tool to demonstrate the effect of gene mutations or drugs on channel function. MarkoLAB provides an original way of visualizing the stochastic behavior of a channel. It clarifies concepts such as recovery from inactivation, calcium- versus voltage-dependent inactivation, and tail currents. It is not restricted to ionic channels but can be extended to other transporters, such as exchangers and pumps. This program is intended as a didactical tool to illustrate the dynamical behavior of a
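
    The Monte Carlo scheme behind such a tool can be sketched compactly. The example below steps a hypothetical three-state channel (closed, open, inactivated) through stochastic transitions; the rate matrix is a made-up illustration, not a published channel model or MarkoLAB's internals.

    ```python
    # Continuous-time Markov chain sketch of a three-state ion channel.
    # Q[i, j] is the transition rate from state i to state j (1/ms);
    # the values are invented for illustration only.
    import numpy as np

    rng = np.random.default_rng(3)
    states = ["C", "O", "I"]
    Q = np.array([[0.0, 0.8, 0.0],
                  [0.5, 0.0, 0.3],
                  [0.0, 0.05, 0.0]])

    def simulate(t_end, s=0):
        t, path = 0.0, [(0.0, states[s])]
        while t < t_end:
            rates = Q[s]
            total = rates.sum()
            t += rng.exponential(1.0 / total)   # dwell time in state s
            s = rng.choice(3, p=rates / total)  # pick the next state
            path.append((t, states[s]))
        return path

    for t, s in simulate(20.0)[:8]:
        print(f"{t:6.2f} ms -> {s}")
    ```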

  17. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    Science.gov (United States)

    Guerrier, C.; Holcman, D.

    2017-07-01

    The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space, searching for small binding targets such as buffers or active sites. Bridging the small and large spatial scales is achieved through rare events, in which a Brownian particle finds a small target, characterized by long-time distributions. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory, which coarse-grains the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets that trigger vesicular release.
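
    The second, coarse-grained approach amounts to sampling Poissonian binding events, which a Gillespie-style direct method does in a few lines. The sketch below assumes N independent particles that each bind at a narrow-escape rate kappa; both values are arbitrary illustrations, not the authors' computation.

    ```python
    # Gillespie direct-method sketch of the coarse-grained picture: each of
    # N free particles binds a small target at a Poissonian rate kappa
    # (as produced by narrow-escape theory). kappa and N are assumed values.
    import numpy as np

    rng = np.random.default_rng(4)
    kappa, N = 0.02, 100        # per-particle binding rate, particle count

    def arrival_times():
        t, free, times = 0.0, N, []
        while free > 0:
            a = kappa * free              # total propensity
            t += rng.exponential(1.0 / a) # time to next binding event
            free -= 1
            times.append(t)
        return np.array(times)

    times = arrival_times()
    print("first and last arrivals:", times[0], times[-1])
    ```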

  18. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    Energy Technology Data Exchange (ETDEWEB)

    Guerrier, C. [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Holcman, D., E-mail: david.holcman@ens.fr [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Mathematical Institute, Oxford OX2 6GG, Newton Institute (United Kingdom)

    2017-07-01

    The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space, searching for small binding targets such as buffers or active sites. Bridging the small and large spatial scales is achieved through rare events, in which a Brownian particle finds a small target, characterized by long-time distributions. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory, which coarse-grains the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets that trigger vesicular release.

  19. Metaheuristic simulation optimisation for the stochastic multi-retailer supply chain

    Science.gov (United States)

    Omar, Marina; Mustaffa, Noorfa Haszlinna H.; Othman, Siti Norsyahida

    2013-04-01

    Supply Chain Management (SCM) is an important activity in all producing facilities and in many organizations, enabling vendors, manufacturers and suppliers to interact gainfully and plan the flow of goods and services optimally. Simulation optimization approaches are now widely used in research on finding the best solutions for decision-making in SCM, which generally faces considerable complexity, with large sources of uncertainty and various decision factors. Metaheuristics are the most popular simulation optimization approach; however, very few studies have applied them to optimizing simulation models of supply chains. Thus, this paper evaluates the performance of a metaheuristic method for stochastic supply chains in determining the flexible inventory replenishment parameters that minimize the total operating cost. The simulation optimization model is based on the Bees algorithm (BA), which has been widely applied in engineering applications such as training neural networks for pattern recognition. BA is a new member of the meta-heuristics family and models the natural food-foraging behavior of honey bees. Honey bees use several mechanisms, such as the waggle dance, to optimally locate food sources and to search for new ones, which makes them a good candidate for developing new optimization algorithms. The model considers an outbound centralised distribution system consisting of one supplier and three identical retailers; demands are assumed to be independent and identically distributed, with unlimited supply capacity at the supplier.

  20. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    International Nuclear Information System (INIS)

    Guerrier, C.; Holcman, D.

    2017-01-01

    The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space, searching for small binding targets such as buffers or active sites. Bridging the small and large spatial scales is achieved through rare events, in which a Brownian particle finds a small target, characterized by long-time distributions. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory, which coarse-grains the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets that trigger vesicular release.

  1. Optimizing JPC-based remote entanglement of transmon qubits via stochastic master equation simulations

    Science.gov (United States)

    Zalys-Geller, E.; Hatridge, M.; Silveri, M.; Narla, A.; Sliwa, K. M.; Shankar, S.; Girvin, S. M.; Devoret, M. H.

    2015-03-01

    Remote entanglement of two superconducting qubits may be accomplished by first entangling them with flying coherent microwave pulses, and then erasing the which-path information of these pulses by using a non-degenerate parametric amplifier such as the Josephson Parametric Converter (JPC). Crucially, this process requires no direct interaction between the two qubits. The JPC, however, will fail to completely erase the which-path information if the flying microwave pulses encode any difference in dynamics of the two qubit-cavity systems. This which-path information can easily arise from mismatches in the cavity linewidths and the cavity dispersive shifts from their respective qubits. Through analysis of the Stochastic Master Equation for this system, we have found a strategy for shaping the measurement pulses to eliminate the effect of these mismatches on the entangling measurement. We have then confirmed the effectiveness of this strategy by numerical simulation. Work supported by: IARPA, ARO, and NSF.

  2. A two-state stochastic model for nanoparticle self-assembly: theory, computer simulations and applications

    International Nuclear Information System (INIS)

    Schwen, E M; Mazilu, I; Mazilu, D A

    2015-01-01

    We introduce a stochastic cooperative model for particle deposition and evaporation relevant to ionic self-assembly of nanoparticles with applications in surface fabrication and nanomedicine, and present a method for mapping our model onto the Ising model. The mapping process allows us to use the established results for the Ising model to describe the steady-state properties of our system. After completing the mapping process, we investigate the time dependence of particle density using the mean field approximation. We complement this theoretical analysis with Monte Carlo simulations that support our model. These techniques, which can be used separately or in combination, are useful as pedagogical tools because they are tractable mathematically and they apply equally well to many other physical systems with nearest-neighbour interactions including voter and epidemic models. (paper)
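
    A bare-bones Monte Carlo realization of such a cooperative deposition/evaporation model is shown below; the neighbour-dependent rates are illustrative assumptions, not the paper's parameterization.

    ```python
    # Monte Carlo sketch of cooperative deposition/evaporation on a 1-D ring
    # lattice with nearest-neighbour interactions, of the kind mappable onto
    # the Ising model. Base rates and cooperativity are assumed values.
    import numpy as np

    rng = np.random.default_rng(5)
    L, steps = 100, 200_000
    site = np.zeros(L, dtype=int)            # 0 empty, 1 occupied
    k_dep, k_ev, coop = 0.6, 0.4, 0.2        # deposition, evaporation, coupling

    for _ in range(steps):
        i = rng.integers(L)
        nn = site[(i - 1) % L] + site[(i + 1) % L]     # occupied neighbours
        if site[i] == 0:
            rate = k_dep * (1 + coop * nn)             # neighbours aid deposition
        else:
            rate = k_ev * (1 - coop * nn / 2)          # neighbours hinder escape
        if rng.random() < rate:
            site[i] ^= 1                               # flip the occupancy
    print("steady-state density ~", site.mean())
    ```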

  3. Stochastic-Strength-Based Damage Simulation of Ceramic Matrix Composite Laminates

    Science.gov (United States)

    Nemeth, Noel N.; Mital, Subodh K.; Murthy, Pappu L. N.; Bednarcyk, Brett A.; Pineda, Evan J.; Bhatt, Ramakrishna T.; Arnold, Steven M.

    2016-01-01

    The Finite Element Analysis-Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program was used to characterize and predict the progressive damage response of silicon-carbide-fiber-reinforced reaction-bonded silicon nitride matrix (SiC/RBSN) composite laminate tensile specimens. Studied were unidirectional laminates [0]₈, [10]₈, [45]₈, and [90]₈; cross-ply laminates [0₂/90₂]s; angle-ply laminates [+45₂/−45₂]s; double-edge-notched [0]₈ laminates; and central-hole laminates. Results correlated well with the experimental data. This work was performed as a validation and benchmarking exercise of the FEAMAC/CARES program. FEAMAC/CARES simulates stochastic-based, discrete-event progressive damage of ceramic matrix composite and polymer matrix composite material structures. It couples three software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC, and Abaqus is used to model the overall composite structure. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events that progress incrementally until ultimate structural failure.

  4. The effect of regulatory governance on efficiency of thermal power generation in India: A stochastic frontier analysis

    International Nuclear Information System (INIS)

    Ghosh, Ranjan; Kathuria, Vinish

    2016-01-01

    This paper investigates the impact of institutional quality – typified as regulatory governance – on the performance of thermal power plants in India. The Indian power sector was reformed in the early 1990s. However, reforms are effective only as much as the regulators are committed in ensuring that they are implemented. We hypothesize that higher the quality of regulation in a federal Indian state, higher is the efficiency of electric generation utilities. A translog stochastic frontier model is estimated using index of state-level independent regulation as one of the determinants of inefficiency. The dataset comprises a panel of 77 coal-based thermal power plants during the reform period covering over 70% of installed electricity generation capacity. The mean technical efficiency of 76.7% indicates there is wide scope for efficiency improvement in the sector. Results are robust to various model specifications and show that state-level regulators have positively impacted plant performance. Technical efficiency is sensitive to both unbundling of state utilities, and regulatory experience. The policy implication is that further reforms which empower independent regulators will have far reaching impacts on power sector performance. - Highlights: • The impact of regulatory governance on Indian generation efficiency is investigated. • Stochastic frontier analysis (SFA) on a panel dataset covering pre and post reform era. • Index of state-wise variation in regulation to explain inefficiency effects. • Results show improved but not very high technical efficiencies. • State-level regulation has positively impacted power plant performance.

  5. The High-Throughput Stochastic Human Exposure and Dose Simulation Model (SHEDS-HT) & The Chemical and Products Database (CPDat)

    Science.gov (United States)

    The Stochastic Human Exposure and Dose Simulation Model – High-Throughput (SHEDS-HT) is a U.S. Environmental Protection Agency research tool for predicting screening-level (low-tier) exposures to chemicals in consumer products. This course will present an overview of this m...

  6. Using stochastic borehole seismic velocity tomography and Bayesian simulation to estimate Ni, Cu and Co grades.

    Science.gov (United States)

    Perozzi, Lorenzo; Gloaguen, Erwan; Rondenay, Stephane; Leite, André; McDowell, Glenn; Wheeler, Robert

    2010-05-01

    In the mining industry, classic methods to build a grade model for ore deposits are based on kriging or cokriging of grades for targeted minerals measured in drill core within fertile geological units. As the complexity of the geological geometry increases, so does the complexity of grade estimation. For example, in layered mafic or ultramafic intrusions, it is necessary to know the layering geometry in order to perform kriging of grades in the most fertile zones. Without additional information on the geological framework, the definition of fertile zones is a low-precision exercise that requires extensive experience and good ability from the geologist. Recently, thanks to improvements in computers and geophysical tools, seismic tomography has become very attractive in many application fields. Indeed, this non-intrusive technique allows the mechanical properties of the ground to be inferred from travel-time and amplitude analysis of the wavelet transmitted between two boreholes, hence providing additional information on the nature of the deposit. Commonly used crosshole seismic velocity tomography algorithms estimate 2D slowness models (inverse of velocity) in the plane between the boreholes using the measured direct-wave travel times from the transmitter (located in one of the holes) to the receivers (located in the other hole). Furthermore, geophysical borehole logging can be used to constrain seismic tomography between drill holes. Finally, this project aims to estimate the grades of economically valuable minerals by integrating seismic tomography data with drill-core measured grades acquired by Vale Inco for one of their mine sites in operation. In this study, a new type of algorithm that combines geostatistical simulation and tomography in the same process (namely stochastic tomography) has been used. The principle of stochastic tomography is based on the straight-ray approximation and uses the linear relationship between travel time and slowness to estimate the slowness

  7. Comparative analysis of cogeneration power plants optimization based on stochastic method using superstructure and process simulator

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Leonardo Rodrigues de [Instituto Federal do Espirito Santo, Vitoria, ES (Brazil)], E-mail: leoaraujo@ifes.edu.br; Donatelli, Joao Luiz Marcon [Universidade Federal do Espirito Santo (UFES), Vitoria, ES (Brazil)], E-mail: joaoluiz@npd.ufes.br; Silva, Edmar Alino da Cruz [Instituto Tecnologico de Aeronautica (ITA/CTA), Sao Jose dos Campos, SP (Brazil); Azevedo, Joao Luiz F. [Instituto de Aeronautica e Espaco (CTA/IAE/ALA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    Thermal systems are essential in facilities such as thermoelectric plants, cogeneration plants, refrigeration systems and air conditioning, among others, in which much of the energy consumed by humanity is processed. In a world with finite natural fuel sources and growing energy demand, issues related to thermal system design, such as cost estimation, design complexity, environmental protection and optimization, are becoming increasingly important. Hence the need to understand the mechanisms that degrade energy, to improve the use of energy sources, to reduce environmental impacts, and to reduce project, operation and maintenance costs. In recent years, a consistent development of procedures and techniques for the computational design of thermal systems has occurred. In this context, the fundamental objective of this study is a comparative analysis of the structural and parametric optimization of a cogeneration system using two stochastic methods: a genetic algorithm and simulated annealing. This research uses a superstructure, modelled in a process simulator (IPSEpro by SimTech), in which the appropriate design options for the case studied are included. Accordingly, the optimal configuration of the cogeneration system is determined as an outcome of the optimization process, restricted to the configuration options included in the superstructure. The optimization routines are written in MS Excel Visual Basic so as to couple seamlessly with the process simulator. At the end of the optimization process, the optimal system configuration, given the characteristics of each specific problem, is defined. (author)
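
    For reference, the simulated-annealing side of such a comparison follows the standard Metropolis-acceptance and cooling template sketched below; the two-variable test objective is a stand-in for a call into a process-simulator cost evaluation and is purely illustrative.

    ```python
    # Generic simulated-annealing sketch: Metropolis acceptance with
    # geometric cooling. The toy cost function below stands in for an
    # external plant-cost simulation call; all parameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(6)

    def cost(x):
        return (x[0] - 1) ** 2 + 10 * np.sin(x[1]) ** 2 + x[1] ** 2

    x = rng.normal(size=2)
    best, T = x.copy(), 1.0
    for _ in range(5000):
        cand = x + rng.normal(scale=0.1, size=2)       # local perturbation
        dE = cost(cand) - cost(x)
        if dE < 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
            x = cand
            if cost(x) < cost(best):
                best = x.copy()
        T *= 0.999                                     # geometric cooling
    print("best point:", best.round(3), "cost:", round(cost(best), 4))
    ```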

  8. Memetic algorithm for real-time combinatorial stochastic simulation optimization problems with performance analysis.

    Science.gov (United States)

    Horng, Shih-Cheng; Lin, Shin-Yeu; Lee, Loo Hay; Chen, Chun-Hung

    2013-10-01

    A three-phase memetic algorithm (MA) is proposed to find a suboptimal solution for real-time combinatorial stochastic simulation optimization (CSSO) problems with a large discrete solution space. In phase 1, a genetic algorithm assisted by an offline global surrogate model is applied to find N good diversified solutions. In phase 2, a probabilistic local search method integrated with an online surrogate model is used to search for the approximate local optimum corresponding to each of the N solutions resulting from phase 1. In phase 3, the optimal computing budget allocation technique is employed to simulate and identify the best solution among the N local optima from phase 2. The proposed MA is applied to an assemble-to-order problem, which is a real-world CSSO problem. Extensive simulations were performed to demonstrate its superior performance, and results showed that the obtained solution is within 1% of the true optimum with a probability of 99%. We also provide a rigorous analysis to evaluate the performance of the proposed MA.

  9. Ground motion simulation for the 23 August 2011, Mineral, Virginia earthquake using physics-based and stochastic broadband methods

    Science.gov (United States)

    Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz

    2015-01-01

    Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare with observed motions. The methods include a physics‐based model by Hartzell et al. (1999, 2005), a stochastic source‐based model by Boore (2009), and a stochastic site‐based model by Rezaeian and Der Kiureghian (2010, 2012). The ground‐motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground‐motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics‐based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site‐based model also yields realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source‐based model has larger bias at lower frequencies (<0.5 Hz) in its frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows no obvious pattern with distance in the simulation bias, but some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall‐off with distance for all three models, comparable PGA and PSA amplitudes for the physics‐based and stochastic site‐based models, and systematically lower amplitudes for the stochastic source‐based model at lower frequencies (<0.5 Hz).

  10. INCLUDING RISK IN ECONOMIC FEASIBILITY ANALYSIS: A STOCHASTIC SIMULATION MODEL FOR BLUEBERRY INVESTMENT DECISIONS IN CHILE

    Directory of Open Access Journals (Sweden)

    GERMÁN LOBOS

    2015-12-01

    The traditional net present value (NPV) method for analyzing the economic profitability of an investment, based on a deterministic approach, does not adequately represent the implicit risk associated with different but correlated input variables. Using a stochastic simulation approach to evaluate the profitability of blueberry (Vaccinium corymbosum L.) production in Chile, the objective of this study is to illustrate the complexity of including risk in economic feasibility analysis when the project is subject to several correlated risks. The results of the simulation analysis suggest that failing to include the intratemporal correlation between input variables underestimates the risk associated with investment decisions. The methodological contribution of this study is to illustrate the complexity of the interrelationships between uncertain variables and their impact on the viability of this type of business in Chile. The steps for the analysis of economic viability were as follows. First, adjusted probability distributions for the stochastic input variables (SIV) were simulated and validated. Second, the random values of the SIV were used to calculate random values of variables such as production, revenues, costs, depreciation, taxes and net cash flows. Third, the complete stochastic model was simulated with 10,000 iterations using random values for the SIV, giving the information needed to estimate the probability distributions of the stochastic output variables (SOV), such as the net present value, internal rate of return, value at risk, average cost of production, contribution margin and return on capital. Fourth, the simulation results of the complete stochastic model were used to analyze alternative scenarios and provide the results to decision makers in the form of probabilities, probability distributions, and probabilistic forecasts for the SOV. The main conclusion is that this project is a profitable alternative investment in fruit trees in
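
    The role of intratemporal correlation in the third step can be reproduced with a few lines of Monte Carlo. The sketch below draws correlated price and yield paths and propagates them to an NPV distribution; the means, volatilities, correlation and cash-flow structure are all assumed for illustration and are unrelated to the study's data.

    ```python
    # Monte Carlo NPV sketch with intratemporally correlated inputs
    # (price and yield). Every number below is an invented placeholder.
    import numpy as np

    rng = np.random.default_rng(7)
    n_iter, years, rate, invest = 10_000, 10, 0.08, 60_000
    mu = np.array([2.5, 9_000])            # price ($/kg), yield (kg/ha)
    sd = np.array([0.4, 1_500])
    corr = -0.5                            # higher yields depress prices
    cov = np.array([[sd[0]**2, corr * sd[0] * sd[1]],
                    [corr * sd[0] * sd[1], sd[1]**2]])

    draws = rng.multivariate_normal(mu, cov, size=(n_iter, years))
    revenue = draws[..., 0] * draws[..., 1]           # price x yield, per year
    cash = revenue - 12_000                           # fixed annual cost
    disc = (1 + rate) ** -np.arange(1, years + 1)     # discount factors
    npv = cash @ disc - invest
    print(f"P(NPV<0) = {(npv < 0).mean():.3f}, mean NPV = {npv.mean():,.0f}")
    ```

    Setting `corr = 0` and rerunning shows how ignoring the correlation changes the spread of the NPV distribution, which is the effect the abstract highlights.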

  11. Differentials in technical efficiency among smallholder cassava farmers in Central Madagascar: A Cobb Douglas stochastic frontier production approach

    Directory of Open Access Journals (Sweden)

    B.C. Okoye

    2016-12-01

    This study employed the Cobb–Douglas stochastic frontier production function to measure the level of technical efficiency among smallholder cassava farmers in Central Madagascar. A multi-stage random sampling technique was used to select 180 cassava farmers in the region, and from this sample input–output data were obtained using the cost-route approach. The parameters of the stochastic frontier production function were estimated using the maximum likelihood method. The results of the analysis showed that individual farm-level technical efficiency was about 79%. The study found education, gender and age to be indirectly and significantly related to technical efficiency at a 1% level of probability, and household size at a 5% level. The coefficient for occupational status was positive and highly significant at a 1% level. The results show that the study's cassava farmers are not fully technically efficient, with a mean score of 0.79, suggesting that opportunities still exist for increasing efficiency among the farmers. There is a need, therefore, to ensure that these farmers have access to the appropriate inputs, especially land and capital. The results also call for land reform policies to be introduced, aimed at making more land available, especially to the younger and full-time female farmers.

  12. Product Costing in FMT: Comparing Deterministic and Stochastic Models Using Computer-Based Simulation for an Actual Case Study

    DEFF Research Database (Denmark)

    Nielsen, Steen

    2000-01-01

    This paper expands the traditional product costing technique by including a stochastic form in a complex production process for product costing. The stochastic phenomenon in flexible manufacturing technologies is seen as an important phenomenon that companies try to decrease or eliminate. DFM has been used for evaluating the appropriateness of the firm's production capability. In this paper a simulation model is developed to analyze the relevant cost behaviour with respect to DFM and to develop a more streamlined process in the layout of the manufacturing process.

  13. Efficient Green's Function Reaction Dynamics (GFRD) simulations for diffusion-limited, reversible reactions

    Science.gov (United States)

    Bashardanesh, Zahedeh; Lötstedt, Per

    2018-03-01

    In diffusion controlled reversible bimolecular reactions in three dimensions, a dissociation step is typically followed by multiple, rapid re-association steps slowing down the simulations of such systems. In order to improve the efficiency, we first derive an exact Green's function describing the rate at which an isolated pair of particles undergoing reversible bimolecular reactions and unimolecular decay separates beyond an arbitrarily chosen distance. Then the Green's function is used in an algorithm for particle-based stochastic reaction-diffusion simulations for prediction of the dynamics of biochemical networks. The accuracy and efficiency of the algorithm are evaluated using a reversible reaction and a push-pull chemical network. The computational work is independent of the rates of the re-associations.

  14. Surface gravity wave effects in the oceanic boundary layer: large-eddy simulation with vortex force and stochastic breakers

    Science.gov (United States)

    Sullivan, Peter P.; McWilliams, James C.; Melville, W. Kendall

    The wind-driven stably stratified mid-latitude oceanic surface turbulent boundary layer is computationally simulated in the presence of a specified surface gravity-wave field. The gravity waves have broad wavenumber and frequency spectra typical of measured conditions in near-equilibrium with the mean wind speed. The simulation model is based on (i) an asymptotic theory for the conservative dynamical effects of waves on the wave-averaged boundary-layer currents and (ii) a boundary-layer forcing by a stochastic representation of the impulses and energy fluxes in a field of breaking waves. The wave influences are shown to be profound on both the mean current profile and turbulent statistics compared to a simulation without these wave influences and forced by an equivalent mean surface stress. As expected from previous studies with partial combinations of these wave influences, Langmuir circulations due to the wave-averaged vortex force make vertical eddy fluxes of momentum and material concentration much more efficient and non-local (i.e. with negative eddy viscosity near the surface), and they combine with the breakers to increase the turbulent energy and dissipation rate. They also combine in an unexpected positive feedback in which breaker-generated vorticity seeds the creation of a new Langmuir circulation and instigates a deep strong intermittent downwelling jet that penetrates through the boundary layer and increases the material entrainment rate at the base of the layer. These wave effects on the boundary layer are greater for smaller wave ages and higher mean wind speeds.

  15. Rapid sampling of stochastic displacements in Brownian dynamics simulations with stresslet constraints

    Science.gov (United States)

    Fiore, Andrew M.; Swan, James W.

    2018-01-01

    … equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units, capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue, is used for benchmarking and made freely available in the supplementary material (ftp://ftp.aip.org/epaps/journ_chem_phys/E-JCPSA6-148-012805).
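
    Stripped of the stresslet constraints, the fluctuation-dissipation requirement on the stochastic displacements is simple: their covariance must equal 2D dt. The sketch below integrates free Brownian particles in a harmonic trap with a plain Euler-Maruyama step, a much-simplified stand-in for the constrained mid-point scheme described above; every parameter is illustrative.

    ```python
    # Euler-Maruyama Brownian dynamics for non-interacting particles in a
    # harmonic trap; stochastic displacements satisfy <dx dx> = 2 D dt.
    # All parameters are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(8)
    kT, gamma, dt, steps = 1.0, 1.0, 1e-3, 10_000
    D = kT / gamma                     # Einstein relation
    x = np.zeros((64, 3))              # 64 particles in 3-D

    for _ in range(steps):
        force = -x                     # harmonic trap with stiffness k = 1
        x += force / gamma * dt + np.sqrt(2 * D * dt) * rng.normal(size=x.shape)

    # equipartition check: <x^2> per axis should approach kT / k = 1
    print("mean square coordinate per axis:", (x**2).mean(axis=0).round(2))
    ```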

  16. Stochastic Global Optimization and Its Applications with Fuzzy Adaptive Simulated Annealing

    CERN Document Server

    Aguiar e Oliveira Junior, Hime; Petraglia, Antonio; Rembold Petraglia, Mariane; Augusta Soares Machado, Maria

    2012-01-01

    Stochastic global optimization is a very important subject that has applications in virtually all areas of science and technology. There is therefore nothing more opportune than writing a book about a successful and mature algorithm that has turned out to be a good tool for solving difficult problems. Here we present techniques for solving several problems by means of Fuzzy Adaptive Simulated Annealing (Fuzzy ASA), a fuzzy-controlled version of ASA, and by ASA itself. ASA is a sophisticated global optimization algorithm based on the simulated annealing paradigm, coded in the C programming language and developed to statistically find the best global fit of a nonlinear, constrained, non-convex cost function over a multi-dimensional space. By presenting detailed examples of its application we want to stimulate the reader's intuition and make the use of Fuzzy ASA (or regular ASA) easier for everyone wishing to use these tools to solve problems. We kept formal mathematical requirements to a

  17. A framework for stochastic simulation of distribution practices for hotel reservations

    Energy Technology Data Exchange (ETDEWEB)

    Halkos, George E.; Tsilika, Kyriaki D. [Laboratory of Operations Research, Department of Economics, University of Thessaly, Korai 43, 38 333, Volos (Greece)

    2015-03-10

    The focus of this study is primarily on the Greek hotel industry. The objective is to design and develop a framework for stochastic simulation of reservation requests, reservation arrivals, cancellations and hotel occupancy over a planning horizon of a tourist season. In the Greek hospitality industry there were two competing policies for the reservation planning process up to 2003: reservations coming directly from customers, and reservations management relying on tour operator(s). Recently the Internet, along with other emerging technologies, has offered the potential to disrupt enduring distribution arrangements, and the study therefore focuses on the choice of distribution intermediaries. We present an empirical model for the hotel reservation planning process that makes use of symbolic simulation, the Monte Carlo method, as requests for reservations, cancellations, and arrival rates are all sources of uncertainty. As a case study we consider the problem of determining the optimal booking strategy for a medium-size hotel on Skiathos Island, Greece. Probability distributions and parameter estimates are derived from the available historical data and from suggestions made in the relevant literature. The results of this study may assist hotel managers in defining distribution strategies for hotel rooms and in evaluating the performance of the reservations management system.
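
    A stripped-down version of such a reservation simulator needs only request, cancellation and length-of-stay distributions. The sketch below uses assumed Poisson requests, binomial cancellation survival and uniform stay lengths for a hypothetical 80-room hotel; none of the numbers come from the study.

    ```python
    # Toy Monte Carlo of one season of hotel reservations and occupancy.
    # Request rate, cancellation probability, stay lengths and capacity
    # are all invented placeholders.
    import numpy as np

    rng = np.random.default_rng(9)
    days, rooms = 180, 80
    req_rate, cancel_p = 6.0, 0.12

    booked, lost = np.zeros(days), 0
    for d in range(days):
        requests = rng.poisson(req_rate)                 # reservation requests
        arrivals = rng.binomial(requests, 1 - cancel_p)  # survive cancellation
        for stay in rng.integers(1, 8, size=arrivals):   # nights per booking
            span = slice(d, min(d + stay, days))
            if booked[span].max() < rooms:               # room free every night?
                booked[span] += 1
            else:
                lost += 1                                # request turned away
    print(f"mean occupancy {booked.mean() / rooms:.1%}, lost bookings {lost}")
    ```

    Distribution policies can then be compared by splitting `req_rate` between direct and tour-operator channels with different cancellation behaviour.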

  18. Exact and Approximate Stochastic Simulations of the MAPK Pathway and Comparisons of Simulations’ Results

    Directory of Open Access Journals (Sweden)

    Purutçuoǧlu Vilda

    2006-12-01

    The MAPK (mitogen-activated protein kinase) or, synonymously, ERK (extracellular signal regulated kinase) pathway, whose components are the Ras, Raf, and MEK proteins with many biochemical links, is one of the major signalling systems involved in cellular growth control of eukaryotes, including cell proliferation, transformation, differentiation, and apoptosis. In this study we describe the MAPK/ERK pathway via (quasi) biochemical reactions and then implement the pathway as a stochastic Markov process. A novelty of our approach is to use multiple parametrizations in order to deal with molecules for which localization in the cell is an intricate part of the dynamic process, and to describe the protein using different binding sites and various phosphorylations. We simulate the system by exact and different approximate methods, e.g. via the Poisson τ-leap, the binomial τ-leap and diffusion methods, in which we introduce a new updating plan for dependent columns of the diffusion matrix. Finally we compare the results of the different algorithms against current biological knowledge and find new relations in this complex system.

  19. A conditional stochastic weather generator for seasonal to multi-decadal simulations

    Science.gov (United States)

    Verdin, Andrew; Rajagopalan, Balaji; Kleiber, William; Podestá, Guillermo; Bert, Federico

    2018-01-01

    We present the application of a parametric stochastic weather generator within a nonstationary context, enabling simulations of weather sequences conditioned on interannual and multi-decadal trends. The generalized linear model framework of the weather generator allows any number of covariates to be included, such as large-scale climate indices, local climate information, seasonal precipitation and temperature, among others. Here we focus on the Salado A basin of the Argentine Pampas as a case study, but the methodology is portable to any region. We include domain-averaged (e.g., areal) seasonal total precipitation and mean maximum and minimum temperatures as covariates for conditional simulation. Areal covariates are motivated by a principal component analysis that indicates the seasonal spatial average is the dominant mode of variability across the domain. We find this modification to be effective in capturing the nonstationarity prevalent in interseasonal precipitation and temperature data. We further illustrate the ability of this weather generator to act as a spatiotemporal downscaler of seasonal forecasts and multidecadal projections, both of which are generally of coarse resolution.
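
    The conditioning idea can be illustrated with a toy generator in which a seasonal-total covariate shifts both a logistic occurrence probability and a Gamma amounts distribution; the link functions and coefficients below are invented for illustration and are not the study's fitted GLM.

    ```python
    # Toy GLM-style conditional weather generator: daily precipitation
    # occurrence (logistic link) and amounts (Gamma), both conditioned on a
    # seasonal-total covariate. All coefficients are assumed placeholders.
    import numpy as np

    rng = np.random.default_rng(10)

    def simulate_season(seasonal_total, days=90):
        # covariate: standardized anomaly of the seasonal precipitation total
        z = (seasonal_total - 300.0) / 100.0
        p_wet = 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * z)))  # occurrence probability
        wet = rng.random(days) < p_wet
        amounts = rng.gamma(shape=0.8, scale=8.0 * (1 + 0.3 * z), size=days)
        return np.where(wet, amounts, 0.0)

    for tot in (200.0, 400.0):
        sim = simulate_season(tot)
        print(f"covariate total {tot}: simulated {sim.sum():.0f} mm, "
              f"{(sim > 0).mean():.0%} wet days")
    ```

    In this way, drier and wetter target seasons produce correspondingly shifted daily sequences, which is the downscaling behaviour the abstract describes.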

  20. FluTE, a publicly available stochastic influenza epidemic simulation model.

    Directory of Open Access Journals (Sweden)

    Dennis L Chao

    2010-01-01

    Mathematical and computer models of epidemics have contributed to our understanding of the spread of infectious disease and the measures needed to contain or mitigate them. To help prepare for future influenza seasonal epidemics or pandemics, we developed a new stochastic model of the spread of influenza across a large population. Individuals in this model have realistic social contact networks, and transmission and infections are based on the current state of knowledge of the natural history of influenza. The model has been calibrated so that outcomes are consistent with the 1957/1958 Asian A(H2N2) and 2009 pandemic A(H1N1) influenza viruses. We present examples of how this model can be used to study the dynamics of influenza epidemics in the United States and simulate how to mitigate or delay them using pharmaceutical interventions and social distancing measures. Computer simulation models play an essential role in informing public policy and evaluating pandemic preparedness plans. We have made the source code of this model publicly available to encourage its use and further development.

  1. FluTE, a publicly available stochastic influenza epidemic simulation model.

    Science.gov (United States)

    Chao, Dennis L; Halloran, M Elizabeth; Obenchain, Valerie J; Longini, Ira M

    2010-01-29

    Mathematical and computer models of epidemics have contributed to our understanding of the spread of infectious disease and the measures needed to contain or mitigate them. To help prepare for future influenza seasonal epidemics or pandemics, we developed a new stochastic model of the spread of influenza across a large population. Individuals in this model have realistic social contact networks, and transmission and infections are based on the current state of knowledge of the natural history of influenza. The model has been calibrated so that outcomes are consistent with the 1957/1958 Asian A(H2N2) and 2009 pandemic A(H1N1) influenza viruses. We present examples of how this model can be used to study the dynamics of influenza epidemics in the United States and simulate how to mitigate or delay them using pharmaceutical interventions and social distancing measures. Computer simulation models play an essential role in informing public policy and evaluating pandemic preparedness plans. We have made the source code of this model publicly available to encourage its use and further development.

  2. A framework for stochastic simulation of distribution practices for hotel reservations

    International Nuclear Information System (INIS)

    Halkos, George E.; Tsilika, Kyriaki D.

    2015-01-01

    The focus of this study is primarily on the Greek hotel industry. The objective is to design and develop a framework for stochastic simulation of reservation requests, reservation arrivals, cancellations and hotel occupancy over a planning horizon of a tourist season. In the Greek hospitality industry there were two competing policies for the reservation planning process up to 2003: reservations coming directly from customers, and reservations management relying on tour operator(s). Recently the Internet, along with other emerging technologies, has offered the potential to disrupt enduring distribution arrangements, and the study therefore focuses on the choice of distribution intermediaries. We present an empirical model for the hotel reservation planning process that makes use of symbolic simulation, the Monte Carlo method, as requests for reservations, cancellations, and arrival rates are all sources of uncertainty. As a case study we consider the problem of determining the optimal booking strategy for a medium-size hotel on Skiathos Island, Greece. Probability distributions and parameter estimates are derived from the available historical data and from suggestions made in the relevant literature. The results of this study may assist hotel managers in defining distribution strategies for hotel rooms and in evaluating the performance of the reservations management system.

  3. Pricing stock options under stochastic volatility and interest rates with efficient method of moments estimation

    NARCIS (Netherlands)

    Jiang, George J.; Sluis, Pieter J. van der

    1999-01-01

    While the stochastic volatility (SV) generalization has been shown to improve the explanatory power over the Black-Scholes model, empirical implications of SV models on option pricing have not yet been adequately tested. The purpose of this paper is to first estimate a multivariate SV model using

  4. An efficient approach based on radial basis functions for solving stochastic fractional differential equations

    Directory of Open Access Journals (Sweden)

    N. Ahmadi

    2017-02-01

    Full Text Available Abstract In this paper, we present a collocation method based on Gaussian Radial Basis Functions (RBFs) for approximating the solution of stochastic fractional differential equations (SFDEs). In this equation the fractional derivative is considered in the Caputo sense. We also prove existence and uniqueness results for the presented method. Numerical examples confirm the accuracy and efficiency of the method.
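
    The building block of such a scheme is the Gaussian kernel and the resulting collocation system. A minimal sketch on a deterministic interpolation problem (node count and shape parameter invented; the paper applies the same kernel within a collocation scheme for SFDEs):

    ```python
    import numpy as np

    def gaussian_rbf(r, eps):
        # phi(r) = exp(-(eps * r)^2), the kernel behind Gaussian RBF collocation
        return np.exp(-(eps * r) ** 2)

    # Illustrative building block only: solve the collocation system A c = f
    # for a deterministic interpolation problem on scattered nodes.
    nodes = np.linspace(0.0, 1.0, 15)
    f = np.sin(2 * np.pi * nodes)
    eps = 5.0  # shape parameter (invented; trades accuracy against conditioning)

    A = gaussian_rbf(np.abs(nodes[:, None] - nodes[None, :]), eps)
    coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)   # least squares for robustness

    x = np.linspace(0.0, 1.0, 200)
    u = gaussian_rbf(np.abs(x[:, None] - nodes[None, :]), eps) @ coeffs
    print("max interpolation error:", np.max(np.abs(u - np.sin(2 * np.pi * x))))
    ```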

  5. A stochastic model for simulation of the economic consequences of bovine virus diarrhoea virus infection in a dairy herd

    DEFF Research Database (Denmark)

    Sørensen, J.T.; Enevoldsen, Carsten; Houe, H.

    1995-01-01

    A dynamic, stochastic model simulating the technical and economic consequences of bovine virus diarrhoea virus (BVDV) infections for a dairy cattle herd, for use on a personal computer, was developed. The production and state changes of the herd were simulated by state changes of the individual cows and heifers. All discrete events at the cow level were triggered stochastically. Each cow and heifer was characterized by state variables such as stage of lactation, parity, oestrous status, decision for culling, milk production potential, and immune status for BVDV. The model was controlled by 170 decision variables describing biologic and management variables, including 21 decision variables describing the effect of BVDV infection on the production of the individual animal. Two markedly different scenarios were simulated to demonstrate the behaviour of the developed model and the potentials of the applied...

  6. Stochastic modeling of complex systems. From the theoretical foundations to the simulation of atmospheric wind fields; Stochastische Modellierung komplexer Systeme. Von den theoretischen Grundlagen zur Simulation atmosphaerischer Windfelder

    Energy Technology Data Exchange (ETDEWEB)

    Kleinhans, David

    2008-06-25

    This thesis investigates various methods for the description and modelling of complex systems. Complex systems can frequently be described by means of the dynamics of a small set of order parameters that comply with stochastic differential equations. Based on a method by Friedrich and Peinke for the direct estimation of drift and diffusion functions from measured data, the focus at first is on the statistical estimation of the dynamics of order parameters. In particular, an existing iterative method for data analysis purposes is refined by means of a 'Maximum-Likelihood' approach. Then the connection between stochastic processes 'in time' and 'in scale' is investigated. Moreover, the influence of external noise sources on Markov properties is studied. Compliance with Markov properties is essential for the application of efficient data analysis techniques. It turns out that Markov properties generally are seriously spoiled by the influence of external noise such as measurement noise. Then 'Continuous Time Random Walks' (CTRWs) are discussed, which form an extension of classic random processes. CTRWs are feasible for the modelling of non-Markov processes exhibiting anomalous diffusion properties in the ensemble sense, which frequently are observed in complex amorphous media. First, Fogedby's continuous description of CTRWs is investigated. Based on Fogedby's approach, an algorithm for the generation of continuous trajectories of CTRWs is then developed. Additionally, a physical framework for the microscopic dynamics of so-called 'trapping models' is introduced; these are currently used as models for the glass transition. The applicability of CTRWs for simulations of turbulent flows has been an open question. For this reason, the use of CTRWs for the generation of turbulent inflow wind fields for wind turbine simulations is discussed. At first an extensive introduction to the special needs of such...

  7. Efficient Collision Detection in a Simulated Hydrocyclone

    NARCIS (Netherlands)

    van Eijkeren, D.F.; Krebs, T.; Hoeijmakers, Hendrik Willem Marie

    2015-01-01

    Hydrocyclones enhance oil–water separation efficiency compared to conventional separation methods. An efficient collision detection scheme with Np ln Np dependency on the number of particles is proposed. The scheme is developed to investigate the importance of particle–particle interaction for flow

  8. Stochastic Frontier Approach and Data Envelopment Analysis to Total Factor Productivity and Efficiency Measurement of Bangladeshi Rice

    Science.gov (United States)

    Hossain, Md. Kamrul; Kamil, Anton Abdulbasah; Baten, Md. Azizul; Mustafa, Adli

    2012-01-01

    The objective of this paper is to apply the Translog Stochastic Frontier production model (SFA) and Data Envelopment Analysis (DEA) to estimate efficiencies over time and the Total Factor Productivity (TFP) growth rate for Bangladeshi rice crops (Aus, Aman and Boro), using the most recent data available, covering the period 1989–2008. Results indicate that technical efficiency was higher for Boro among the three types of rice, but the overall technical efficiency of rice production was found to be around 50%. Although positive changes exist in TFP for the sample analyzed, the average growth rate of TFP for rice production was estimated at almost the same levels for both Translog SFA with half-normal distribution and DEA. Estimated TFP from SFA is forecasted with an ARIMA (2, 0, 0) model. An ARIMA (1, 0, 0) model is used to forecast the TFP of Aman from the DEA estimation. PMID:23077500

  9. Stochastic Frontier approach and Data Envelopment Analysis to Total Factor Productivity and efficiency measurement of Bangladeshi rice.

    Science.gov (United States)

    Hossain, Md Kamrul; Kamil, Anton Abdulbasah; Baten, Md Azizul; Mustafa, Adli

    2012-01-01

    The objective of this paper is to apply the Translog Stochastic Frontier production model (SFA) and Data Envelopment Analysis (DEA) to estimate efficiencies over time and the Total Factor Productivity (TFP) growth rate for Bangladeshi rice crops (Aus, Aman and Boro), using the most recent data available, covering the period 1989-2008. Results indicate that technical efficiency was higher for Boro among the three types of rice, but the overall technical efficiency of rice production was found to be around 50%. Although positive changes exist in TFP for the sample analyzed, the average growth rate of TFP for rice production was estimated at almost the same levels for both Translog SFA with half-normal distribution and DEA. Estimated TFP from SFA is forecasted with an ARIMA (2, 0, 0) model. An ARIMA (1, 0, 0) model is used to forecast the TFP of Aman from the DEA estimation.

  10. StochPy: A Comprehensive, User-Friendly Tool for Simulating Stochastic Biological Processes

    NARCIS (Netherlands)

    T.R. Maarleveld (Timo); B.G. Olivier (Brett); F.J. Bruggeman (Frank)

    2013-01-01

    Single-cell and single-molecule measurements indicate the importance of stochastic phenomena in cell biology. Stochasticity creates spontaneous differences in the copy numbers of key macromolecules and the timing of reaction events between genetically-identical cells. Mathematical models

  11. Variational mean-field algorithm for efficient inference in large systems of stochastic differential equations.

    Science.gov (United States)

    Vrettas, Michail D; Opper, Manfred; Cornford, Dan

    2015-01-01

    This work introduces a Gaussian variational mean-field approximation for inference in dynamical systems which can be modeled by ordinary stochastic differential equations. This new approach allows one to express the variational free energy as a functional of the marginal moments of the approximating Gaussian process. A restriction of the moment equations to piecewise polynomial functions, over time, dramatically reduces the complexity of approximate inference for stochastic differential equation models and makes it comparable to that of discrete time hidden Markov models. The algorithm is demonstrated on state and parameter estimation for nonlinear problems with up to 1000 dimensional state vectors and compares the results empirically with various well-known inference methodologies.

  12. A stochastic model for simulation of the economic consequences of bovine virus diarrhoea virus infection in a dairy herd

    DEFF Research Database (Denmark)

    Sørensen, J.T.; Enevoldsen, Carsten; Houe, H.

    1995-01-01

    A dynamic, stochastic model simulating the technical and economic consequences of bovine virus diarrhoea virus (BVDV) infections for a dairy cattle herd, for use on a personal computer, was developed. The production and state changes of the herd were simulated by state changes of the individual cows and heifers. The model was controlled by 170 decision variables describing biologic and management variables, including 21 decision variables describing the effect of BVDV infection on the production of the individual animal. Two markedly different scenarios were simulated to demonstrate the behaviour of the developed model and the potentials of the applied...

  13. Efficient Co-Simulation of Multicore Systems

    DEFF Research Database (Denmark)

    Brock-Nannestad, Laust; Karlsson, Sven

    2011-01-01

    ...the hardware state of a multicore design while it is running on an FPGA. With minimal changes to the design, and using only the built-in JTAG programming and debugging facilities, we describe how to transfer the state from an FPGA to a simulator. We also show how the state can be transferred back from the simulator to the FPGA. Given that the design runs in real-time on the FPGA, the end result is speed improvements of orders of magnitude over traditional pure software simulation.

  14. Multi-objective optimisation with stochastic discrete-event simulation in retail banking: a case study

    Directory of Open Access Journals (Sweden)

    E Scholtz

    2012-12-01

    Full Text Available The cash management of an automated teller machine (ATM) is a multi-objective optimisation problem which aims to maximise the service level provided to customers at minimum cost. This paper focuses on improved cash management in a section of the South African retail banking industry, for which a decision support system (DSS) was developed. This DSS integrates four Operations Research (OR) methods: the vehicle routing problem (VRP), the continuous review policy for inventory management, the knapsack problem, and stochastic discrete-event simulation. The DSS was applied to an ATM network in the Eastern Cape, South Africa, to investigate 90 different scenarios. Results show that the application of a formal vehicle routing method, in conjunction with selected ATM reorder levels and a knapsack-based notes dispensing algorithm, consistently yields higher service levels at lower cost when compared to two other routing approaches. It is concluded that the use of vehicle routing methods is especially beneficial when the bank has substantial control over transportation cost.

  15. A stochastic simulation procedure for selecting herbicides with minimum environmental impact.

    Science.gov (United States)

    Giudice, Ben D; Massoudieh, Arash; Huang, Xinjiang; Young, Thomas M

    2008-01-15

    A mathematical environmental transport model of roadside-applied herbicides at the site scale (approximately 100 m) was stochastically applied using a Monte Carlo technique to simulate the concentrations of 33 herbicides in stormwater runoff. Field surveys, laboratory sorption data, and literature data were used to generate probability distribution functions for model input parameters to allow extrapolation of the model to the regional scale. Predicted concentrations were compared to EPA acute toxicity end points for aquatic organisms to determine the frequency of potentially toxic outcomes. Results are presented for three geographical regions in California and two highway geometries. For a given herbicide, frequencies of potential toxicity (FPTs) varied by as much as 36% between region and highway type. Of 33 herbicides modeled, 16 exhibit average FPTs greater than 50% at the maximum herbicide application rate, while 20 exhibit average FPTs less than 50% at the minimum herbicide application rate. Based on these FPTs and current usage statistics, selected herbicides were determined to be more environmentally acceptable than others in terms of acute toxicity and other documented environmental effects. This analysis creates a decision support system that can be used to evaluate the relative water quality impacts of varied herbicide application practices.
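
    The core Monte Carlo step, propagating parameter uncertainty to a concentration distribution and counting endpoint exceedances, is easy to sketch. All distributions and the endpoint below are invented placeholders for the study's actual transport model:

    ```python
    import numpy as np

    def frequency_of_potential_toxicity(n=100_000, endpoint_ug_per_L=5.0, seed=0):
        # Toy stand-in for the site-scale transport model: draw runoff
        # concentrations from an assumed lognormal, then count how often an
        # EPA-style acute endpoint is exceeded. All parameters are invented.
        rng = np.random.default_rng(seed)
        conc = rng.lognormal(mean=0.5, sigma=1.2, size=n)  # ug/L, illustrative
        return np.mean(conc > endpoint_ug_per_L)

    print(f"FPT ~ {frequency_of_potential_toxicity():.1%}")
    ```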

  16. Stochastic Simulations of Colloid-Facilitated Transport for Long Time and Space Scales

    Science.gov (United States)

    Painter, S.; Pickett, D.; Cvetkovic, V.

    2004-12-01

    Although it is widely recognized that naturally occurring inorganic colloids can potentially enhance the transport of radionuclides in the subsurface, comparatively few analyses have considered the long times and large travel distances associated with potential nuclear waste repositories. One-dimensional transient simulations in a stochastic Lagrangian framework are used to explore model and parameter sensitivities for colloid-facilitated transport at large scales. The model accounts for (i) advection and dispersion of radionuclides and colloids, (ii) radionuclide decay, (iii) exchange of radionuclides among colloid-bound, dissolved, and fixed substrate phases, and (iv) attachment and detachment of colloids to the fixed substrate. Kinetics of the exchanges between dissolved and colloid-bound states are addressed using linear and non-linear models. Generic sensitivity studies addressing both fractured and granular aquifers are considered, as is an example based on the groundwater transport pathway for the potential repository at Yucca Mountain, Nevada. In the absence of mitigating factors such as permanent filtration of colloids, transport may be enhanced over the situation without colloids, but only for strongly sorbing radionuclides. Mass transfer between solution and immobilized colloids makes colloid retardation relatively ineffective at reducing facilitated transport except when the retardation factor is large. Results are particularly sensitive to the rate of desorption from colloids, a parameter that is difficult to measure with short-duration experiments. This paper is an independent product of the CNWRA and does not necessarily reflect the view or regulatory position of the U.S. Nuclear Regulatory Commission.

  17. Stochastic calculus in physics

    International Nuclear Information System (INIS)

    Fox, R.F.

    1987-01-01

    The relationship of Ito-Stratonovich stochastic calculus to studies of weakly colored noise is explained. A functional calculus approach is used to obtain an effective Fokker-Planck equation for the weakly colored noise regime. In a smooth limit, this representation produces the Stratonovich version of the Ito-Stratonovich calculus for white noise. It also provides an approach to steady-state behavior for strongly colored noise. Numerical simulation algorithms are explored, and a novel suggestion is made for efficient and accurate simulation of white noise equations.
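
    As context for the simulation-algorithm discussion: the practical difference between the Ito and Stratonovich conventions shows up directly in the numerical scheme. A minimal sketch with an invented linear test equation (not from the paper); Euler-Maruyama converges to the Ito solution, while a Heun-type predictor-corrector converges to the Stratonovich one:

    ```python
    import numpy as np

    def euler_maruyama(a, b, x0, T=1.0, n=10_000, seed=0):
        # Ito-interpreted white-noise SDE: dX = a(X) dt + b(X) dW
        rng = np.random.default_rng(seed)
        dt = T / n
        x = x0
        for _ in range(n):
            dw = rng.normal(0.0, np.sqrt(dt))
            x += a(x) * dt + b(x) * dw
        return x

    def heun(a, b, x0, T=1.0, n=10_000, seed=0):
        # Predictor-corrector step; converges to the Stratonovich solution
        rng = np.random.default_rng(seed)
        dt = T / n
        x = x0
        for _ in range(n):
            dw = rng.normal(0.0, np.sqrt(dt))
            xp = x + a(x) * dt + b(x) * dw                       # predictor
            x += 0.5 * (a(x) + a(xp)) * dt + 0.5 * (b(x) + b(xp)) * dw
        return x

    a = lambda x: -x        # illustrative linear drift
    b = lambda x: 0.5 * x   # multiplicative noise, where Ito and Stratonovich differ
    print(euler_maruyama(a, b, 1.0), heun(a, b, 1.0))
    ```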

  18. Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm

    Science.gov (United States)

    Mathai, J.; Mujumdar, P.

    2017-12-01

    A key focus of this study is to develop a method that is physically consistent with the underlying hydrologic processes and can capture short-term characteristics of the daily hydrograph as well as the correlation of streamflow in the temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales such as daily scales. Also, simultaneous generation of synthetic flows at different sites in the same basin is required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of streamflow time series. The method has two steps: in step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising-limb increments randomly sampled from a Gamma distribution and the falling limb modelled as exponential recession; in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest neighbor (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN-based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. However, daily flow generated using the Markov chain approach is capable of generating a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of the daily hydrograph represent different physical processes, and hence they need to be modelled individually. Thus, our method combines the strengths of the two approaches. We show the utility of the method and the improvement over the traditional KNN by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
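
    As a rough sketch of step 1 of the proposed generator (transition probabilities and distribution parameters are invented here, and the study's step-2 KNN bootstrap is omitted):

    ```python
    import numpy as np

    def generate_daily_flow(n_days=365, p_rise_after_rise=0.5, p_rise_after_fall=0.2,
                            gamma_shape=2.0, gamma_scale=5.0, recession_k=0.15,
                            q0=10.0, seed=42):
        # Two-state Markov chain (rising/falling) with Gamma rising-limb
        # increments and exponential recession; all values are placeholders.
        rng = np.random.default_rng(seed)
        q = np.empty(n_days)
        q[0] = q0
        rising = False
        for t in range(1, n_days):
            p_rise = p_rise_after_rise if rising else p_rise_after_fall
            rising = rng.random() < p_rise
            if rising:
                q[t] = q[t - 1] + rng.gamma(gamma_shape, gamma_scale)  # rising limb
            else:
                q[t] = q[t - 1] * np.exp(-recession_k)                 # recession limb
        return q

    flows = generate_daily_flow()
    print(f"mean flow: {flows.mean():.1f}, peak flow: {flows.max():.1f}")
    ```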

  19. Subcellular Location of PKA Controls Striatal Plasticity: Stochastic Simulations in Spiny Dendrites

    Science.gov (United States)

    Oliveira, Rodrigo F.; Kim, MyungSook; Blackwell, Kim T.

    2012-01-01

    Dopamine release in the striatum has been implicated in various forms of reward dependent learning. Dopamine leads to production of cAMP and activation of protein kinase A (PKA), which are involved in striatal synaptic plasticity and learning. PKA and its protein targets are not diffusely located throughout the neuron, but are confined to various subcellular compartments by anchoring molecules such as A-Kinase Anchoring Proteins (AKAPs). Experiments have shown that blocking the interaction of PKA with AKAPs disrupts its subcellular location and prevents LTP in the hippocampus and striatum; however, these experiments have not revealed whether the critical function of anchoring is to locate PKA near the cAMP that activates it or near its targets, such as AMPA receptors located in the post-synaptic density. We have developed a large scale stochastic reaction-diffusion model of signaling pathways in a medium spiny projection neuron dendrite with spines, based on published biochemical measurements, to investigate this question and to evaluate whether dopamine signaling exhibits spatial specificity post-synaptically. The model was stimulated with dopamine pulses mimicking those recorded in response to reward. Simulations show that PKA colocalization with adenylate cyclase, either in the spine head or in the dendrite, leads to greater phosphorylation of DARPP-32 Thr34 and AMPA receptor GluA1 Ser845 than when PKA is anchored away from adenylate cyclase. Simulations further demonstrate that though cAMP exhibits a strong spatial gradient, diffusible DARPP-32 facilitates the spread of PKA activity, suggesting that additional inactivation mechanisms are required to produce spatial specificity of PKA activity. PMID:22346744
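
    The underlying engine of such models is stochastic simulation of reaction events. A minimal Gillespie-style sketch of a single reversible phosphorylation reaction, as a stand-in for the far larger reaction-diffusion system in the paper (rates invented, no spatial compartments):

    ```python
    import numpy as np

    def gillespie_phosphorylation(k_on=0.02, k_off=0.1, n_substrate=100,
                                  t_end=200.0, seed=7):
        # Minimal Gillespie SSA for S <-> S_p; an illustrative stand-in for
        # PKA-driven phosphorylation steps, with invented rate constants.
        rng = np.random.default_rng(seed)
        s, sp, t = n_substrate, 0, 0.0
        times, traj = [0.0], [0]
        while t < t_end:
            a1, a2 = k_on * s, k_off * sp        # reaction propensities
            a0 = a1 + a2
            if a0 == 0.0:
                break
            t += rng.exponential(1.0 / a0)       # waiting time to next event
            if rng.random() < a1 / a0:           # choose which reaction fires
                s, sp = s - 1, sp + 1
            else:
                s, sp = s + 1, sp - 1
            times.append(t)
            traj.append(sp)
        return np.array(times), np.array(traj)

    t, sp = gillespie_phosphorylation()
    print("final phosphorylated count:", sp[-1])
    ```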

  20. Stochastic plasma heating by electrostatic waves: a comparison between a particle-in-cell simulation and a laboratory experiment

    International Nuclear Information System (INIS)

    Fivaz, M.; Fasoli, A.; Appert, K.; Tran, T.M.; Tran, M.Q.; Skiff, F.

    1993-08-01

    Dynamical chaos is produced by the interaction between plasma particles and two electrostatic waves. Experiments performed in a linear magnetized plasma and a 1D particle-in-cell simulation agree qualitatively: above a threshold wave amplitude, ion stochastic diffusion and heating occur on a fast time scale. Self-consistency appears to limit the extent of the heating process. (author) 5 figs., 18 refs

  1. An inexact fuzzy two-stage stochastic model for quantifying the efficiency of nonpoint source effluent trading under uncertainty

    International Nuclear Information System (INIS)

    Luo, B.; Maqsood, I.; Huang, G.H.; Yin, Y.Y.; Han, D.J.

    2005-01-01

    Reduction of nonpoint source (NPS) pollution from agricultural lands is a major concern in most countries. One method to reduce NPS pollution is through land retirement programs. This method, however, may result in enormous economic costs, especially when large areas of cropland need to be retired. To reduce the cost, effluent trading can be coupled with land retirement programs. However, the trading efforts can also become inefficient due to various uncertainties existing in stochastic, interval, and fuzzy formats in agricultural systems. Thus, it is desirable to develop improved methods to effectively quantify the efficiency of potential trading efforts while considering those uncertainties. In this respect, this paper presents an inexact fuzzy two-stage stochastic programming model to tackle such problems. The proposed model can facilitate decision-making for implementing trading efforts for agricultural NPS pollution reduction through land retirement programs. The applicability of the model is demonstrated through a hypothetical effluent trading program within a subcatchment of the Lake Tai Basin in China. The study results indicate that the efficiency of the trading program is significantly influenced by precipitation amount, agricultural activities, and the level of discharge limits of pollutants. The results also show that the trading program will be more effective in low-precipitation years and with stricter discharge limits.

  2. Simulation of Precipitation Extremes Using a Stochastic Convective Parameterization in the NCAR CAM5 Under Different Resolutions

    Science.gov (United States)

    Wang, Yong; Zhang, Guang J.; He, Yu-Jun

    2017-12-01

    With the incorporation of the Plant-Craig stochastic deep convection scheme into the Zhang-McFarlane deterministic parameterization in the Community Atmospheric Model version 5 (CAM5), its impact on extreme precipitation at different resolutions (2°, 1°, and 0.5°) is investigated. CAM5 with the stochastic deep convection scheme (experiment (EXP)) simulates the precipitation extreme indices better than the standard version (control). At 2° and 1° resolutions, EXP increases high-percentile (>99th) daily precipitation over the United States, Europe, and China, resulting in better agreement with observations. However, at 0.5° resolution, due to enhanced grid-scale precipitation with increasing resolution, EXP overestimates extreme precipitation over the southeastern U.S. and eastern Europe. The reduced biases in EXP at each resolution benefit from the broader probability distribution function of convective precipitation intensity that is simulated. Among the EXP simulations at different resolutions, if the area over which input quantities for the convective closure are spatially averaged in the stochastic scheme is comparable, the modeled convective precipitation intensity decreases with increasing resolution when gridded to the same resolution, while the total precipitation is not sensitive to model resolution, exhibiting some degree of scale-awareness. Sensitivity tests show that, for the same resolution, increasing the size of the spatial averaging area decreases convective precipitation but increases grid-scale precipitation.

  3. Objective mapping of observed sub-surface mesoscale cold core eddy in the Bay of Bengal by stochastic inverse technique with tomographically simulated travel times

    Digital Repository Service at National Institute of Oceanography (India)

    Murty, T.V.R.; Rao, M.M.M.; Sadhuram, Y.; Sridevi, B.; Maneesha, K.; SujithKumar, S.; Prasanna, P.L.; Murthy, K.S.R.

    of Bengal during the south-west monsoon season, and explore the possibility of reconstructing the acoustic profile of the eddy by the Stochastic Inverse Technique. A simulation experiment on forward and inverse problems for the observed sound velocity perturbation field has

  4. Efficient SDH Computation In Molecular Simulations Data.

    Science.gov (United States)

    Tu, Yi-Cheng; Chen, Shaoping; Pandit, Sagar; Kumar, Anand; Grupcev, Vladimir

    2012-10-01

    Analysis of large particle or molecular simulation data is an integral part of the basic-science research community. It often involves computing functions such as point-to-point interactions of particles. The spatial distance histogram (SDH) is one such vital computation in scientific discovery. SDH is frequently used to compute the Radial Distribution Function (RDF), and it takes quadratic time to compute using the naive approach. Naive SDH computation is even more expensive because it is computed continuously over a certain period of time to analyze simulation systems. Tree-based SDH computation is a popular approach. In this paper we look at different tree-based SDH computation techniques and briefly discuss their performance. We present different strategies to improve the performance of these techniques. Specifically, we study density map (DM) based SDH computation techniques. A DM is essentially a grid dividing the simulated space into cells (3D cubes) of equal size (volume), which can be easily implemented by augmenting a Quad-tree (or Oct-tree) index. DMs are used in various configurations to compute SDH continuously over snapshots of the simulation system. The performance improvements using some of these configurations are presented in this paper. We also present the effect of utilizing the computational power of Graphics Processing Units (GPUs) in computing SDH.
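
    For reference, the naive quadratic-time baseline that the tree- and density-map-based techniques aim to beat can be written in a few lines; a sketch with toy particle data:

    ```python
    import numpy as np

    def sdh_naive(points, bin_width, n_bins):
        # Naive O(N^2) spatial distance histogram: bin every pairwise distance.
        hist = np.zeros(n_bins, dtype=np.int64)
        n = len(points)
        for i in range(n):
            d = np.linalg.norm(points[i + 1:] - points[i], axis=1)
            idx = np.minimum((d / bin_width).astype(int), n_bins - 1)
            np.add.at(hist, idx, 1)
        return hist

    rng = np.random.default_rng(0)
    pts = rng.random((2000, 3)) * 10.0   # toy particle positions in a 10^3 box
    print(sdh_naive(pts, bin_width=1.0, n_bins=18)[:5])
    ```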

  5. Efficient simulations of multicounter machines (Preliminary version)

    NARCIS (Netherlands)

    P.M.B. Vitányi (Paul)

    1982-01-01

    An oblivious 1-tape Turing machine can on-line simulate a multicounter machine in linear time and logarithmic space. This leads to a linear cost combinational logic network implementing the first n steps of a multicounter machine and also to a linear time/logarithmic space on-line

  6. The Relative Efficiencies of Research Universities of Science and Technology in China: Based on the Data Envelopment Analysis and Stochastic Frontier Analysis

    Science.gov (United States)

    Chuanyi, Wang; Xiaohong, Lv; Shikui, Zhao

    2016-01-01

    This paper applies data envelopment analysis (DEA) and stochastic frontier analysis (SFA) to explore the relative efficiency of China's research universities of science and technology. According to the findings, when talent training is the only output, the efficiency of research universities of science and technology is far lower than that of…

  7. TECHNICAL EFFICIENCY IN AGRICULTURAL PRODUCTION AND ACCESS TO CREDIT IN WEST BENGAL, INDIA: A STOCHASTIC FRONTIER APPROACH

    Directory of Open Access Journals (Sweden)

    Arindam Laha

    2013-10-01

    Full Text Available Access to credit significantly influences land leasing decisions, and thus ultimately has significant implications for efficiency in agricultural production. This paper attempts to examine the instrumental role of credit in ensuring efficiency in the context of West Bengal agriculture, by disaggregating the analysis into two mutually exclusive groups: bank customers and non-bank customers. Empirical analysis based on Stochastic Frontier Analysis confirms that farming households with access to formal credit are, in general, cultivating more efficiently by channelling credit into the utilization of agricultural inputs. In addition, contractual arrangements and operated farm size are found to be significant determinants of the observed variation in technical efficiency estimates for bank customers. In the context of the higher probability of access to credit for fixed-rent tenants and large farmers, it can be argued that farmers with access to credit achieved a higher efficiency level by adopting improved technology in agricultural production. Thus, access to institutional credit provides an incentive to farmers to adjust their operational land through tenurial contracts so as to bring about efficiency in agricultural production.

  8. Stochastic Simulation of Integrated Circuits with Nonlinear Black-Box Components via Augmented Deterministic Equivalents

    Directory of Open Access Journals (Sweden)

    MANFREDI, P.

    2014-11-01

    Full Text Available This paper extends recent literature results concerning the statistical simulation of circuits affected by random electrical parameters by means of the polynomial chaos framework. With respect to previous implementations, based on the generation and simulation of augmented and deterministic circuit equivalents, the modeling is extended to generic "black-box" multi-terminal nonlinear subcircuits describing complex devices, like those found in integrated circuits. Moreover, based on recently-published works in this field, a more effective approach to generate the deterministic circuit equivalents is implemented, thus yielding more compact and efficient models for nonlinear components. The approach is fully compatible with commercial (e.g., SPICE-type) circuit simulators and is thoroughly validated through the statistical analysis of a realistic interconnect structure with a 16-bit memory chip. The accuracy and the comparison against previous approaches are also carefully established.

  9. Deterministic and stochastic simulation and analysis of biochemical reaction networks the lactose operon example.

    Science.gov (United States)

    Yildirim, Necmettin; Kazanci, Caner

    2011-01-01

    A brief introduction to mathematical modeling of biochemical regulatory reaction networks is presented. Both deterministic and stochastic modeling techniques are covered with examples from enzyme kinetics, coupled reaction networks with oscillatory dynamics and bistability. The Yildirim-Mackey model for lactose operon is used as an example to discuss and show how deterministic and stochastic methods can be used to investigate various aspects of this bacterial circuit. © 2011 Elsevier Inc. All rights reserved.

  10. Requirements for efficient cell-type proportioning: regulatory timescales, stochasticity and lateral inhibition

    Science.gov (United States)

    Pfeuty, B.; Kaneko, K.

    2016-04-01

    The proper functioning of multicellular organisms requires the robust establishment of precise proportions between distinct cell types. This developmental differentiation process typically involves intracellular regulatory and stochastic mechanisms to generate cell-fate diversity, as well as intercellular signaling mechanisms to coordinate cell-fate decisions at the tissue level. We thus surmise that key insights about the developmental regulation of cell-type proportions can be captured by the modeling study of clustering dynamics in populations of inhibitory-coupled noisy bistable systems. This general class of dynamical system is shown to exhibit a very stable two-cluster state, but also metastability, collective oscillations and noise-induced state hopping, which can prevent the system from reaching a robust and well-proportioned clustered state in a timely and reliable manner. To circumvent these obstacles, or to avoid fine-tuning, we highlight a general strategy based on dual-time positive feedback loops, such as those mediated through transcriptional versus epigenetic mechanisms, which improves proportion regulation by coordinating early and flexible lineage priming with late and firm commitment. This result sheds new light on the respective and cooperative roles of multiple regulatory feedbacks, stochasticity and lateral inhibition in developmental dynamics.

  11. Efficient computation of discounted asymmetric information zero-sum stochastic games

    KAUST Repository

    Li, Lichun

    2015-12-15

    In asymmetric information zero-sum games, one player has superior information about the game over the other. Asymmetric information games are particularly relevant for security problems, e.g., where an attacker knows its own skill set or, alternatively, a system administrator knows the state of its resources. In such settings, the informed player is faced with the tradeoff of exploiting its superior information at the cost of revealing it. This tradeoff is typically addressed through randomization, in an effort to keep the uninformed player informationally off balance. A lingering issue is the explicit computation of such strategies. This paper, building on prior work for repeated games, presents an LP formulation to compute suboptimal strategies for the informed player in discounted asymmetric information stochastic games in which state transitions are not affected by the uninformed player. Furthermore, the paper presents bounds between the security level guaranteed by the suboptimal strategy and the optimal value. The results are illustrated on a stochastic intrusion detection problem.

  12. A non-linear and stochastic response surface method for Bayesian estimation of uncertainty in soil moisture simulation from a land surface model

    Directory of Open Access Journals (Sweden)

    F. Hossain

    2004-01-01

    Full Text Available This study presents a simple and efficient scheme for Bayesian estimation of uncertainty in soil moisture simulation by a Land Surface Model (LSM). The scheme is assessed within a Monte Carlo (MC) simulation framework based on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. A primary limitation of the GLUE method is the prohibitive computational burden imposed by uniform random sampling of the model's parameter distributions. Sampling is improved in the proposed scheme by stochastic modeling of the parameters' response surface that recognizes the non-linear deterministic behavior between soil moisture and land surface parameters. Uncertainty in soil moisture simulation (model output) is approximated through a Hermite polynomial chaos expansion of normal random variables that represent the model's parameter (model input) uncertainty. The unknown coefficients of the polynomial are calculated using a limited number of model simulation runs. The calibrated polynomial is then used as a fast-running proxy to the slower-running LSM to predict the degree of representativeness of a randomly sampled model parameter set. The scheme's sampling efficiency is evaluated through comparison with fully random MC sampling (the norm for GLUE) and the nearest-neighborhood sampling technique. The scheme was able to reduce the computational burden of random MC sampling for GLUE by 10%-70%. The scheme was also found to be about 10% more efficient than the nearest-neighborhood sampling method in predicting a sampled parameter set's degree of representativeness. GLUE based on the proposed sampling scheme did not alter the essential features of the uncertainty structure in soil moisture simulation. The scheme can potentially make GLUE uncertainty estimation for any LSM more efficient, as it does not impose any additional structural or distributional assumptions.
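
    The core idea, fitting a Hermite polynomial chaos surrogate to a small number of expensive model runs and then using it as a fast proxy, can be illustrated in one dimension. A sketch with an invented stand-in for the LSM:

    ```python
    import numpy as np
    from numpy.polynomial import hermite_e as He

    def fit_pc_surrogate(model, degree=4, n_train=50, seed=3):
        # Fit a 1-D probabilists'-Hermite polynomial chaos surrogate to a slow
        # model of one standard-normal parameter, by least squares on a limited
        # number of model runs. The paper does this for several LSM parameters.
        rng = np.random.default_rng(seed)
        xi = rng.standard_normal(n_train)          # sampled model parameter
        y = model(xi)                              # "expensive" model runs
        V = He.hermevander(xi, degree)             # design matrix of He_k(xi)
        coef, *_ = np.linalg.lstsq(V, y, rcond=None)
        return lambda x: He.hermeval(x, coef)      # fast-running proxy

    slow_model = lambda x: np.exp(0.3 * x) + 0.1 * x**2   # invented stand-in
    proxy = fit_pc_surrogate(slow_model)
    test = np.linspace(-2, 2, 5)
    print("max proxy error:", np.max(np.abs(proxy(test) - slow_model(test))))
    ```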

  13. Stochastic Simulation of Isotopic Exchange Mechanisms for Fe(II)-Catalyzed Recrystallization of Goethite

    Energy Technology Data Exchange (ETDEWEB)

    Zarzycki, Piotr; Rosso, Kevin M.

    2017-06-15

    Understanding Fe(II)-catalyzed transformations of Fe(III)-(oxyhydr)oxides is critical for correctly interpreting stable isotopic distributions and for predicting the fate of metal ions in the environment. Recent Fe isotopic tracer experiments have shown that goethite undergoes rapid recrystallization without phase change when exposed to aqueous Fe(II). The proposed explanation is oxidation of sorbed Fe(II) and reductive Fe(II) release coupled 1:1 by electron conduction through crystallites. Given the availability of two tracer exchange data sets that explore pH and particle size effects (e.g., Handler et al. Environ. Sci. Technol. 2014, 48, 11302-11311; Joshi and Gorski Environ. Sci. Technol. 2016, 50, 7315-7324), we developed a stochastic simulation that exactly mimics these experiments, while imposing the 1:1 constraint. We find that all data can be represented by this model, and unifying mechanistic information emerges. At pH 7.5 a rapid initial exchange is followed by slower exchange, consistent with mixed surface- and diffusion-limited kinetics arising from prominent particle aggregation. At pH 5.0, where aggregation and net Fe(II) sorption are minimal, exchange is quantitatively proportional to available particle surface area and the density of sorbed Fe(II) is more readily evident. Our analysis reveals a fundamental atom exchange rate of ~10^-5 Fe nm^-2 s^-1, commensurate with some of the reported reductive dissolution rates of goethite, suggesting Fe(II) release is the rate-limiting step in the conduction mechanism during recrystallization.
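
    A stripped-down illustration of the kind of tracer-exchange Monte Carlo described here (pool sizes and the swap rate are invented; the actual simulation mimics the published experiments in detail):

    ```python
    import numpy as np

    def tracer_exchange(n_solid=10_000, n_aq=2_000, rate=0.002,
                        n_steps=3000, seed=5):
        # Toy 1:1 atom-exchange Monte Carlo: each step, a Poisson number of
        # swap events exchanges a random solid Fe atom with a random aqueous
        # Fe atom. Aqueous Fe starts isotopically labeled; all numbers are
        # illustrative placeholders.
        rng = np.random.default_rng(seed)
        solid = np.zeros(n_solid, dtype=bool)   # False = common isotope
        aq = np.ones(n_aq, dtype=bool)          # True  = enriched tracer
        frac = np.empty(n_steps)
        for t in range(n_steps):
            for _ in range(rng.poisson(rate * n_solid)):
                i, j = rng.integers(n_solid), rng.integers(n_aq)
                solid[i], aq[j] = aq[j], solid[i]   # 1:1 coupled swap
            frac[t] = solid.mean()                  # tracer fraction in solid
        return frac

    f = tracer_exchange()
    print(f"tracer in solid: {f[-1]:.3f} (complete mixing ~ {2000/12000:.3f})")
    ```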

  14. Stochastic simulation experiment to assess radar rainfall retrieval uncertainties associated with attenuation and its correction

    Directory of Open Access Journals (Sweden)

    R. Uijlenhoet

    2008-03-01

    Full Text Available As rainfall constitutes the main source of water for the terrestrial hydrological processes, accurate and reliable measurement and prediction of its spatial and temporal distribution over a wide range of scales is an important goal for hydrology. We investigate the potential of ground-based weather radar to provide such measurements through a theoretical analysis of some of the associated observation uncertainties. A stochastic model of range profiles of raindrop size distributions is employed in a Monte Carlo simulation experiment to investigate the rainfall retrieval uncertainties associated with weather radars operating at X-, C-, and S-band. We focus in particular on the errors and uncertainties associated with rain-induced signal attenuation and its correction for incoherent, non-polarimetric, single-frequency, operational weather radars. The performance of two attenuation correction schemes, the (forward) Hitschfeld-Bordan algorithm and the (backward) Marzoug-Amayenc algorithm, is analyzed for both moderate (assuming a 50 km path length) and intense Mediterranean rainfall (for a 30 km path). A comparison shows that the backward correction algorithm is more stable and accurate than the forward algorithm (with a bias in the order of a few percent for the former, compared to tens of percent for the latter), provided reliable estimates of the total path-integrated attenuation are available. Moreover, the bias and root mean square error associated with each algorithm are quantified as a function of path-averaged rain rate and distance from the radar in order to provide a plausible order of magnitude for the uncertainty in radar-retrieved rain rates for hydrological applications.
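
    For orientation, a forward (Hitschfeld-Bordan-type) correction amounts to a gate-by-gate loop that reconstructs reflectivity from the accumulated two-way attenuation. A sketch with assumed k-Z power-law coefficients, which also illustrates why the forward scheme can become unstable at long ranges:

    ```python
    import numpy as np

    def forward_attenuation_correction(z_measured_dbz, dr_km=0.5, a=3e-4, b=0.78):
        # Gate-by-gate forward correction in the spirit of Hitschfeld-Bordan.
        # a, b are assumed k-Z power-law coefficients (k in dB/km, Z linear);
        # errors amplify with range, as the record notes.
        z_corr = np.array(z_measured_dbz, dtype=float)
        pia = 0.0                               # two-way path-integrated attenuation, dB
        for i in range(len(z_corr)):
            z_corr[i] += pia                    # undo attenuation accumulated so far
            z_lin = 10.0 ** (z_corr[i] / 10.0)
            k = a * z_lin ** b                  # specific attenuation at this gate
            pia += 2.0 * k * dr_km              # two-way increment over the gate
        return z_corr

    profile = np.linspace(45.0, 30.0, 60)       # synthetic attenuated profile, dBZ
    print(forward_attenuation_correction(profile)[[0, 29, 59]])
    ```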

  15. Dynamical Modelling, Stochastic Simulation and Optimization in the Context of Damage Tolerant Design

    Directory of Open Access Journals (Sweden)

    Sergio Butkewitsch

    2006-01-01

    Full Text Available This paper addresses the situation in which some form of damage is induced by cyclic mechanical stresses yielded by the vibratory motion of a system whose dynamical behaviour is, in turn, affected by the evolution of the damage. It is assumed that both phenomena, vibration and damage propagation, can be modeled by means of time-dependent equations of motion whose coupled solution is sought. A brief discussion of the damage tolerant design philosophy for aircraft structures is presented in the introduction, emphasizing the importance of the accurate definition of inspection intervals and, to this end, the need for a representative damage propagation model accounting for the actual loading environment in which a structure may operate. For the purpose of illustration, the finite element model of a cantilever beam is formulated, such that the stiffness matrix can be updated as a crack of an assumed initial length spreads in a given location of the beam according to a proper propagation model. This way, it is possible to track how the mechanical vibration, through its varying-amplitude stress field, activates and develops the fatigue failure mechanism. Conversely, it is also possible to address how the fatigue-induced stiffness degradation influences the motion of the beam, closing the loop for the analysis of a coupled vibration-degradation dynamical phenomenon. In possession of this working model, stochastic simulation of the beam behaviour is developed, aiming at the identification of the most influential parameters and at the characterization of the probability distributions of the relevant responses of interest. The knowledge of the parameters and responses allows for the formulation of optimization problems aiming at the improvement of the beam robustness with respect to the fatigue-induced stiffness degradation. The overall results are presented and analyzed, leading to the conclusions and outline of future

  16. Stochastic Simulation of Isotopic Exchange Mechanisms for Fe(II)-Catalyzed Recrystallization of Goethite.

    Science.gov (United States)

    Zarzycki, Piotr; Rosso, Kevin M

    2017-07-05

    Understanding Fe(II)-catalyzed transformations of Fe(III)-(oxyhydr)oxides is critical for correctly interpreting stable isotopic distributions and for predicting the fate of metal ions in the environment. Recent Fe isotopic tracer experiments have shown that goethite undergoes rapid recrystallization without phase change when exposed to aqueous Fe(II). The proposed explanation is oxidation of sorbed Fe(II) and reductive Fe(II) release coupled 1:1 by electron conduction through crystallites. Given the availability of two tracer exchange data sets that explore pH and particle size effects (e.g., Handler et al. Environ. Sci. Technol. 2014, 48, 11302-11311; Joshi and Gorski Environ. Sci. Technol. 2016, 50, 7315-7324), we developed a stochastic simulation that exactly mimics these experiments, while imposing the 1:1 constraint. We find that all data can be represented by this model, and unifying mechanistic information emerges. At pH 7.5 a rapid initial exchange is followed by slower exchange, consistent with mixed surface- and diffusion-limited kinetics arising from prominent particle aggregation. At pH 5.0, where aggregation and net Fe(II) sorption are minimal, exchange is quantitatively proportional to available particle surface area and the density of sorbed Fe(II) is more readily evident. Our analysis reveals a fundamental atom exchange rate of ~10^-5 Fe nm^-2 s^-1, commensurate with some of the reported reductive dissolution rates of goethite, suggesting Fe(II) release is the rate-limiting step in the conduction mechanism during recrystallization.

  17. Hit-And-Run enables efficient weight generation for simulation-based multiple criteria decision analysis

    NARCIS (Netherlands)

    Tervonen, Tommi; van Valkenhoef, Gert; Basturk, Nalan; Postmus, Douwe

    2013-01-01

    Models for Multiple Criteria Decision Analysis (MCDA) often separate per-criterion attractiveness evaluation from weighted aggregation of these evaluations across the different criteria. In simulation-based MCDA methods, such as Stochastic Multicriteria Acceptability Analysis, uncertainty in the

  18. Spectrum-efficient multi-channel design for coexisting IEEE 802.15.4 networks: A stochastic geometry approach

    KAUST Repository

    Elsawy, Hesham

    2014-07-01

    For networks with random topologies (e.g., wireless ad-hoc and sensor networks) and dynamically varying channel gains, choosing the long term operating parameters that optimize the network performance metrics is very challenging. In this paper, we use stochastic geometry analysis to develop a novel framework to design spectrum-efficient multi-channel random wireless networks based on the IEEE 802.15.4 standard. The proposed framework maximizes both spatial and time domain frequency utilization under channel gain uncertainties to minimize the number of frequency channels required to accommodate a certain population of coexisting IEEE 802.15.4 networks. The performance metrics are the outage probability and the self admission failure probability. We relax the single channel assumption that has been used traditionally in the stochastic geometry analysis. We show that the intensity of the admitted networks does not increase linearly with the number of channels and the rate of increase of the intensity of the admitted networks decreases with the number of channels. By using graph theory, we obtain the minimum required number of channels to accommodate a certain intensity of coexisting networks under a self admission failure probability constraint. To this end, we design a superframe structure for the coexisting IEEE 802.15.4 networks and a method for time-domain interference alignment. © 2002-2012 IEEE.

  19. Efficient Simulation of the Outage Probability of Multihop Systems

    KAUST Repository

    Ben Issaid, Chaouki

    2017-10-23

    In this paper, we present an efficient importance sampling estimator for the evaluation of the outage probability of multihop systems with channel-state-information-assisted amplify-and-forward relaying. The proposed estimator is endowed with the bounded relative error property. Simulation results show a significant reduction in the number of simulation runs compared to naive Monte Carlo.
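
    The idea behind such an estimator, biasing the sampling distribution toward the rare outage event and reweighting by the likelihood ratio, can be shown on a simplified stand-in model (independent exponential hop gains rather than the paper's exact system):

    ```python
    import numpy as np

    def outage_is(gamma_th=0.01, means=(1.0, 1.0, 1.0), scale=0.1,
                  n=100_000, seed=9):
        # Importance-sampling estimate of P(min_i g_i < gamma_th) for
        # independent exponential hop gains; a toy proxy for multihop outage.
        # scale < 1 biases sampling toward outage events.
        rng = np.random.default_rng(seed)
        means = np.asarray(means)
        biased = scale * means                       # sample from smaller means
        g = rng.exponential(biased, size=(n, len(means)))
        # likelihood ratio of true density over biased density, per sample
        w = np.prod((biased / means) * np.exp(g * (1 / biased - 1 / means)), axis=1)
        est = np.mean((g.min(axis=1) < gamma_th) * w)
        exact = 1.0 - np.exp(-gamma_th * np.sum(1.0 / means))  # closed form, for checking
        return est, exact

    print(outage_is())
    ```

    In this toy case the exact outage probability is available in closed form, which makes it easy to verify that the weighted estimator is unbiased while using far fewer samples than naive Monte Carlo would need at the same accuracy.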

  20. Calibration with respect to hydraulic head measurements in stochastic simulation of groundwater flow - a numerical experiment using MATLAB

    International Nuclear Information System (INIS)

    Eriksson, L.O.; Oppelstrup, J.

    1994-12-01

    A simulator for 2D stochastic continuum simulation and inverse modelling of groundwater flow has been developed. The simulator is well suited for method evaluation and what-if simulation, and is written in MATLAB. Conductivity fields are generated by unconditional simulation, conditional simulation on measured conductivities, and calibration on both steady-state head measurements and transient head histories. The fields can also include fracture zones and zones with different mean conductivities. Statistics of conductivity fields and particle travel times are recorded in Monte Carlo simulations. The calibration uses the pilot point technique, an inverse technique proposed by RamaRao and LaVenue. Several Kriging procedures are implemented, among them Kriging neighborhoods. In cases where the expectation of the log-conductivity in the truth field is known, the nonbias conditions can be omitted, which makes the variance in the conditionally simulated conductivity fields smaller. A simulation experiment, resembling the initial stages of a site investigation and devised in collaboration with SKB, is performed and interpreted. The results obtained in the present study show less uncertainty than in our preceding study. This is mainly due to the modification of the Kriging procedure, but also to the use of more data. Still, the large uncertainty in cases of sparse data is apparent. The variogram represents essential characteristics of the conductivity field. Thus, even unconditional simulations take account of important information. Significant improvements in variance by further conditioning will be obtained only as the number of data becomes much larger. 16 refs, 26 figs
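
    One building block of such a simulator, unconditional simulation of a correlated log-conductivity field, can be sketched via Cholesky factorization of an assumed exponential covariance model (grid size and parameters arbitrary; the record's tool is in MATLAB, Python is used here purely for illustration):

    ```python
    import numpy as np

    def unconditional_logK_field(nx=30, ny=30, mean=-5.0, sill=1.0,
                                 corr_len=8.0, seed=11):
        # Unconditional simulation of a 2-D log-conductivity field with an
        # exponential covariance, via Cholesky factorization; all parameter
        # values are illustrative placeholders.
        xs, ys = np.meshgrid(np.arange(nx), np.arange(ny))
        pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        cov = sill * np.exp(-d / corr_len)               # exponential covariance
        L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(pts)))  # jitter for stability
        rng = np.random.default_rng(seed)
        return (mean + L @ rng.standard_normal(len(pts))).reshape(ny, nx)

    field = unconditional_logK_field()
    print(f"field mean: {field.mean():.2f}, field std: {field.std():.2f}")
    ```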

  1. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    Energy Technology Data Exchange (ETDEWEB)

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial-scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing the mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of the existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate the findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming the results of the simulation. After the implementation of the modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, a concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern

  2. A simulation-based interval two-stage stochastic model for agricultural nonpoint source pollution control through land retirement

    International Nuclear Information System (INIS)

    Luo, B.; Li, J.B.; Huang, G.H.; Li, H.L.

    2006-01-01

    This study presents a simulation-based interval two-stage stochastic programming (SITSP) model for agricultural nonpoint source (NPS) pollution control through land retirement under uncertain conditions. The modeling framework was established by the development of an interval two-stage stochastic program, with its random parameters being provided by the statistical analysis of the simulation outcomes of a distributed water quality approach. The developed model can deal with the tradeoff between agricultural revenue and 'off-site' water quality concerns under random effluent discharge for a land retirement scheme, through minimizing the expected value of the long-term total economic and environmental cost. In addition, the uncertainties, presented as interval numbers in the agriculture-water system, can be effectively quantified with the interval programming. By subdividing the whole agricultural watershed into different zones, the most pollution-sensitive cropland can be identified and an optimal land retirement scheme can be obtained through the modeling approach. The developed method was applied to the Swift Current Creek watershed in Canada for soil erosion control through land retirement. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate the sediment information for this case study. The obtained results indicate that the total economic and environmental cost of the entire agriculture-water system can be limited within an interval value for the optimal land retirement schemes. Meanwhile, best and worst land retirement schemes were obtained for the study watershed under various uncertainties.

  3. Development and Application of an Efficient Method for the Solution of Stochastic Activity Networks with Deterministic Activities.

    Science.gov (United States)

    Malhis, Luai Mohammed

    1996-08-01

    Modeling and evaluation of communication and computing systems is an important undertaking. In many cases, large-scale systems are designed in an ad-hoc manner, with validation of (or disappointment regarding) system performance coming only after an implementation is made. This does not need to be the case. Modern modeling tools and techniques can yield accurate performance predictions that can be used in the design process. Stochastic activity networks (SANs), stochastic Petri nets (SPNs) and analytic solution methods permit specification and fast solution of many complex system models. To enhance the modeling power of SANs (SPNs), new steady-state analysis methods have been proposed for SAN (SPN) models that include non-exponential activities (transitions). The underlying stochastic process is a Markov regenerative process (MRP) when at most one non-exponential activity (transition) is enabled in each marking. Time-efficient algorithms for constructing the Markov regenerative process have been developed. However, the space required to solve such models is often extremely large. This largeness is due to the large number of transitions in the MRP. Traditional analysis methods require that all these transitions be stored in memory for efficient computation. If the size of available memory is smaller than that needed to store these transitions, a time-efficient computation is impossible using these methods. To use this class of SANs to model real systems, the space complexity of MRP analysis algorithms must be reduced. In this thesis, we propose a new steady-state analysis method that is both time and space efficient. The new method takes advantage of the structure of the underlying process to reduce both computation time and required memory. The performance of the proposed method is compared to that of existing methods using several SAN examples. In addition, the ability to model real systems using SANs that include exponential and deterministic activities is demonstrated by modeling

  4. Model calibration for building energy efficiency simulation

    International Nuclear Information System (INIS)

    Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus

    2014-01-01

    Highlights: • Developing a 3D model relating building architecture, occupancy and HVAC operation. • Two calibration stages developed, with the final model providing accurate results. • Using an onsite weather station to generate the weather data file in EnergyPlus. • Predicting the thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities of 20–27% related to the heat pump were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building areas. Firstly, this work involved developing a 3D model relating building architecture, occupancy and HVAC operation. Secondly, a two-level calibration methodology was applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of the Mean Bias Error (MBE) and the Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) for hourly heat pump electricity consumption varied within the following ranges: MBE (hourly) from −5.6% to 7.5% and CV(RMSE) (hourly) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings supplied by a water-to-water heat pump to the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis.
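
    The two calibration indices used here are standard and straightforward to reproduce; a minimal sketch (with invented sample data) of how MBE and CV(RMSE) are typically computed:

    ```python
    import numpy as np

    def mbe_percent(measured, simulated):
        # Mean Bias Error, %; sign conventions vary between authors
        m, s = np.asarray(measured, float), np.asarray(simulated, float)
        return 100.0 * np.sum(m - s) / np.sum(m)

    def cv_rmse_percent(measured, simulated):
        # Coefficient of Variation of the Root Mean Squared Error, %
        m, s = np.asarray(measured, float), np.asarray(simulated, float)
        rmse = np.sqrt(np.mean((m - s) ** 2))
        return 100.0 * rmse / np.mean(m)

    # Toy hourly heat-pump electricity series (invented numbers)
    meas = np.array([12.0, 15.0, 14.0, 9.0, 11.0])
    simu = np.array([11.5, 16.0, 13.0, 9.5, 10.0])
    print(f"MBE = {mbe_percent(meas, simu):.1f}%, "
          f"CV(RMSE) = {cv_rmse_percent(meas, simu):.1f}%")
    ```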

  5. Effects of deprotonation efficiency of protected units on line edge roughness and stochastic defect generation in chemically amplified resist processes for 11 nm node of extreme ultraviolet lithography

    Science.gov (United States)

    Kozawa, Takahiro; Santillan, Julius Joseph; Itani, Toshiro

    2014-11-01

    The deprotonation of polymer radical cations plays an important role in the acid generation in chemically amplified resists upon exposure to ionizing radiation. In this study, the effects of the deprotonation efficiency of protected units of a resist polymer on line edge roughness (LER) and stochastic defect generation were investigated. The suppression of stochastic effects is essential for the realization of high-volume production of semiconductor devices with an 11 nm critical dimension using extreme ultraviolet (EUV) lithography. By increasing the deprotonation efficiency, the chemical contrast (latent image quality) was improved; however, the protected unit number fluctuation did not significantly change. Consequently, LER and the probability of stochastic defect generation were reduced. This effect was prominent when the protection ratio was close to 100%.

  6. Stochastic strong ground motion simulations for the intermediate-depth earthquakes of the south Aegean subduction zone

    Science.gov (United States)

    Kkallas, Harris; Papazachos, Konstantinos; Boore, David; Margaris, Vasilis

    2015-04-01

    We have employed the stochastic finite-fault modelling approach of Motazedian and Atkinson (2005), as described by Boore (2009), for the simulation of Fourier spectra of the intermediate-depth earthquakes of the south Aegean subduction zone. The stochastic finite-fault method is a practical tool for simulating ground motions of future earthquakes, which requires region-specific source, path and site characterizations as input model parameters. For this reason we have used data from both acceleration-sensor and broadband velocity-sensor instruments from intermediate-depth earthquakes with magnitudes of M 4.5–6.7 that occurred in the south Aegean subduction zone. Source mechanisms for intermediate-depth events of the north Aegean subduction zone are either collected from published information or are constrained using the main faulting types from Kkallas et al. (2013). The attenuation parameters for the simulations were adopted from Skarladoudis et al. (2013) and are based on regression analysis of a response spectra database. The site amplification functions for each soil class were adopted from Klimis et al. (1999), while the kappa values were constrained from the analysis of the EGELADOS network data from Ventouzi et al. (2013). The investigation of stress-drop values was based on simulations performed with the EXSIM code for several ranges of stress-drop values and by comparing the results with the available Fourier spectra of intermediate-depth earthquakes. Significant differences regarding the strong-motion duration, which is determined from Husid plots (Husid, 1969), have been identified between the fore-arc and along-arc stations due to the effect of the low-velocity/low-Q mantle wedge on the seismic wave propagation. In order to estimate appropriate values for the duration of P-waves, we have automatically picked P-S durations on the available seismograms. For the S-wave durations we have used the part of the seismograms starting from the S-arrivals and ending at the
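
    At its core, the stochastic method shapes windowed Gaussian noise by a target Fourier amplitude spectrum. The point-source sketch below illustrates only that mechanism; the study itself uses the finite-fault EXSIM code, and the omega-squared spectral shape, corner frequency and duration here are illustrative assumptions.

```python
import numpy as np

# Simplified point-source sketch of the stochastic ground-motion method:
# windowed white noise is given a target spectral shape in the frequency domain.
def stochastic_acceleration(duration, dt, f_corner, seed=0):
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    noise = rng.standard_normal(n) * np.hanning(n)   # windowed Gaussian noise
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, dt)
    # omega-squared source shape: flat acceleration spectrum above f_corner
    target = (2 * np.pi * freqs) ** 2 / (1 + (freqs / f_corner) ** 2)
    spec *= target / np.maximum(np.abs(spec), 1e-12)  # impose target amplitude, keep phase
    return np.fft.irfft(spec, n)

acc = stochastic_acceleration(duration=20.0, dt=0.01, f_corner=0.5)
```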

  7. Simulating transmission and control of Taenia solium infections using a reed-frost stochastic model

    DEFF Research Database (Denmark)

    Kyvsgaard, Niels Chr.; Johansen, Maria Vang; Carabin, Hélène

    2007-01-01

    The transmission dynamics of the human-pig zoonotic cestode Taenia solium are explored with both deterministic and stochastic versions of a modified Reed-Frost model. This model, originally developed for microparasitic infections (i.e. bacteria, viruses and protozoa), assumes that random contacts...... humans eating under-cooked pork meat harbouring T. solium metacestodes. Deterministic models of each scenario were first run, followed by stochastic versions of the models to assess the likelihood of infection elimination in the small population modelled. The effects of three groups of interventions were...
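
    The chain-binomial mechanics of the classic Reed-Frost model are compact enough to sketch directly: each susceptible escapes infection from all current infectives independently, so the number of new cases is a binomial draw. The parameters below are illustrative and do not come from the paper's modified human-pig model.

```python
import numpy as np

# Minimal stochastic Reed-Frost chain-binomial sketch.
def reed_frost(S0, I0, p, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    S, I = S0, I0
    history = [(S, I)]
    for _ in range(n_steps):
        # each susceptible escapes all I infectives independently with prob (1 - p)
        new_I = rng.binomial(S, 1.0 - (1.0 - p) ** I)
        S, I = S - new_I, new_I
        history.append((S, I))
    return history

print(reed_frost(S0=100, I0=1, p=0.02, n_steps=10))
```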

  8. Stochastic stresses in granular matter simulated by dripping identical ellipses into plane silo

    DEFF Research Database (Denmark)

    Berntsen, Kasper Nikolaj; Ditlevsen, Ove Dalager

    2000-01-01

    proposed by Janssen in 1895. The stochastic Janssen factor model is shown to be fairly consistent with the observations from which the mean and the intensity of the white noise is estimated by the method of maximum likelihood using the properties of the gamma-distribution. Two wall friction coefficients...

  9. An Application of a Stochastic Semi-Continuous Simulation Method for Flood Frequency Analysis: A Case Study in Slovakia

    Science.gov (United States)

    Valent, Peter; Paquet, Emmanuel

    2017-09-01

    Reliable estimation of extreme flood characteristics has long been an active topic in hydrological research. Over the decades a large number of approaches and their modifications have been proposed and used, with methods utilizing continuous simulation of catchment runoff being the subject of the most intensive research in the last decade. In this paper a new and promising stochastic semi-continuous method is used to estimate extreme discharges in two mountainous Slovak catchments of the rivers Váh and Hron, in which snow-melt processes need to be taken into account. The SCHADEX method used couples a precipitation probabilistic model with a rainfall-runoff model, which is used both to continuously simulate catchment hydrological conditions and to transform generated synthetic rainfall events into corresponding discharges. The stochastic nature of the method means that a wide range of synthetic rainfall events is simulated on various historical catchment conditions, taking into account not only the saturation of soil, but also the amount of snow accumulated in the catchment. The results showed that the SCHADEX extreme discharge estimates with return periods of up to 100 years were comparable to those estimated by statistical approaches. In addition, two reconstructed historical floods with corresponding return periods of 100 and 1000 years were compared to the SCHADEX estimates. The results confirmed the usability of the method for estimating design discharges with a recurrence interval of more than 100 years and its applicability in Slovak conditions.
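
    The semi-continuous idea can be caricatured in a few lines: many synthetic rainfall events are superimposed on a large sample of simulated historical catchment states and pooled into one empirical distribution of peaks. The sketch below is schematic only; the runoff response, state sampling and event distribution are stand-in assumptions, not the actual SCHADEX components.

```python
import numpy as np

rng = np.random.default_rng(42)

def runoff_response(rain, saturation):
    # Hypothetical placeholder for a calibrated rainfall-runoff transformation.
    return rain * saturation

# States from a (stand-in) continuous simulation of catchment saturation.
saturations = rng.uniform(0.1, 0.9, size=365 * 30)

# Many synthetic rainfall events, each placed on a random historical state.
events = rng.exponential(scale=40.0, size=1_000_000)      # event rainfall (mm)
states = rng.choice(saturations, size=events.size)
peaks = runoff_response(events, states)

print(np.quantile(peaks, [0.99, 0.999, 0.9999]))          # upper tail of simulated peaks
```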

  10. Economic impact of clinical mastitis in a dairy herd assessed by stochastic simulation using different methods to model yield losses

    DEFF Research Database (Denmark)

    Hagnestam-Nielsen, Christel; Østergaard, Søren

    2009-01-01

    reflects the fact that in different stages of lactation, CM gives rise to different yield-loss patterns or postulates just one type of yield-loss pattern irrespective of when, during lactation, CM occurs. A dynamic and stochastic simulation model, SimHerd, was used to study the effects of CM in a herd...... a single yield-loss pattern irrespective of when, during the lactation period, the cow develops CM - was compared with a new modelling strategy in which CM was assumed to affect production differently depending on its lactational timing. The effect of the choice of reference level when estimating yield...

  11. The ground motion simulation of Kangding Mw6.0,2014 by the stochastic finite-fault model

    Science.gov (United States)

    Zhang, Lifang; Li, Shanyou; Lyu, Yuejun

    2018-01-01

    The November 22, 2014, Kangding strike-slip earthquake (Mw 6.0) occurred on the Southern Section of the Xianshuihe Fault Zone. Its epicenter was at 101.69°E, 30.26°N; the source mechanism has a strike of N33°E, a dip of 82°, and a rake of -9°. In this work, we simulated the ground motions caused by this earthquake with the stochastic finite-fault model (SFFM), including peak ground acceleration, peak ground velocity, and acceleration time histories.

  12. Efficient sliding spotlight SAR raw signal simulation of extended scenes

    Directory of Open Access Journals (Sweden)

    Huang Pingping

    2011-01-01

    Sliding spotlight mode is a novel synthetic aperture radar (SAR) imaging scheme that achieves an azimuth resolution better than stripmap mode and ground coverage larger than the spotlight configuration. However, its raw signal simulation of extended scenes may not be efficiently implemented in the two-dimensional (2D) Fourier-transformed domain. This article presents a novel sliding spotlight raw signal simulation approach derived from wide-beam SAR imaging modes. This approach can generate the sliding spotlight raw signal not only from raw data evaluated by simulators, but also from real data in the stripmap/spotlight mode. In order to obtain the desired raw data from the conventional stripmap/spotlight mode, azimuth time-varying filtering, implemented by de-rotation and low-pass filtering, is adopted. As the raw signal of extended scenes in the stripmap/spotlight mode can be efficiently evaluated in the 2D Fourier domain, the proposed approach provides an efficient sliding spotlight SAR simulator of extended scenes. Simulation results validate this efficient simulator.

  13. Stochastic Ratcheting on a Funneled Energy Landscape Is Necessary for Highly Efficient Contractility of Actomyosin Force Dipoles

    Science.gov (United States)

    Komianos, James E.; Papoian, Garegin A.

    2018-04-01

    Current understanding of how contractility emerges in disordered actomyosin networks of nonmuscle cells is still largely based on the intuition derived from earlier works on muscle contractility. In addition, in disordered networks, passive cross-linkers have been hypothesized to percolate force chains in the network, hence, establishing large-scale connectivity between local contractile clusters. This view, however, largely overlooks the free energy of cross-linker binding at the microscale, which, even in the absence of active fluctuations, provides a thermodynamic drive towards highly overlapping filamentous states. In this work, we use stochastic simulations and mean-field theory to shed light on the dynamics of a single actomyosin force dipole—a pair of antiparallel actin filaments interacting with active myosin II motors and passive cross-linkers. We first show that while passive cross-linking without motor activity can produce significant contraction between a pair of actin filaments, driven by thermodynamic favorability of cross-linker binding, a sharp onset of kinetic arrest exists at large cross-link binding energies, greatly diminishing the effectiveness of this contractility mechanism. Then, when considering an active force dipole containing nonmuscle myosin II, we find that cross-linkers can also serve as a structural ratchet when the motor dissociates stochastically from the actin filaments, resulting in significant force amplification when both molecules are present. Our results provide predictions of how actomyosin force dipoles behave at the molecular level with respect to filament boundary conditions, passive cross-linking, and motor activity, which can explicitly be tested using an optical trapping experiment.

  14. Stochastic Ratcheting on a Funneled Energy Landscape Is Necessary for Highly Efficient Contractility of Actomyosin Force Dipoles

    Directory of Open Access Journals (Sweden)

    James E. Komianos

    2018-04-01

    Current understanding of how contractility emerges in disordered actomyosin networks of nonmuscle cells is still largely based on the intuition derived from earlier works on muscle contractility. In addition, in disordered networks, passive cross-linkers have been hypothesized to percolate force chains in the network, hence, establishing large-scale connectivity between local contractile clusters. This view, however, largely overlooks the free energy of cross-linker binding at the microscale, which, even in the absence of active fluctuations, provides a thermodynamic drive towards highly overlapping filamentous states. In this work, we use stochastic simulations and mean-field theory to shed light on the dynamics of a single actomyosin force dipole—a pair of antiparallel actin filaments interacting with active myosin II motors and passive cross-linkers. We first show that while passive cross-linking without motor activity can produce significant contraction between a pair of actin filaments, driven by thermodynamic favorability of cross-linker binding, a sharp onset of kinetic arrest exists at large cross-link binding energies, greatly diminishing the effectiveness of this contractility mechanism. Then, when considering an active force dipole containing nonmuscle myosin II, we find that cross-linkers can also serve as a structural ratchet when the motor dissociates stochastically from the actin filaments, resulting in significant force amplification when both molecules are present. Our results provide predictions of how actomyosin force dipoles behave at the molecular level with respect to filament boundary conditions, passive cross-linking, and motor activity, which can explicitly be tested using an optical trapping experiment.

  15. Why simulation can be efficient: on the preconditions of efficient learning in complex technology based practices

    Directory of Open Access Journals (Sweden)

    Hofmann Bjørn

    2009-07-01

    Background: It is important to demonstrate learning outcomes of simulation in technology-based practices, such as in advanced health care. Although many studies show skills improvement and self-reported change to practice, there are few studies demonstrating patient outcome and societal efficiency. The objective of the study is to investigate if and why simulation can be effective and efficient in a hi-tech health care setting. This is important in order to decide whether and how to design simulation scenarios and outcome studies. Methods: Core theoretical insights in Science and Technology Studies (STS) are applied to analyze the field of simulation in hi-tech health care education. In particular, a process-oriented framework where technology is characterized by its devices, methods and its organizational setting is applied. Results: The analysis shows how advanced simulation can address core characteristics of technology beyond the knowledge of technology's functions. Simulation's ability to address skilful device handling as well as purposive aspects of technology provides a potential for effective and efficient learning. However, as technology is also constituted by organizational aspects, such as technology status, disease status, and resource constraints, the success of simulation depends on whether these aspects can be integrated in the simulation setting as well. This represents a challenge for future development of simulation and for demonstrating its effectiveness and efficiency. Conclusion: Assessing the outcome of simulation in education in hi-tech health care settings is worthwhile if core characteristics of medical technology are addressed. This challenges the traditional technical versus non-technical divide in simulation, as organizational aspects appear to be part of technology's core characteristics.

  16. Stochastic Investigation of Natural Frequency for Functionally Graded Plates

    Science.gov (United States)

    Karsh, P. K.; Mukhopadhyay, T.; Dey, S.

    2018-03-01

    This paper presents the stochastic natural frequency analysis of functionally graded material (FGM) plates by applying an artificial neural network (ANN) approach. Latin hypercube sampling is utilised to train the ANN model. The proposed algorithm for stochastic natural frequency analysis of FGM plates is validated and verified against the original finite element method and Monte Carlo simulation (MCS). The combined stochastic variation of input parameters such as elastic modulus, shear modulus, Poisson's ratio, and mass density is considered. A power law is applied to distribute the material properties across the thickness. The present ANN model reduces the required sample size and is found to be computationally efficient compared with conventional Monte Carlo simulation.
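
    A Latin hypercube design over the four stochastic inputs mentioned above can be generated with standard tools; the sketch below uses scipy's quasi-Monte Carlo module, with parameter bounds that are illustrative assumptions rather than values from the paper.

```python
from scipy.stats import qmc

# Latin hypercube design over four material inputs (E, G, nu, rho).
sampler = qmc.LatinHypercube(d=4, seed=1)
unit = sampler.random(n=128)                      # 128 points in the unit hypercube

lower = [60e9, 23e9, 0.25, 2500.0]                # illustrative bounds: E (Pa), G (Pa), nu, rho (kg/m3)
upper = [220e9, 85e9, 0.33, 8000.0]
X = qmc.scale(unit, lower, upper)                 # training inputs for the surrogate

# Each row of X would be fed to the finite element model to obtain a natural
# frequency, giving (X, y) pairs on which the ANN is trained.
```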

  17. Anticipating the Chaotic Behaviour of Industrial Systems Based on Stochastic, Event-Driven Simulations

    Science.gov (United States)

    Bruzzone, Agostino G.; Revetria, Roberto; Simeoni, Simone; Viazzo, Simone; Orsoni, Alessandra

    2004-08-01

    In logistics and industrial production, managers must deal with the impact of stochastic events to improve performance and reduce costs. In fact, production and logistics systems are generally designed treating some parameters as deterministic. While this assumption is mostly used for preliminary prototyping, it is sometimes also retained during the final design stage, especially for estimated parameters (i.e. market request). The proposed methodology can determine the impact of stochastic events on the system by evaluating the chaotic threshold level. Such an approach, based on the application of a new and innovative methodology, can be implemented to find the condition under which chaos makes the system become uncontrollable. Starting from problem identification and risk assessment, several classification techniques are used to carry out an effect analysis and contingency plan estimation. In this paper the authors illustrate the methodology with respect to a real industrial case: a production problem related to the logistics of distributed chemical processing.

  18. Recovering kinetics from a simplified protein folding model using replica exchange simulations: a kinetic network and effective stochastic dynamics.

    Science.gov (United States)

    Zheng, Weihua; Andrec, Michael; Gallicchio, Emilio; Levy, Ronald M

    2009-08-27

    We present an approach to recover kinetics from a simplified protein folding model at different temperatures using the combined power of replica exchange (RE), a kinetic network, and effective stochastic dynamics. While RE simulations generate a large set of discrete states with the correct thermodynamics, kinetic information is lost due to the random exchange of temperatures. We show how we can recover the kinetics of a 2D continuous potential with an entropic barrier by using RE-generated discrete states as nodes of a kinetic network. By choosing the neighbors and the microscopic rates between the neighbors appropriately, the correct kinetics of the system can be recovered by running a kinetic simulation on the network. We fine-tune the parameters of the network by comparison with the effective drift velocities and diffusion coefficients of the system determined from short-time stochastic trajectories. One of the advantages of the kinetic network model is that the network can be built on a high-dimensional discretized state space, which can consist of multiple paths not consistent with a single reaction coordinate.
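
    To make the kinetic-network idea concrete, the sketch below runs a continuous-time jump process on a toy three-state network: exponential waiting times drawn from the total exit rate, and jump destinations chosen in proportion to the microscopic rates. The states and rates are invented for illustration, not taken from the RE study.

```python
import numpy as np

# Minimal kinetic Monte Carlo sketch on a discrete-state network.
def kmc_trajectory(rates, start, t_end, seed=0):
    rng = np.random.default_rng(seed)
    state, t, path = start, 0.0, [(0.0, start)]
    while t < t_end:
        neighbors = list(rates[state])
        k = np.array([rates[state][j] for j in neighbors])
        k_tot = k.sum()
        t += rng.exponential(1.0 / k_tot)                    # exponential waiting time
        state = neighbors[rng.choice(len(k), p=k / k_tot)]   # jump proportional to rates
        path.append((t, state))
    return path

rates = {0: {1: 2.0}, 1: {0: 1.0, 2: 0.5}, 2: {1: 3.0}}     # toy 3-state network
print(kmc_trajectory(rates, start=0, t_end=5.0)[:5])
```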

  19. Workshop on quantum stochastic differential equations for the quantum simulation of physical systems

    Science.gov (United States)

    2016-09-22

    …a milestone as ARL becomes a significant player in the development of mathematical tools for quantum systems of interest to the Army.

  20. Stochastic differential equations and a biological system

    DEFF Research Database (Denmark)

    Wang, Chunyan

    1994-01-01

    . The simulated results are compared with the experimental data, and it is found that the Euler method is the simplest and most efficient method for the stochastic growth model considered. Estimation of the parameters of the growth model is based on the stochastic Kalman filter and a continuous Markov process......The purpose of this Ph.D. study is to explore the properties of a growth process. The study includes solving and simulating the growth process, which is described in terms of stochastic differential equations. The identification of the growth and variability parameters of the process based...... been developed. Their properties and the relationship between them are discussed. The evolution of a dynamic system or process is usually of great practical interest. In order to simulate the evolution of the process, alternative methods are used to get numerical solutions. In this study, Euler
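
    The Euler (Euler-Maruyama) discretisation referred to above takes only a few lines. The sketch below applies it to a logistic growth SDE, dX = rX(1 - X/K)dt + sigma*X dW, as one illustrative stochastic growth model; the equation and parameter values are assumptions, not the thesis's fitted model.

```python
import numpy as np

# Euler-Maruyama sketch for dX = r X (1 - X/K) dt + sigma X dW.
def euler_maruyama(x0, r, K, sigma, dt, n, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))                    # Brownian increment
        x[i + 1] = x[i] + r * x[i] * (1 - x[i] / K) * dt + sigma * x[i] * dW
    return x

path = euler_maruyama(x0=0.1, r=0.8, K=10.0, sigma=0.2, dt=0.01, n=2000)
```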

  1. Determining the energy performance of manually controlled solar shades: A stochastic model based co-simulation analysis

    International Nuclear Information System (INIS)

    Yao, Jian

    2014-01-01

    Highlights: • The driving factor for adjustment of manually controlled solar shades was determined. • A stochastic model for manual solar shades was constructed using the Markov method. • Co-simulation with EnergyPlus was carried out in BCVTB. • External shading, even manually controlled, should be used prior to LOW-E windows. • Previous studies on manual solar shades may overestimate energy savings. - Abstract: Solar shading devices play a significant role in reducing building energy consumption and maintaining a comfortable indoor condition. In this paper, a typical office building with internal roller shades in the hot summer and cold winter zone was selected to determine the driving factor of the control behavior of manual solar shades. Solar radiation was determined to be the major factor driving solar shading adjustment, based on field measurements and logit analysis, and a stochastic model for manually adjusted solar shades was then constructed using the Markov method. This model was used in BCVTB for further co-simulation with EnergyPlus to determine the impact of the control behavior of solar shades on energy performance. The results show that manually adjusted solar shades, whether located inside or outside, have relatively high energy-saving performance compared with clear-pane windows, while only external shades perform better than regularly used LOW-E windows. Simulation also indicates that using an ideal assumption of solar shade adjustment, as most studies do in building simulation, may lead to an overestimation of energy savings by about 16–30%. There is a need to improve occupants' actions on shades to respond more effectively to outdoor conditions in order to lower energy consumption, and this improvement can be easily achieved by using simple strategies as a guide to control manual solar shades
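
    A radiation-driven Markov model of the kind described can be sketched as a two-state chain whose switching probability is a logistic (logit) function of solar radiation. The coefficients and radiation profile below are hypothetical placeholders, not the fitted values from the field measurements.

```python
import numpy as np

# Two-state Markov sketch: 0 = shade up, 1 = shade down; switching probability
# depends on solar radiation through a logistic function with assumed a, b.
def simulate_shade(radiation, a=-6.0, b=0.012, seed=0):
    rng = np.random.default_rng(seed)
    state, states = 0, []
    for I in radiation:                                   # I: solar radiation (W/m2)
        p_close = 1.0 / (1.0 + np.exp(-(a + b * I)))
        if state == 0 and rng.random() < p_close:
            state = 1
        elif state == 1 and rng.random() < 1.0 - p_close:
            state = 0
        states.append(state)
    return np.array(states)

hourly_radiation = np.clip(800 * np.sin(np.linspace(0, np.pi, 24)), 0, None)
shade_states = simulate_shade(hourly_radiation)           # one simulated day
```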

  2. Searching for stable Si(n)C(n) clusters: combination of stochastic potential surface search and pseudopotential plane-wave Car-Parinello simulated annealing simulations.

    Science.gov (United States)

    Duan, Xiaofeng F; Burggraf, Larry W; Huang, Lingyu

    2013-07-22

    To find low-energy Si(n)C(n) structures out of hundreds to thousands of isomers we have developed a general method to search for stable isomeric structures that combines Stochastic Potential Surface Search and Pseudopotential Plane-Wave Density Functional Theory Car-Parinello Molecular Dynamics simulated annealing (PSPW-CPMD-SA). We enhanced the Sunders stochastic search method to generate random cluster structures used as seed structures for PSPW-CPMD-SA simulations. This method ensures that each SA simulation samples a different potential surface region to find the regional minimum structure. By iterations of this automated, parallel process on a high performance computer we located hundreds to more than a thousand stable isomers for each Si(n)C(n) cluster. Among these, five to 10 of the lowest energy isomers were further optimized using the B3LYP/cc-pVTZ method. We applied this method to Si(n)C(n) (n = 4-12) clusters and found the lowest energy structures, most not previously reported. By analyzing the bonding patterns of low energy structures of each Si(n)C(n) cluster, we observed that carbon segregations tend to form condensed conjugated rings while Si connects to unsaturated bonds at the periphery of the carbon segregation, as single atoms or clusters when n is small; when n is large, a silicon network spans over the carbon segregation region.

  3. Searching for Stable SinCn Clusters: Combination of Stochastic Potential Surface Search and Pseudopotential Plane-Wave Car-Parinello Simulated Annealing Simulations

    Directory of Open Access Journals (Sweden)

    Larry W. Burggraf

    2013-07-01

    To find low-energy SinCn structures out of hundreds to thousands of isomers we have developed a general method to search for stable isomeric structures that combines Stochastic Potential Surface Search and Pseudopotential Plane-Wave Density Functional Theory Car-Parinello Molecular Dynamics simulated annealing (PSPW-CPMD-SA). We enhanced the Sunders stochastic search method to generate random cluster structures used as seed structures for PSPW-CPMD-SA simulations. This method ensures that each SA simulation samples a different potential surface region to find the regional minimum structure. By iterations of this automated, parallel process on a high performance computer we located hundreds to more than a thousand stable isomers for each SinCn cluster. Among these, five to 10 of the lowest energy isomers were further optimized using the B3LYP/cc-pVTZ method. We applied this method to SinCn (n = 4–12) clusters and found the lowest energy structures, most not previously reported. By analyzing the bonding patterns of low energy structures of each SinCn cluster, we observed that carbon segregations tend to form condensed conjugated rings while Si connects to unsaturated bonds at the periphery of the carbon segregation, as single atoms or clusters when n is small; when n is large, a silicon network spans over the carbon segregation region.

  4. Handbook of simulation optimization

    CERN Document Server

    Fu, Michael C

    2014-01-01

    The Handbook of Simulation Optimization presents an overview of the state of the art of simulation optimization, providing a survey of the most well-established approaches for optimizing stochastic simulation models and a sampling of recent research advances in theory and methodology. Leading contributors cover such topics as discrete optimization via simulation, ranking and selection, efficient simulation budget allocation, random search methods, response surface methodology, stochastic gradient estimation, stochastic approximation, sample average approximation, stochastic constraints, variance reduction techniques, model-based stochastic search methods and Markov decision processes. This single volume should serve as a reference for those already in the field and as a means for those new to the field for understanding and applying the main approaches. The intended audience includes researchers, practitioners and graduate students in the business/engineering fields of operations research, management science,...

  5. SIMULATIONS OF BOOSTER INJECTION EFFICIENCY FOR THE APS-UPGRADE

    Energy Technology Data Exchange (ETDEWEB)

    Calvey, J.; Borland, M.; Harkay, K.; Lindberg, R.; Yao, C.-Y.

    2017-06-25

    The APS-Upgrade will require the injector chain to provide high single bunch charge for swap-out injection. One possible limiting factor to achieving this is an observed reduction of injection efficiency into the booster synchrotron at high charge. We have simulated booster injection using the particle tracking code elegant, including a model for the booster impedance and beam loading in the RF cavities. The simulations point to two possible causes for reduced efficiency: energy oscillations leading to losses at high dispersion locations, and a vertical beam size blowup caused by ions in the Particle Accumulator Ring. We also show that the efficiency is much higher in an alternate booster lattice with smaller vertical beta function and zero dispersion in the straight sections.

  6. Extreme flood estimation for a large catchment by coordinated stochastic rainfall-runoff simulations of its main tributaries

    Science.gov (United States)

    Paquet, Emmanuel

    2014-05-01

    The SCHADEX method for extreme flood estimation, proposed by Paquet et al. (2013), is a so-called "semi-continuous" stochastic simulation method in that flood events are simulated on an event basis and are superimposed on a continuous simulation of the catchment saturation hazard using rainfall-runoff modeling. A complete CDF of daily discharges and flood peak values is built up to extreme quantiles with several million simulated events. The application of SCHADEX to a large catchment (area greater than 5000 km²), or to one whose floods are affected by significant hydraulic effects (flood plains, artificial reservoirs), can pose a problem with some of the hypotheses of the method (e.g. hydro-climatic homogeneity within the catchment, no significant hydraulic damping of flood peaks). To overcome this limitation, a coordinated stochastic simulation method is proposed. Firstly, several tributaries of the large catchment are selected based on their hydro-climatic features (among them homogeneity) and the availability of data. Then the main SCHADEX components are set up for these catchments (hydrological model, probabilistic model for extreme rainfall, and peak-to-volume ratio), as well as for the large one. The SCHADEX stochastic simulation is run for the large catchment. Each randomly drawn precipitation event (at the large catchment's scale) is disaggregated to the catchment of each tributary thanks to the observed rain field shape of a historical day with a similar synoptic situation. The rain fields are reconstructed at a 1 km² resolution for the whole area thanks to the SPAZM method (Gottardi, 2012). The synoptic situations are characterized thanks to a rainfall-oriented classification (Garavaglia, 2010). For a given precipitation event, all the catchments' saturations (and snowpack conditions) are kept synchronous by superimposing the simulated event on the conditions of the same day for all catchments. Several million flood events are simulated this way. At the end

  7. Improving the efficiency of the cardiac catheterization laboratories through understanding the stochastic behavior of the scheduled procedures.

    Science.gov (United States)

    Stepaniak, Pieter S; Soliman Hamad, Mohamed A; Dekker, Lukas R C; Koolen, Jacques J

    2014-01-01

    In this study, we sought to analyze the stochastic behavior of Catheterization Laboratory (Cath Lab) procedures in our institution. Statistical models may help to improve estimated case durations to support management in the cost-effective use of expensive surgical resources. We retrospectively analyzed all the procedures performed in the Cath Labs in 2012. The duration of procedures is strictly positive (larger than zero) and mostly has a large minimum duration. Because of the strictly positive character of the Cath Lab procedures, a fit of a lognormal model may be desirable. Having a minimum duration requires an estimate of the threshold (shift) parameter of the lognormal model. Therefore, the 3-parameter lognormal model is interesting. To avoid heterogeneous groups of observations, we tested every group-cardiologist-procedure combination for the normal, 2- and 3-parameter lognormal distributions. The total number of elective and emergency procedures performed was 6,393 (8,186 h). The final analysis included 6,135 procedures (7,779 h). Electrophysiology (intervention) procedures fit the 3-parameter lognormal model in 86.1% (80.1%) of cases. Using Friedman test statistics, we conclude that the 3-parameter lognormal model is superior to the 2-parameter lognormal model. Furthermore, the 2-parameter lognormal is superior to the normal model. Cath Lab procedures are well modelled by lognormal models. This information helps to improve and refine Cath Lab schedules and hence their efficient use.
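
    Fitting the 2- and 3-parameter lognormal models described above is routine with scipy, where the location parameter plays the role of the threshold (shift); fixing it at zero recovers the 2-parameter model. The durations below are synthetic stand-ins, not the Cath Lab data.

```python
import numpy as np
from scipy import stats

# Synthetic procedure durations (minutes) with a 20-minute minimum, for illustration.
rng = np.random.default_rng(7)
durations = 20.0 + rng.lognormal(mean=3.0, sigma=0.5, size=500)

s3, loc3, scale3 = stats.lognorm.fit(durations)             # 3-parameter fit (loc = shift)
s2, loc2, scale2 = stats.lognorm.fit(durations, floc=0.0)   # 2-parameter fit (loc fixed at 0)

print(f"estimated threshold (shift): {loc3:.1f} min")
```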

  8. Synthetic Ground-Motion Simulation Using a Spatial Stochastic Model with Slip Self-Similarity: Toward Near-Source Ground-Motion Validation

    Directory of Open Access Journals (Sweden)

    Ya-Ting Lee

    2016-06-01

    Near-fault ground motion is key to understanding the seismic hazard along a fault and is a challenge for the ground-motion prediction equation (GMPE) approach. This paper presents a stochastic-slip-scaling source model, a spatial stochastic model with slipped-area scaling, for ground motion simulation. We considered the near-fault ground motion of the 1999 Chi-Chi earthquake in Taiwan, a massive and disastrous near-fault earthquake, proposed by Ma et al. (2001) as a reference for validation. Three scenario source models, including the developed stochastic-slip-scaling source model, a mean-slip model and a characteristic-asperity model, were used for the near-fault ground motion examination. We simulated synthetic ground motion through 3D waveforms and validated these simulations using observed data and the GMPE for Taiwan earthquakes. The mean-slip and characteristic-asperity scenario source models over-predicted the near-fault ground motion. The stochastic-slip-scaling model proposed in this paper approximates the near-fault motion more accurately, as validated against the GMPE and observations. This is the first study to incorporate slipped-area scaling in a stochastic slip model. The proposed model can generate scenario earthquakes for predicting ground motion.

  9. An efficiency improvement in warehouse operation using simulation analysis

    Science.gov (United States)

    Samattapapong, N.

    2017-11-01

    In general, industry requires an efficient system for warehouse operation. There are many important factors that must be considered when designing an efficient warehouse system. The most important is an effective warehouse operation system that can help transfer raw material, reduce costs and support transportation. For all these reasons, researchers are interested in studying work systems and warehouse distribution. We start by collecting the important data for storage, such as information on products, information on size and location, information on data collection and information on production, and use all this information to build a simulation model in the Flexsim® simulation software. The simulation analysis found that the conveyor belt was a bottleneck in the warehouse operation. Therefore, many scenarios to improve that problem were generated and tested through the simulation analysis process. The results showed that the average queuing time was reduced from 89.8% to 48.7% and the product transport capability increased from 10.2% to 50.9%. Thus, it can be stated that this is the best method for increasing efficiency in the warehouse operation.

  10. Technical efficiency of rural primary health care system for diabetes treatment in Iran: a stochastic frontier analysis.

    Science.gov (United States)

    Qorbani, Mostafa; Farzadfar, Farshad; Majdzadeh, Reza; Mohammad, Kazem; Motevalian, Abbas

    2017-01-01

    Our aim was to explore the technical efficiency (TE) of the Iranian rural primary healthcare (PHC) system for the diabetes treatment coverage rate using stochastic frontier analysis (SFA), as well as to examine the strength and significance of the effect of human resources density on diabetes treatment. In the SFA model, the diabetes treatment coverage rate, as an output, is a function of health system inputs (Behvarz worker density, physician density, and rural health center density) and non-health system inputs (urbanization rate, median age of population, and wealth index) as a set of covariates. Data about the rate of self-reported diabetes treatment coverage were obtained from the Non-Communicable Disease Surveillance Survey, data about health system inputs were collected from the health census database, and data about non-health system inputs were collected from the census data and a household survey. In 2008, the rate of diabetes treatment coverage was 67% (95% CI: 63%-71%) nationally, and at the provincial level it varied from 44% to 81%. The TE score at the national level was 87.84%, with considerable variation across provinces (from 59.65% to 98.28%). Among health system and non-health system inputs, only the Behvarz density (per 1000 population) was significantly associated with diabetes treatment coverage (β (95% CI): 0.50 (0.29-0.70), p < 0.001). Our findings show that although the rural PHC system can be considered efficient in diabetes treatment at the national level, a wide variation exists in TE at the provincial level. Because the only variable that is a predictor of TE is the Behvarz density, the PHC system may extend diabetes treatment coverage by using this group of health care workers.

  11. Stochastic modeling of soundtrack for efficient segmentation and indexing of video

    Science.gov (United States)

    Naphade, Milind R.; Huang, Thomas S.

    1999-12-01

    Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is that of multimedia analysis and understanding. The capabilities of audio analysis in particular for video data management are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track. This analysis is then applied to the segmentation and indexing of movies. We build models for some interesting events in the motion picture soundtrack. The models built include music, human speech and silence. We propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio events. Using these models we segment and index the soundtrack. A practical problem in motion picture soundtracks is that the audio in the track is of a composite nature. This corresponds to the mixing of sounds from different sources. Speech in the foreground and music in the background are common examples. The coexistence of multiple individual audio sources forces us to model such events explicitly. Experiments reveal that explicit modeling gives better results than modeling individual audio events separately.

  12. Optimization instances for deterministic and stochastic problems on energy efficient investments planning at the building level.

    Science.gov (United States)

    Cano, Emilio L; Moguerza, Javier M; Alonso-Ayuso, Antonio

    2015-12-01

    Optimization instances relate to the input and output data stemming from optimization problems in general. Typically, an optimization problem consists of an objective function to be optimized (either minimized or maximized) and a set of constraints. Thus, objective and constraints are jointly a set of equations in the optimization model. Such equations are a combination of decision variables and known parameters, which are usually related to a set domain. When this combination is a linear combination, we are facing a classical Linear Programming (LP) problem. An optimization instance is related to an optimization model. We refer to that model as the Symbolic Model Specification (SMS), containing all the sets, variables, and parameters symbols and relations. Thus, a whole instance is composed of the SMS, the elements in each set, the data values for all the parameters, and, eventually, the optimal decisions resulting from the optimization solution. This data article contains several optimization instances from a real-world optimization problem relating to investment planning in energy-efficient technologies at the building level.
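
    To illustrate the LP structure described above, the toy instance below minimizes investment cost subject to a minimum annual energy-savings target. Every coefficient is invented for illustration; none comes from the published instances.

```python
from scipy.optimize import linprog

# Choose investment levels x1, x2 in two energy-efficiency technologies.
c = [300.0, 500.0]            # cost per unit of each technology
A_ub = [[-40.0, -75.0]]       # -(kWh saved per unit) <= -target
b_ub = [-2000.0]              # require at least 2000 kWh/yr of savings
bounds = [(0, 30), (0, 20)]   # availability limits per technology

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)         # optimal units per technology and total cost
```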

  13. Stochastic quantization and supersymmetry

    International Nuclear Information System (INIS)

    Kirschner, R.

    1984-04-01

    In recent years, interest in stochastic quantization has increased. The method of quantization by stochastic relaxation processes was proposed by Parisi and Wu, inspired by the extensive application of Monte Carlo simulations to quantum systems. Starting with the classical equations of motion of the system (field theory) and adding random force terms - the random force obeys a Gaussian distribution (white noise) - stochastic differential equations are obtained, in this context called Langevin equations, which are a central object in the theory of stochastic processes. (author)

  14. Estimating oil pollution risk in environmentally sensitive areas of petrochemical terminals based on a stochastic numerical simulation.

    Science.gov (United States)

    Xie, Cheng; Deng, Jian; Zhuang, Yuan; Sun, Hao

    2017-10-15

    This paper presents a method based on an oceanic current model and an oil spill model to evaluate the pollution risk to sensitive resources when oil spills occur. Moreover, this study proposes a novel impact index based on risk theory to improve risk assessment accuracy. The impact probability and the first impact time of the oil spill are calculated through a stochastic numerical simulation. The risk assessment content is enriched by establishing an impact model that considers the sensitivity index and spillage. Finally, the risk score of sensitive resources in an oil spill accident is visualized to support a scientific and effective protection priority order in a contamination response strategy. This study focuses on integrating every possible impact factor that plays a role in risk assessment and helps to provide better theoretical support for protecting sensitive resources.

  15. Building Performance Simulation tools for planning of energy efficiency retrofits

    DEFF Research Database (Denmark)

    Mondrup, Thomas Fænø; Karlshøj, Jan; Vestergaard, Flemming

    2014-01-01

    Designing energy efficiency retrofits for existing buildings will bring environmental, economic, social, and health benefits. However, selecting specific retrofit strategies is complex and requires careful planning. In this study, we describe a methodology for adopting Building Performance...... to energy efficiency retrofits in social housing. To generate energy savings, we focus on optimizing the building envelope. We evaluate alternative building envelope actions using procedural solar radiation and daylight simulations. In addition, we identify the digital information flow and the information...... Simulation (BPS) tools as energy and environmentally conscious decision-making aids. The methodology has been developed to screen buildings for potential improvements and to support the development of retrofit strategies. We present a case study of a Danish renovation project, implementing BPS approaches...

  16. Measuring the Technical Efficiency of Farms Producing Environmental Output: Parametric and Semiparametric Estimation of Multi-output Stochastic Ray Production Frontiers

    OpenAIRE

    Tomasz Gerard Czekaj

    2013-01-01

    This paper investigates the technical efficiency of Polish dairy farms producing environmental output using the stochastic ray function to model multi-output – multi-input technology. Two general models are considered. One which neglects the provision of environmental output and one which accounts for such output. Three different proxies of environmental output are discussed: the ratio of permanent grassland (including rough grazing) to total agricultural area, the total area of permanent gra...

  17. Environmental Barrier Coating Fracture, Fatigue and High-Heat-Flux Durability Modeling and Stochastic Progressive Damage Simulation

    Science.gov (United States)

    Zhu, Dongming; Nemeth, Noel N.

    2017-01-01

    Advanced environmental barrier coatings will play an increasingly important role in future gas turbine engines because of their ability to protect emerging lightweight SiC/SiC ceramic matrix composite (CMC) engine components, further raising engine operating temperatures and performance. Because environmental barrier coating systems are critical to the performance, reliability and durability of these hot-section ceramic engine components, a prime-reliant coating system along with an established life design methodology is required for the insertion of hot-section ceramic components into engine service. In this paper, we first summarize observations of high-temperature, high-heat-flux environmental degradation and failure mechanisms of environmental barrier coating systems in laboratory tests simulating the engine environment. In particular, the coating surface cracking morphologies and associated subsequent delamination mechanisms under engine-level high-heat-flux, combustion steam, and mechanical creep and fatigue loading conditions are discussed. The EBC composition and architecture improvements based on advanced high-heat-flux environmental testing, and the modeling advances based on the integrated Finite Element Analysis Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program, are also highlighted. The stochastic progressive damage simulation successfully predicts the mud-flat damage pattern in EBCs on coated 3-D specimens and in a 2-D model of the through-the-thickness cross-section. A 2-parameter Weibull distribution was assumed in characterizing the stochastic strength response of the coating layers, and the formation of damage was modeled accordingly. The damage initiation and coalescence into progressively smaller mud-flat crack cells was demonstrated. A coating life prediction framework may be realized by examining the surface crack initiation and delamination propagation in conjunction with environmental

  18. Deorbit efficiency assessment through numerical simulation of electromagnetic tether devices

    Directory of Open Access Journals (Sweden)

    Alexandru IONEL

    2016-03-01

    This paper examines the deorbit efficiency of an electromagnetic tether deorbit device when used to deorbit an upper stage from low Earth orbit at the end of its mission. This is done via a numerical simulation in Matlab R2013a, using ode45, taking into account perturbations of the upper stage's trajectory. The perturbations taken into account are atmospheric drag, third-body attraction (Sun and Moon), and Earth's gravitational potential expanded into spherical harmonics.

  19. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for the execution of agent-based models - from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize the underlying hardware

  20. Modeling and Simulation of High Dimensional Stochastic Multiscale PDE Systems at the Exascale

    Energy Technology Data Exchange (ETDEWEB)

    Kevrekidis, Ioannis [Princeton Univ., NJ (United States)

    2017-03-22

    The thrust of the proposal was to exploit modern data-mining tools in a way that will create a systematic, computer-assisted approach to the representation of random media -- and also to the representation of the solutions of an array of important physicochemical processes that take place in/on such media. A parsimonious representation/parametrization of the random media links directly (via uncertainty quantification tools) to good sampling of the distribution of random media realizations. It also links directly to modern multiscale computational algorithms (like the equation-free approach that has been developed in our group) and plays a crucial role in accelerating the scientific computation of solutions of nonlinear PDE models (deterministic or stochastic) in such media – both solutions in particular realizations of the random media, and estimation of the statistics of the solutions over multiple realizations (e.g. expectations).

  1. Improving computational efficiency of Monte Carlo simulations with variance reduction

    International Nuclear Information System (INIS)

    Turner, A.; Davis, A.

    2013-01-01

    CCFE perform Monte Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows (WW). The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance, but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
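
    The weight-window mechanics discussed above reduce to two operations: splitting particles whose weight rises above the window, and playing Russian roulette with those that fall below it. The sketch below shows that logic in isolation; the window bounds and survival weight are chosen purely for illustration, and this is not the MCNP implementation.

```python
import numpy as np

def apply_weight_window(weight, w_low, w_high, rng):
    # Above the window: split into m copies of reduced weight.
    if weight > w_high:
        m = int(np.ceil(weight / w_high))
        return [weight / m] * m
    # Below the window: Russian roulette toward a survival weight.
    if weight < w_low:
        w_survive = 0.5 * (w_low + w_high)       # assumed survival weight
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []                                 # particle killed
    return [weight]                               # inside the window: unchanged

rng = np.random.default_rng(0)
print(apply_weight_window(12.0, w_low=0.5, w_high=2.0, rng=rng))
```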

  2. A stochastic model for the simulation of management decisions in dairy herds, with special reference to production, reproduction, culling and income

    NARCIS (Netherlands)

    Dijkhuizen, A.A.; Stelwagen, J.; Renkema, J.A.

    1986-01-01

    A stochastic simulation model was developed to study management decisions in dairy herds. The primary purpose of this model is to quantify the economic effects of different culling policies with respect to productive and reproductive failure. Each simulated herd consists of a fixed number of up

  3. Efficient Control Law Simulation for Multiple Mobile Robots

    Energy Technology Data Exchange (ETDEWEB)

    Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.; Kwok, K.S.

    1998-10-06

    In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N²) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law which involves only the distance and bearing to the closest neighbor robot will be considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
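
    The nearest-neighbor query that dominates this cost can be illustrated with a KD-tree, which likewise gives O(N log N) construction and O(log N) per query. The paper's own hierarchical method may differ, so the sketch below is an analogous illustration rather than a reproduction.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
positions = rng.uniform(0, 1000.0, size=(100_000, 2))   # robot positions (m)

tree = cKDTree(positions)
dists, idx = tree.query(positions, k=2)   # k=2: the first hit is the robot itself
nearest = idx[:, 1]

# Distance and bearing to the closest neighbor, the inputs of the control law.
bearing = np.arctan2(positions[nearest, 1] - positions[:, 1],
                     positions[nearest, 0] - positions[:, 0])
# A control law would now map (dists[:, 1], bearing) to each robot's velocity.
```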

  4. Stochastic simulation of pitting degradation of multi-barrier waste container in the potential repository at Yucca Mountain

    International Nuclear Information System (INIS)

    Lee, J.H.; Atkins, J.E.; Andrews, R.W.

    1995-01-01

    A detailed stochastic waste package degradation simulation model was developed incorporating the humid-air and aqueous general and pitting corrosion models for the carbon steel corrosion-allowance outer barrier and aqueous pitting corrosion model for the Alloy 825 corrosion-resistant inner barrier. The uncertainties in the individual corrosion models were also incorporated to capture the variability in the corrosion degradation among waste packages and among pits in the same waste package. Within the scope of assumptions employed in the simulations, the corrosion modes considered, and the near-field conditions from the drift-scale thermohydrologic model, the results of the waste package performance analyses show that the current waste package design appears to meet the 'controlled design assumption' requirement of waste package performance, which is currently defined as having less than 1% of waste packages breached at 1,000 years. It was shown that, except for the waste packages that fail early, pitting corrosion of the corrosion-resistant inner barrier has a greater control on the failure of waste packages and their subsequent degradation than the outer barrier. Further improvement and substantiation of the inner barrier pitting model (currently based on an elicitation) is necessary in future waste package performance simulation model

  5. Stochastic thermodynamics

    Science.gov (United States)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    many leading experts in the field. During the program, the most recent developments, open questions and new ideas in stochastic thermodynamics were presented and discussed. From the talks and debates, the notion of information in stochastic thermodynamics, the fundamental properties of entropy production (rate) in non-equilibrium, the efficiency of small thermodynamic machines and the characteristics of optimal protocols for the applied (cyclic) forces were crystallizing as main themes. Surprisingly, the long-studied adiabatic piston, its peculiarities and its relation to stochastic thermodynamics were also the subject of intense discussions. The comment on the Nordita program Stochastic Thermodynamics published in this issue of Physica Scripta exploits the Jarzynski relation for determining free energy differences in the adiabatic piston. This scientific program and the contribution presented here were made possible by the financial and administrative support of The Nordic Institute for Theoretical Physics.

  6. Stochastic Wake Modelling Based on POD Analysis

    Directory of Open Access Journals (Sweden)

    David Bastine

    2018-03-01

    In this work, large eddy simulation data is analysed to investigate a new stochastic modeling approach for the wake of a wind turbine. The data is generated by the large eddy simulation (LES) model PALM combined with an actuator disk with rotation representing the turbine. After applying a proper orthogonal decomposition (POD), three different stochastic models for the weighting coefficients of the POD modes are deduced, resulting in three different wake models. Their performance is investigated mainly on the basis of aeroelastic simulations of a wind turbine in the wake. Three different load cases and their statistical characteristics are compared for the original LES, truncated PODs and the stochastic wake models including different numbers of POD modes. It is shown that approximately six POD modes are enough to capture the load dynamics on large temporal scales. Modeling the weighting coefficients as independent stochastic processes leads to similar load characteristics as in the case of the truncated POD. To complete this simplified wake description, we show evidence that the small-scale dynamics can be captured by adding a homogeneous turbulent field to our model. In this way, we present a procedure to derive stochastic wake models from costly computational fluid dynamics (CFD) calculations or elaborate experimental investigations. These numerically efficient models provide the added value of enabling long-term studies. Depending on the aspects of interest, different minimal models may be obtained.
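
    The POD step that underlies these wake models is a snapshot SVD: the spatial modes come from the left singular vectors, and the time-dependent weighting coefficients, which the stochastic models then replace, come from the singular values and right singular vectors. The sketch below uses random stand-in data in place of the LES fields.

```python
import numpy as np

rng = np.random.default_rng(1)
snapshots = rng.standard_normal((5000, 400))   # 400 snapshots of a 5000-point field (stand-in)

# Subtract the mean field, then decompose the fluctuations.
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

n_modes = 6                                    # roughly six modes capture the large scales
modes = U[:, :n_modes]                         # spatial POD modes
coeffs = s[:n_modes, None] * Vt[:n_modes]      # time-dependent weighting coefficients
# A stochastic wake model would replace `coeffs` with synthetic stochastic processes.
```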

  7. Memristors Empower Spiking Neurons With Stochasticity

    KAUST Repository

    Al-Shedivat, Maruan

    2015-06-01

    Recent theoretical studies have shown that probabilistic spiking can be interpreted as learning and inference in cortical microcircuits. This interpretation creates new opportunities for building neuromorphic systems driven by probabilistic learning algorithms. However, such systems must have two crucial features: 1) the neurons should follow a specific behavioral model, and 2) stochastic spiking should be implemented efficiently for it to be scalable. This paper proposes a memristor-based stochastically spiking neuron that fulfills these requirements. First, the analytical model of the memristor is enhanced so it can capture the behavioral stochasticity consistent with experimentally observed phenomena. The switching behavior of the memristor model is demonstrated to be akin to the firing of the stochastic spike response neuron model, the primary building block for probabilistic algorithms in spiking neural networks. Furthermore, the paper proposes a neural soma circuit that utilizes the intrinsic nondeterminism of memristive switching for efficient spike generation. The simulations and analysis of the behavior of a single stochastic neuron and a winner-take-all network built of such neurons and trained on handwritten digits confirm that the circuit can be used for building probabilistic sampling and pattern adaptation machinery in spiking networks. The findings constitute an important step towards scalable and efficient probabilistic neuromorphic platforms.

  8. Gompertzian stochastic model with delay effect to cervical cancer growth

    International Nuclear Information System (INIS)

    Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti; Bahar, Arifah

    2015-01-01

    In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme for solving the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated result and the clinical data of cervical cancer growth. Low values of Mean-Square Error (MSE) of the Gompertzian stochastic model with delay effect indicate good fits.

  9. Gompertzian stochastic model with delay effect to cervical cancer growth

    Energy Technology Data Exchange (ETDEWEB)

    Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti [Faculty of Industrial Sciences and Technology, Universiti Malaysia Pahang, Lebuhraya Tun Razak, 26300 Gambang, Pahang (Malaysia); Bahar, Arifah [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor and UTM Centre for Industrial and Applied Mathematics (UTM-CIAM), Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor (Malaysia)

    2015-02-03

    In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme for solving the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated result and the clinical data of cervical cancer growth. Low values of Mean-Square Error (MSE) of the Gompertzian stochastic model with delay effect indicate good fits.
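
    The abstracts above mention the Milstein scheme; the sketch below shows that scheme applied to one plausible Gompertzian SDE with a delayed drift term, dX = X(t)(a - b ln X(t - tau)) dt + sigma X(t) dW. The model form, parameter values and constant initial history are illustrative assumptions, not the paper's fitted values.

```python
# Hedged sketch: Milstein discretisation of a Gompertzian SDE with a
# delayed drift, dX = X(t) * (a - b*ln X(t - tau)) dt + sigma*X(t) dW.
# All parameter values here are illustrative assumptions.
import numpy as np

def gompertz_delay_milstein(a=0.3, b=0.1, sigma=0.05, tau=5.0,
                            x0=1.0, t_end=100.0, dt=0.01, seed=1):
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    lag = int(tau / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x_lag = x[i - lag] if i >= lag else x0  # constant history on [-tau, 0]
        dw = rng.normal(0.0, np.sqrt(dt))
        drift = x[i] * (a - b * np.log(x_lag))
        # Milstein step: Euler terms plus the 0.5*g*g' correction,
        # where g(x) = sigma*x, so g(x)*g'(x) = sigma**2 * x.
        x[i + 1] = (x[i] + drift * dt + sigma * x[i] * dw
                    + 0.5 * sigma**2 * x[i] * (dw**2 - dt))
    return x

trajectory = gompertz_delay_milstein()
```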

  10. Efficient electron open boundaries for simulating electrochemical cells

    Science.gov (United States)

    Zauchner, Mario G.; Horsfield, Andrew P.; Todorov, Tchavdar N.

    2018-01-01

    Nonequilibrium electrochemistry raises new challenges for atomistic simulation: we need to perform molecular dynamics for the nuclear degrees of freedom with an explicit description of the electrons, which in turn must be free to enter and leave the computational cell. Here we present a limiting form for electron open boundaries that we expect to apply when the magnitude of the electric current is determined by the drift and diffusion of ions in a solution and which is sufficiently computationally efficient to be used with molecular dynamics. We present tight-binding simulations of a parallel-plate capacitor with nothing, a dimer, or an atomic wire situated in the space between the plates. These simulations demonstrate that this scheme can be used to perform molecular dynamics simulations when there is an applied bias between two metal plates with, at most, weak electronic coupling between them. This simple system captures some of the essential features of an electrochemical cell, suggesting this approach might be suitable for simulations of electrochemical cells out of equilibrium.

  11. An efficient simulator for pinhole imaging of PET isotopes.

    Science.gov (United States)

    Goorden, M C; van der Have, F; Kreuger, R; Beekman, F J

    2011-03-21

    Today, small-animal multi-pinhole single photon emission computed tomography (SPECT) can reach sub-half-millimeter image resolution. Recently we have shown that dedicated multi-pinhole collimators can also image PET tracers at sub-mm level. Simulations play a vital role in the design and optimization of such collimators. Here we propose and validate an efficient simulator that models the whole imaging chain from emitted positron to detector signal. This analytical simulator for pinhole positron emission computed tomography (ASPECT) combines analytical models for pinhole and detector response with Monte Carlo (MC)-generated kernels for positron range. Accuracy of ASPECT was validated by means of a MC simulator (MCS) that uses a kernel-based step for detector response with an angle-dependent detector kernel based on experiments. Digital phantom simulations with ASPECT and MCS converge to almost identical images. However, ASPECT converges to an equal image noise level three to four orders of magnitude faster than MCS. We conclude that ASPECT could serve as a practical tool in collimator design and iterative image reconstruction for novel multi-pinhole PET.

  12. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    A. Leitao Rodriguez (Álvaro); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    textabstractIn this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl.

  13. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.

  14. Assembly Line Efficiency Improvement by Using WITNESS Simulation Software

    Science.gov (United States)

    Yasir, A. S. H. M.; Mohamed, N. M. Z. N.

    2018-03-01

    In today's competitive world, the efficiency and productivity of the assembly line are essential to a manufacturing company. This paper presents a study of the performance of an existing production line. The actual cycle time was observed and recorded during the working process. The current layout was designed and analysed using Witness simulation software. The productivity and effectiveness of every single operator were measured to determine the operator idle time and busy time. Two new alternative layouts were proposed and analysed using Witness simulation software to improve the performance of production activities. This research provides a better understanding of production effectiveness through adjustments to the line balancing. After the data were analysed, the simulation results for the current layout and the proposed plans were tabulated to compare efficiency and productivity. The proposed design plans showed an increase in yield and productivity compared to the current arrangement. This research was carried out in company XYZ, one of the automotive premises in Pahang, Malaysia.

  15. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
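
    A toy illustration of the trajectory averaging idea, assuming a plain Robbins-Monro recursion rather than the SAMCMC setting of the paper: averaging the iterates after a burn-in period typically gives a less variable estimate than the last iterate alone.

```python
# Trajectory (Polyak-Ruppert) averaging for a toy Robbins-Monro recursion
# targeting the root of h(theta) = E[xi] - theta (true root is 3.0).
import numpy as np

rng = np.random.default_rng(2)
m_true, n = 3.0, 50_000

theta = 0.0
trajectory = np.empty(n)
for k in range(1, n + 1):
    xi = rng.normal(m_true, 1.0)      # noisy observation of the mean
    theta += (xi - theta) / k**0.7    # Robbins-Monro step, slowly decaying gain
    trajectory[k - 1] = theta

# The trajectory averaging estimator discards a burn-in period and
# averages the remaining iterates.
theta_avg = trajectory[n // 2:].mean()
print(f"last iterate: {trajectory[-1]:.4f}, trajectory average: {theta_avg:.4f}")
```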

  16. Stochastic Systems Uncertainty Quantification and Propagation

    CERN Document Server

    Grigoriu, Mircea

    2012-01-01

    Uncertainty is an inherent feature of both the properties of physical systems and the inputs to these systems, and it needs to be quantified for cost-effective and reliable designs. The states of these systems satisfy equations with random entries, referred to as stochastic equations, so that they are random functions of time and/or space. The solution of stochastic equations poses notable technical difficulties that are frequently circumvented by heuristic assumptions at the expense of accuracy and rigor. The main objective of Stochastic Systems is to promote the development of accurate and efficient methods for solving stochastic equations and to foster interactions between engineers, scientists, and mathematicians. To achieve these objectives, Stochastic Systems presents: a clear and brief review of essential concepts on probability theory, random functions, stochastic calculus, Monte Carlo simulation, and functional analysis; probabilistic models for random variables an...

  17. Evaluation of traffic signal timing optimization methods using a stochastic and microscopic simulation program.

    Science.gov (United States)

    2003-01-01

    This study evaluated existing traffic signal optimization programs including Synchro, TRANSYT-7F, and genetic algorithm optimization using real-world data collected in Virginia. As a first step, a microscopic simulation model, VISSIM, was extensively ...

  18. On the interpretation of double-packer tests in heterogeneous porous media: Numerical simulations using the stochastic continuum analogue

    International Nuclear Information System (INIS)

    Follin, S.

    1992-12-01

    Flow in fractured crystalline (hard) rocks is of interest in Sweden for assessing the postclosure radiological safety of a deep repository for high-level nuclear waste. For simulation of flow and mass transport in the far field, different porous media concepts are often used, whereas discrete fracture/channel network concepts are often used for near-field simulations. Due to lack of data, it is generally necessary to resort to single-hole double-packer test data for the far-field simulations, i.e., test data on a small scale are regularized in order to fit a comparatively coarser numerical discretization, which is governed by various computational constraints. In the present study, the Monte Carlo method is used to investigate the relationship between the interpreted transmissivity value and the corresponding radius of influence in conjunction with single-hole double-packer tests in heterogeneous formations. The numerical flow domain is treated as a two-dimensional heterogeneous porous medium with a spatially varying diffusivity on a 3 m scale. The Monte Carlo simulations demonstrate the sensitivity to the correlation range of a spatially varying diffusivity field. In contradiction to what is tacitly assumed in stochastic subsurface hydrology, the results show that the lateral support scale (e.g., the radius of influence) of transmissivity measurements in heterogeneous porous media is a random variable, which is affected by both the hydraulic and statistical characteristics. If these results are general, the traditional methods for scaling up, which assume a constant lateral scale of support and a multinormal distribution, may lead to an underestimation of the persistence and connectivity of transmissive zones, particularly in highly heterogeneous porous media.

  19. Evaluation of Monte Carlo electron-transport algorithms in the Integrated Tiger Series codes for stochastic-media simulations

    International Nuclear Information System (INIS)

    Franke, B.C.; Kensek, R.P.; Prinja, A.K.

    2013-01-01

    Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings where infinite-medium solutions become invalid. We have previously explored an alternative 'condensed transport' formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms are assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations. (authors)

  20. A stochastic model for neutron simulation considering the spectrum and nuclear properties with continuous dependence of energy

    International Nuclear Information System (INIS)

    Camargo, Dayana Queiroz de

    2011-01-01

    This thesis developed a stochastic model to simulate neutron transport in a heterogeneous environment, considering continuous neutron spectra and nuclear properties with their continuous dependence on energy. The model was implemented using the Monte Carlo method for the propagation of neutrons in different environments. Due to restrictions on the number of neutrons that can be simulated in reasonable computational processing time, a variable control volume along with (pseudo-)periodic boundary conditions was introduced to overcome this problem. The Monte Carlo class of methods was chosen because it decomposes the problem of solving a transport equation into simpler constituents, namely propagation and interaction, which may be treated separately while respecting the laws of energy and momentum conservation and the relationships that determine the probability of interaction. We are aware that the problem approached in this thesis is far from comparable to building a nuclear reactor, but the main target of this discussion was to develop the Monte Carlo model and implement the code in a computer language that allows modular extensions. This study allowed a detailed analysis of the influence of energy on the neutron population and its impact on the life cycle of neutrons. From the results, even for a simple geometrical arrangement, we conclude that the energy dependence needs to be considered, i.e., a spectral effective multiplication factor should be introduced for each energy group separately. (author)

  1. Hybrid Building Performance Simulation Models for Industrial Energy Efficiency Applications

    Directory of Open Access Journals (Sweden)

    Peter Smolek

    2018-06-01

    Full Text Available In the challenge of achieving environmental sustainability, industrial production plants, as large contributors to the overall energy demand of a country, are prime candidates for applying energy efficiency measures. A modelling approach using cubes is used to decompose a production facility into manageable modules. All aspects of the facility are considered, classified into the building, energy system, production and logistics. This approach leads to specific challenges for building performance simulations since all parts of the facility are highly interconnected. To meet this challenge, models for the building, thermal zones, energy converters and energy grids are presented and the interfaces to the production and logistics equipment are illustrated. The advantages and limitations of the chosen approach are discussed. In an example implementation, the feasibility of the approach and models is shown. Different scenarios are simulated to highlight the models and the results are compared.

  2. Adding computationally efficient realism to Monte Carlo turbulence simulation

    Science.gov (United States)

    Campbell, C. W.

    1985-01-01

    Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.

  3. Efficient simulation of tail probabilities of sums of correlated lognormals

    DEFF Research Database (Denmark)

    Asmussen, Søren; Blanchet, José; Juneja, Sandeep

    We consider the problem of efficient estimation of tail probabilities of sums of correlated lognormals via simulation. This problem is motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose two estimators that can be rigorously shown to be efficient as the tail probability of interest decreases to zero. The first estimator, based on importance sampling, involves a scaling of the whole covariance matrix and can be shown to be asymptotically optimal. A further study, based on the Cross-Entropy algorithm, is also performed in order to adaptively optimize the scaling parameter of the covariance. The second estimator decomposes the probability of interest in two contributions and takes advantage of the fact that large deviations for a sum of correlated lognormals are (asymptotically) caused by the largest increment. Importance sampling...

  4. Simulating the market for automotive fuel efficiency: The SHRSIM model

    Energy Technology Data Exchange (ETDEWEB)

    Greene, D.L.

    1987-02-01

    This report describes a computer model for simulating the effects of uncertainty about future fuel prices and competitors' behavior on the market shares of an automobile manufacturer who is considering introducing technology to increase fuel efficiency. Starting with an initial sales distribution, a pivot-point multinomial logit technique is used to adjust market shares based on changes in the present value of the added fuel efficiency. These shifts are random because the model generates random fuel price projections using parameters supplied by the user. The user also controls the timing of introduction and obsolescence of technology. While the model was designed with automobiles in mind, it has more general applicability to energy-using durable goods. The model is written in IBM BASIC for an IBM PC and compiled using the Microsoft QuickBASIC (trademark of the Microsoft Corporation) compiler.

  5. Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows

    Science.gov (United States)

    Moitra, Stuti; Gatski, Thomas B.

    1997-01-01

    A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.

  6. Stochastic volatility and stochastic leverage

    DEFF Research Database (Denmark)

    Veraart, Almut; Veraart, Luitgard A. M.

    This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic ... models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk neutral world by focusing on implied volatilities generated by option prices derived from our new...

  7. REHEARSAL Using Patient-Specific Simulation to Improve Endovascular Efficiency.

    Science.gov (United States)

    Wooster, Mathew; Doyle, Adam; Hislop, Sean; Glocker, Roan; Armstrong, Paul; Singh, Michael; Illig, Karl A

    2018-04-01

    To determine whether rehearsal using patient-specific information loaded onto an endovascular simulator prior to carotid stenting improves procedural efficiency and outcomes. Patients scheduled for carotid artery stenting who had adequate preoperative computed tomography (CT) imaging were considered for enrollment. After obtaining informed consent, patients were randomized to control versus rehearsal groups. Those in the rehearsal group had their CT scans loaded into an endovascular simulator (Angio Mentor) followed by case rehearsal by the attending on the simulator within 24 hours prior to the procedure; control patients underwent routine carotid stenting without rehearsal. Contrast usage, fluoroscopy time, and timing of procedural steps were recorded by a blinded observer during the actual case to determine benefit. Fifteen patients were enrolled, with 6 patients randomized to the rehearsal group and 9 to the control. All measures showed improvement in the rehearsal group: Mean contrast volume (59.2 vs 76.9 mL), fluoroscopy time (11.4 vs 19.4 minutes), overall operative time (31.9 vs 42.5 minutes), time to common carotid sheath placement (17.0 vs 23.3 minutes), and total carotid sheath dwell time (14.9 vs 19.2 minutes) were all lower (more favorable) in the rehearsal group. The study was terminated early due to the lack of simulator access, and all P values were thus greater than .05 due to the lack of power. No strokes or other adverse events occurred in either group. Case-specific simulator rehearsal using patient-specific imaging prior to carotid stenting is associated with numerically less contrast usage, operative time, and radiation exposure, although this study was underpowered.

  8. Using remotely sensed data and stochastic models to simulate realistic flood hazard footprints across the continental US

    Science.gov (United States)

    Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.

    2017-12-01

    Remotely sensed data have transformed the field of large-scale hydraulic modelling. New digital elevation, hydrography and river width data have allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent have allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental-scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60 km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event 'footprint'. Despite this, many national-scale risk analyses still use 'constant in space' return period hazard layers (e.g. the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge-to-gauge correlations. We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in

  9. A stochastic model for the simulation of wind turbine blades in static stall

    DEFF Research Database (Denmark)

    Bertagnolio, Franck; Rasmussen, Flemming; Sørensen, Niels N.

    2010-01-01

    The aim of this work is to improve aeroelastic simulation codes by accounting for the unsteady aerodynamic forces that a blade experiences in static stall. A model based on a spectral representation of the aerodynamic lift force is defined. The drag and pitching moment are derived using...

  10. Stochastic estimation and simulation of heterogeneities important for transport of contaminants in the unsaturated zone

    Energy Technology Data Exchange (ETDEWEB)

    Kitteroed, Nils-Otto

    1997-12-31

    The background for this thesis was the increasing risk of contamination of water resources and the requirement of groundwater protection. Specifically, the thesis implements procedures to estimate and simulate observed heterogeneities in the unsaturated zone and evaluates what impact the heterogeneities may have on the water flow. The broad goal was to establish a reference model with high spatial resolution within a small area and to condition the model using spatially frequent field observations, and the Moreppen site at Oslo's new major airport was used for this purpose. An approach is presented for the use of ground penetrating radar in which indicator kriging is used to estimate continuous stratigraphical architecture. Kriging is also used to obtain 3D images of soil moisture. A simulation algorithm based on the Karhunen-Loeve expansion is evaluated and a modification of the Karhunen-Loeve simulation is suggested that makes it possible to increase the size of the simulation lattice. This is obtained by kriging interpolation of the eigenfunctions. 250 refs., 40 figs., 7 tabs.

  11. Stochastic simulation of large grids using free and public domain software

    NARCIS (Netherlands)

    Bruin, de S.; Wit, de A.J.W.

    2005-01-01

    This paper proposes a tiled map procedure enabling sequential indicator simulation on grids consisting of several tens of millions of cells, without putting excessive memory requirements. Spatial continuity across map tiles is handled by conditioning adjacent tiles on their shared boundaries. Tiles

  12. Modeling ion channel dynamics through reflected stochastic differential equations.

    Science.gov (United States)

    Dangerfield, Ciara E; Kay, David; Burrage, Kevin

    2012-05-01

    Ion channels are membrane proteins that open and close at random and play a vital role in the electrical dynamics of excitable cells. The stochastic nature of the conformational changes these proteins undergo can be significant, however current stochastic modeling methodologies limit the ability to study such systems. Discrete-state Markov chain models are seen as the "gold standard," but are computationally intensive, restricting investigation of stochastic effects to the single-cell level. Continuous stochastic methods that use stochastic differential equations (SDEs) to model the system are more efficient but can lead to simulations that have no biological meaning. In this paper we show that modeling the behavior of ion channel dynamics by a reflected SDE ensures biologically realistic simulations, and we argue that this model follows from the continuous approximation of the discrete-state Markov chain model. Open channel and action potential statistics from simulations of ion channel dynamics using the reflected SDE are compared with those of a discrete-state Markov chain method. Results show that the reflected SDE simulations are in good agreement with the discrete-state approach. The reflected SDE model therefore provides a computationally efficient method to simulate ion channel dynamics while preserving the distributional properties of the discrete-state Markov chain model and also ensuring biologically realistic solutions. This framework could easily be extended to other biochemical reaction networks.
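
    A minimal sketch of the reflected-SDE idea for a single gating variable confined to [0, 1], assuming a simple opening/closing drift and an elementary boundary-reflection rule; the rate constants and noise level are illustrative, not the authors' formulation.

```python
# Hedged sketch: Euler scheme for a reflected SDE on [0, 1] describing the
# open fraction x of a channel population; parameters are illustrative.
import numpy as np

def reflected_gating(alpha=0.5, beta=0.3, sigma=0.1,
                     x0=0.5, t_end=50.0, dt=0.001, seed=3):
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        drift = alpha * (1.0 - x[i]) - beta * x[i]   # opening minus closing
        step = x[i] + drift * dt + sigma * rng.normal(0.0, np.sqrt(dt))
        # Reflect at the boundaries so the solution stays a biologically
        # meaningful fraction in [0, 1].
        if step < 0.0:
            step = -step
        elif step > 1.0:
            step = 2.0 - step
        x[i + 1] = step
    return x

path = reflected_gating()
```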

  13. A stochastic frontier analysis of technical efficiency in smallholder maize production in Zimbabwe: The post-fast-track land reform outlook

    Directory of Open Access Journals (Sweden)

    Nelson Mango

    2015-12-01

    Full Text Available This article analyses the technical efficiency of maize production in Zimbabwe's smallholder farming communities following the fast-track land reform of the year 2000, with a view to highlighting key entry points for policy. Using a randomly selected sample of 522 smallholder maize producers, a stochastic frontier production model was applied, using a linearised Cobb–Douglas production function, to determine the production elasticity coefficients of inputs, technical efficiency and the determinants of efficiency. The study finds that maize output responds positively to increases in inorganic fertilisers, seed quantity, the use of labour and the area planted. The technical efficiency analysis suggests that about 90% of farmers in the sample are between 60 and 75% efficient, with an average efficiency in the sample of 65%. The significant determinants of technical efficiency were the gender of the household head, household size, frequency of extension visits, farm size and the farming region. The results imply that the average efficiency of maize production could be improved by 35% through better use of existing resources and technology. The results highlight the need for government and private sector assistance in improving efficiency by promoting access to productive resources and ensuring better and more reliable agricultural extension services.

  14. A new efficient approach to fit stochastic models on the basis of high-throughput experimental data using a model of IRF7 gene expression as case study.

    Science.gov (United States)

    Aguilera, Luis U; Zimmer, Christoph; Kummer, Ursula

    2017-02-20

    Mathematical models are used to gain an integrative understanding of biochemical processes and networks. Commonly the models are based on deterministic ordinary differential equations. When molecular counts are low, stochastic formalisms like Monte Carlo simulations are more appropriate and well established. However, compared to the wealth of computational methods used to fit and analyze deterministic models, there is only little available to quantify the exactness of the fit of stochastic models compared to experimental data or to analyze different aspects of the modeling results. Here, we developed a method to fit stochastic simulations to experimental high-throughput data, meaning data that exhibit distributions. The method uses a comparison of the probability density functions that are computed based on Monte Carlo simulations and the experimental data. Multiple parameter values are iteratively evaluated using optimization routines. The method improves its performance by selecting parameter values after comparing the similarity between the deterministic stability of the system and the modes in the experimental data distribution. As a case study we fitted a model of the IRF7 gene expression circuit to time-course experimental data obtained by flow cytometry. IRF7 shows bimodal dynamics upon IFN stimulation. These dynamics occur due to the switching between active and basal states of the IRF7 promoter. However, the exact molecular mechanisms responsible for the bimodality of IRF7 are not fully understood. Our results allow us to conclude that the activation of the IRF7 promoter by the combination of IRF7 and ISGF3 is sufficient to explain the observed bimodal dynamics.
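
    The core of the method, comparing simulated and experimental probability density functions while scanning parameter values, can be sketched as follows. The one-parameter birth-death "model", the bin edges and the L2 histogram distance are all illustrative stand-ins for the paper's IRF7 circuit and optimization routines.

```python
# Minimal sketch: score candidate parameters by the distance between the
# simulated and "experimental" probability densities.
import numpy as np

rng = np.random.default_rng(4)
bins = np.linspace(0, 80, 50)

def simulate(k_prod, n_cells=5000):
    # Stand-in stochastic model: the steady state of a birth-death process
    # with production rate k_prod and unit degradation is Poisson(k_prod).
    return rng.poisson(k_prod, size=n_cells)

def pdf_distance(samples_a, samples_b):
    # L2 distance between the two histogram-based density estimates.
    pa, _ = np.histogram(samples_a, bins=bins, density=True)
    pb, _ = np.histogram(samples_b, bins=bins, density=True)
    return np.sum((pa - pb) ** 2)

experimental = simulate(20.0)   # pretend this is the flow cytometry data
candidates = np.arange(5.0, 40.0, 1.0)
scores = [pdf_distance(simulate(k), experimental) for k in candidates]
print(f"best-fitting production rate: {candidates[np.argmin(scores)]}")
```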

  15. Study of two tantalum Taylor impact specimens using experiments and stochastic polycrystal plasticity simulation

    Energy Technology Data Exchange (ETDEWEB)

    Tonks, Michael R [Los Alamos National Laboratory

    2008-01-01

    We compare the experimentally obtained responses of two cylindrical tantalum Taylor impact specimens. The first specimen is manufactured using a powder metallurgy (P/M) process with a random initial texture and relatively equiaxed crystals. The second is sectioned from a round-corner square rolled (RCSR) rod with an asymmetric texture and elongated crystals. The deformed P/M specimen has an axisymmetric footprint while the deformed RCSR projectile has an eccentric footprint with distinct corners. Also, the two specimens experienced similar crystallographic texture evolution, though the RCSR specimen experienced greater plastic deformation. Our simulation predictions mimic the texture and deformation data measured from the P/M specimen. However, our RCSR specimen simulations over-predict the texture development and do not accurately predict the deformation, though the deformation prediction is improved when the texture is not allowed to evolve. We attribute this discrepancy to the elongated crystal morphology in the RCSR specimen, which is not represented in our mean-field model.

  16. Comparative performance of different stochastic methods to simulate drug exposure and variability in a population.

    Science.gov (United States)

    Tam, Vincent H; Kabbara, Samer

    2006-10-01

    Monte Carlo simulations (MCSs) are increasingly being used to predict the pharmacokinetic variability of antimicrobials in a population. However, various MCS approaches may differ in the accuracy of the predictions. We compared the performance of 3 different MCS approaches using a data set with known parameter values and dispersion. Ten concentration-time profiles were randomly generated and used to determine the best-fit parameter estimates. Three MCS methods were subsequently used to simulate the AUC(0-infinity) of the population, using the central tendency and dispersion of the following in the subject sample: 1) K and V; 2) clearance and V; 3) AUC(0-infinity). In each scenario, 10000 subject simulations were performed. Compared to true AUC(0-infinity) of the population, mean biases by various methods were 1) 58.4, 2) 380.7, and 3) 12.5 mg h L(-1), respectively. Our results suggest that the most realistic MCS approach appeared to be based on the variability of AUC(0-infinity) in the subject sample.
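
    A hedged sketch of the comparison for a one-compartment IV bolus model, where AUC(0-infinity) = Dose/CL and CL = K x V: the three approaches propagate variability through different parameterizations, so their simulated AUC distributions (and hence biases) can differ. All distributions and parameter values below are illustrative assumptions, not the study's data.

```python
# Three ways of propagating sample variability to AUC(0-inf) by simulation.
import numpy as np

rng = np.random.default_rng(5)
dose, n = 100.0, 10_000

def lognormal(mean, cv, size):
    # Parameterise a lognormal by its arithmetic mean and CV.
    s2 = np.log(1.0 + cv**2)
    return rng.lognormal(np.log(mean) - 0.5 * s2, np.sqrt(s2), size)

k = lognormal(0.2, 0.30, n)    # elimination rate constant (1/h)
v = lognormal(25.0, 0.25, n)   # volume of distribution (L)
cl = lognormal(5.0, 0.35, n)   # clearance (L/h)

auc_kv = dose / (k * v)        # approach 1: variability in K and V
auc_cl = dose / cl             # approach 2: variability in CL
auc_direct = lognormal(20.0, 0.35, n)   # approach 3: AUC variability itself

for name, a in [("K,V", auc_kv), ("CL", auc_cl), ("AUC", auc_direct)]:
    print(f"{name:>4}: mean = {a.mean():6.1f}, "
          f"95th pct = {np.percentile(a, 95):6.1f}")
```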

  17. Stochastic simulation of patterns using ISOMAP for dimensionality reduction of training images

    Science.gov (United States)

    Zhang, Ting; Du, Yi; Huang, Tao; Yang, Jiaqing; Li, Xue

    2015-06-01

    Most real-world data are nonlinear, or it is difficult to determine beforehand whether they are linear. Some linear dimensionality reduction algorithms, e.g., principal component analysis (PCA) and multi-dimensional scaling (MDS), are only suitable for linear dimensionality reduction of spatial data. The patterns extracted from training images (TIs) used in MPS simulation are mostly nonlinear, so for MPS simulation methods based on dimensionality reduction, e.g., FILTERSIM, which uses filters created via the idea of PCA, and DisPAT, which uses MDS as a tool for dimensionality reduction, such linear methods are not appropriate for reducing the dimensionality of nonlinear pattern data. Therefore, isometric mapping (ISOMAP), a nonlinear dimensionality reduction method used in manifold learning, is introduced to map those patterns, whether linear or nonlinear, into a low-dimensional space. However, because the original ISOMAP has some disadvantages in computing speed and accuracy, landmark points of patterns are selected to improve the speed, and neighborhoods of patterns are set to guarantee the quality of the dimensionality reduction. Next, a sequential simulation similar to FILTERSIM is performed after the low-dimensional pattern data are classified by a density-based clustering algorithm. Comparisons with FILTERSIM and DisPAT show the improvement in pattern reproduction and computing speed of our method for both continuous and categorical variables.
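
    The pipeline described above, extracting patterns from a training image, reducing their dimensionality with ISOMAP and grouping them with a density-based clustering algorithm, might be prototyped as follows. The window size, neighbour count and DBSCAN settings are illustrative assumptions, and a random array stands in for a real training image.

```python
# Minimal sketch: pattern extraction, ISOMAP embedding, density clustering.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.manifold import Isomap

rng = np.random.default_rng(6)
ti = rng.random((100, 100))   # stand-in training image
w = 5                         # pattern (template) size

# Slide a w-by-w window over the image; flatten each pattern to a vector.
patterns = np.array([ti[i:i + w, j:j + w].ravel()
                     for i in range(0, ti.shape[0] - w, 2)
                     for j in range(0, ti.shape[1] - w, 2)])

# Nonlinear dimensionality reduction of the pattern database.
embedded = Isomap(n_neighbors=10, n_components=3).fit_transform(patterns)

# Density-based clustering of the low-dimensional pattern representations.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(embedded)
print(f"{len(patterns)} patterns, {labels.max() + 1} clusters found")
```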

  18. A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling

    Science.gov (United States)

    Aslam, Kamran

    This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis, along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included, along with a discussion of ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the concept of the varying importance of points in a match and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability along with a realistic, fair and mathematically sound platform for ranking them.
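
    For the iid case, the game-level building block is easy to reproduce: simulate games point by point from a fixed point-win probability p and compare with the standard closed-form game-win probability (consistent with the Newton-Keller theory cited above). The value p = 0.55 is an arbitrary example.

```python
# Monte Carlo simulation of iid tennis games versus the closed form.
import numpy as np

rng = np.random.default_rng(7)

def simulate_game(p):
    a = b = 0
    while True:
        if rng.random() < p:
            a += 1
        else:
            b += 1
        if a >= 4 and a - b >= 2:
            return 1    # server wins the game
        if b >= 4 and b - a >= 2:
            return 0

def game_win_probability(p):
    # Win to love/15/30, plus reaching deuce and then winning from deuce.
    q = 1.0 - p
    return (p**4 * (1.0 + 4.0 * q + 10.0 * q**2)
            + 20.0 * p**3 * q**3 * p**2 / (1.0 - 2.0 * p * q))

p = 0.55
mc = np.mean([simulate_game(p) for _ in range(100_000)])
print(f"Monte Carlo: {mc:.4f}, closed form: {game_win_probability(p):.4f}")
```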

  19. Simulating extreme low-discharge events for the Rhine using a stochastic model

    Science.gov (United States)

    Macian-Sorribes, Hector; Mens, Marjolein; Schasfoort, Femke; Diermanse, Ferdinand; Pulido-Velazquez, Manuel

    2017-04-01

    The specific features of hydrological droughts make them more difficult to analyse than other water-related phenomena: the time scales are longer (months to several years), so fewer historical events are available, and the drought severity and associated damage depend on a combination of variables with no clear prevalence (e.g., total water deficit, maximum deficit and duration). As part of drought risk analysis, which aims to provide insight into the variability of hydrological conditions and associated socio-economic impacts, long synthetic time series should therefore be developed. In this contribution, we increase the length of the available inflow time series using stochastic autoregressive modelling. This enhancement could improve the characterization of the extreme range and can define extreme droughts with similar return periods but different patterns that can lead to distinctly different damages. The methodology consists of: 1) fitting an autoregressive model (AR, ARMA, ...) to the available records; 2) generating extended time series (thousands of years); 3) performing a frequency analysis with different characteristic variables (total deficit, maximum deficit and so on); and 4) selecting extreme drought events associated with different characteristic variables and return periods. The methodology was applied to the Rhine river discharge at Lobith, where the Rhine enters The Netherlands. A monthly ARMA(1,1) autoregressive model with seasonally varying parameters was fitted and successfully validated against the historical records available since 1901. The maximum monthly deficit with respect to a threshold value of 1800 m3/s and the average discharge over a given time span in m3/s were chosen as indicators to identify drought periods. A synthetic series of 10,000 years of discharges was generated using the validated ARMA model. Two time spans were considered in the analysis: the whole calendar year and the half-year period between April and September
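
    The core of step 2, generating a long synthetic monthly series from a fitted ARMA(1,1) model and scanning it for deficits below the 1800 m3/s threshold mentioned in the text, might look like the sketch below. The ARMA coefficients and the sinusoidal seasonal mean are illustrative assumptions, not the fitted Lobith values.

```python
# Generate a long synthetic monthly discharge series from an ARMA(1,1)
# model around a seasonal mean, then scan it for deficit months.
import numpy as np

rng = np.random.default_rng(8)
phi, theta, sigma = 0.8, -0.3, 400.0   # assumed ARMA(1,1) parameters
months = np.arange(12)
seasonal_mean = 2200.0 + 600.0 * np.cos(2.0 * np.pi * months / 12.0)

n_years = 10_000
n = 12 * n_years
x = np.zeros(n)            # anomaly around the seasonal mean
eps_prev = 0.0
for t in range(1, n):
    eps = rng.normal(0.0, sigma)
    x[t] = phi * x[t - 1] + eps + theta * eps_prev
    eps_prev = eps
discharge = x + np.tile(seasonal_mean, n_years)

threshold = 1800.0         # m3/s, as in the study
deficit = np.maximum(threshold - discharge, 0.0)
annual_max = deficit.reshape(n_years, 12).max(axis=1)
print(f"fraction of years with a deficit month: {(annual_max > 0).mean():.3f}")
```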

  20. Stochastic macromodel of magnetic tunnel junction resistance variation and critical current dependence on resistance variation for SPICE simulation

    Science.gov (United States)

    Choi, Juntae; Song, Yunheub

    2017-04-01

    The resistance distribution of a magnetic tunnel junction (MTJ) shows nonuniformity according to various MTJ parameters. Moreover, this resistance variation leads to write-current density variation, which can cause serious problems when designing peripheral circuits for spin-transfer torque magnetoresistive random access memory (STT-MRAM) and commercializing gigabit STT-MRAM. Therefore, a macromodel of the MTJ including resistance, tunneling magnetoresistance ratio (TMR), and critical current variations is required for circuit designers to design MRAM peripheral circuits that can overcome the various effects of the variations, such as write failure and read failure, and realize STT-MRAM. In this study, we investigated a stochastic behavior macromodel of the write current dependence on the MTJ resistance variation. The proposed model can possibly be used to analyze the write current density in relation to the resistance and TMR variations of the MTJ with various parameter variations. It can be very helpful for designing STT-MRAM circuits and simulating the operation of STT-MRAM devices considering MTJ variations.

  1. Optimal appointment scheduling with a stochastic server: Simulation based K-steps look-ahead selection method

    Directory of Open Access Journals (Sweden)

    Changchun Liu

    2018-10-01

    Full Text Available This paper studies the problem of scheduling a finite set of customers with stochastic service times for a single-server system. The objective is to minimize the waiting time of customers, the idle time of the server, and the lateness of the schedule. Because of the NP-hardness of the problem, the optimal schedule is notoriously hard to derive within reasonable computation times. Therefore, we develop a simulation-based K-steps look-ahead selection method which can produce nearly optimal schedules within reasonable computation times. Furthermore, we study differently distributed service times, e.g., the exponential, Weibull and lognormal distributions, and the results show that the proposed algorithm can obtain better results than the lag order approximation method proposed by Vink et al. (2015) [Vink, W., Kuiper, A., Kemper, B., & Bhulai, S. (2015). Optimal appointment scheduling in continuous time: The lag order approximation method. European Journal of Operational Research, 240(1), 213-219.]. Finally, experiments on a realistic appointment scheduling problem verify the good performance of the proposed method.

  2. Mixed analytical-stochastic simulation method for the recovery of a Brownian gradient source from probability fluxes to small windows.

    Science.gov (United States)

    Dobramysl, U; Holcman, D

    2018-02-15

    Is it possible to recover the position of a source from the steady-state fluxes of Brownian particles to small absorbing windows located on the boundary of a domain? To address this question, we develop a numerical procedure to avoid tracking Brownian trajectories in the entire infinite space. Instead, we generate particles near the absorbing windows, computed from the analytical expression of the exit probability. When the Brownian particles are generated by a steady-state gradient at a single point, we compute asymptotically the fluxes to small absorbing holes distributed on the boundary of half-space and on a disk in two dimensions, which agree with stochastic simulations. We also derive an expression for the splitting probability between small windows using the matched asymptotic method. Finally, when there are more than two small absorbing windows, we show how to reconstruct the position of the source from the diffusion fluxes. The present approach provides a computational first principle for the mechanism of sensing a gradient of diffusing particles, a ubiquitous problem in cell biology.

  3. PTC simulations, stochastic optimization and safety strategies for groundwater pumping management: case study of the Hersonissos Coastal Aquifer in Crete

    Science.gov (United States)

    Stratis, P. N.; Dokou, Z. A.; Karatzas, G. P.; Papadopoulou, E. P.; Saridakis, Y. G.

    2017-09-01

    Recently, the well-known Princeton Transport Code (PTC), a groundwater flow and contaminant transport simulator, has been coupled with the ALgorithm of Pattern EXtraction (ALOPEX), a real-time stochastic optimization method, to provide a freshwater pumping management tool for coastal aquifers, aiming at preventing saltwater intrusion. In our previous work (Proceedings of INASE/CSCC-WHH 2015, Recent Advances in Environmental and Earth Sciences and Economics, pp 329-334, 2015), the PTC-ALOPEX approach was used to study the saltwater contamination problem for the coastal aquifer at Hersonissos, Crete. Extending these results, in the present study the PTC-ALOPEX approach is equipped with a nodal safety strategy that effectively controls the saltwater front's advancement inside the aquifer. In cooperation with an appropriate penalty system, the performance of the PTC-ALOPEX algorithm is studied under several pumping and weather condition scenarios. This study also establishes pumping/well scenarios that ensure the needed volume of fresh water for the local community without risking saltwater contamination.
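
    An ALOPEX-type update correlates each parameter's most recent change with the most recent change of the objective, plus exploratory noise. A toy sketch under stated assumptions: the quadratic cost and all step sizes stand in for the pumping-management objective and the tuning used in the study.

```python
# Toy ALOPEX-style stochastic optimization of three pumping rates.
import numpy as np

rng = np.random.default_rng(9)

def cost(q):
    # Assumed optimum at pumping rates (3, 1, 2); purely illustrative.
    return np.sum((q - np.array([3.0, 1.0, 2.0])) ** 2)

q_prev = rng.random(3)
q = q_prev + 0.1 * rng.standard_normal(3)
j_prev, j = cost(q_prev), cost(q)

gamma, temp = 0.05, 0.02
for _ in range(2000):
    # Move against changes that increased the cost, with added noise.
    delta = -gamma * np.sign((q - q_prev) * (j - j_prev))
    delta += temp * rng.standard_normal(3)
    q_prev, j_prev = q, j
    q = q + delta
    j = cost(q)

print(f"final pumping rates: {np.round(q, 2)}")
```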

  4. Using simulated wetlands to assess acid mine drainage treatment efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Karathanasis, A.D. (Kentucky University, Lexington, KY (USA). Dept. of Agronomy)

    1992-01-01

    The elevated acidity and increased solubility of toxic metals such as Al, Fe, Mn, Cu, and Zn associated with acid mine drainage (AMD) is causing increasing concern about possible toxic effects on plants, aquatic life, animals, and even humans in the coal regions of the Appalachian states. During the last few years, a new technology for the treatment of acid mine discharges has emerged. The technology involves the construction of artificial wetlands with the dominant vegetation Typha (cattails), Sphagnum (moss), certain algae, and other plant species which have the potential to treat small flows of acid mine water moving through them. Greenhouse experiments at the University of Kentucky involving AMD treatment by surface and subsurface flow simulated wetlands with Typha plants (cattails) grown in six different substrates showed significant differences among substrates in their ability to buffer mine drainage acidity and in metal removal efficiency. Of the six substrates tested, the pine-needle-and-hay mixtures were the most efficient, over the peat, sphagnum moss and mine spoil mixtures, in reducing acidity and Al, Cu and Zn concentrations in surface effluents. The majority of the substrates (except the mine spoil surface soil) showed similar efficiencies for Fe removal, but all substrates were a source rather than a sink for Mn in surface and subsurface flows. The results of this study suggest that short-term screenings of various substrate materials under greenhouse conditions such as those used in this investigation may provide useful information on the selection of the most efficient substrate for the metal(s) of interest and thus help improve the performance of the wetland under construction. 2 figs.

  5. Efficient Eulerian gyrokinetic simulations with block-structured grids

    Energy Technology Data Exchange (ETDEWEB)

    Jarema, Denis

    2017-01-20

    Gaining a deep understanding of plasma microturbulence is of paramount importance for the development of future nuclear fusion reactors, because it causes a strong outward transport of heat and particles. Gyrokinetics has proven itself as a valid mathematical model to simulate such plasma microturbulence effects. In spite of the advantages of this model, nonlinear radially extended (or global) gyrokinetic simulations are still extremely computationally expensive, involving a very large number of computational grid points. Hence, methods that reduce the number of grid points without a significant loss of accuracy are a prerequisite to be able to run high-fidelity simulations. At the level of the mathematical model, the gyrokinetic approach achieves a reduction from six to five coordinates in comparison to the fully kinetic models. This reduction leads to an important decrease in the total number of computational grid points. However, the velocity space mixed with the radial direction still requires a very fine resolution in grid based codes, due to the disparities in the thermal speed, which are caused by a strong temperature variation along the radial direction. An attempt to address this problem by modifying the underlying gyrokinetic set of equations leads to additional nonlinear terms, which are the most expensive parts to simulate. Furthermore, because of these modifications, well-established and computationally efficient implementations developed for the original set of equations can no longer be used. To tackle such issues, in this thesis we introduce an alternative approach of blockstructured grids. This approach reduces the number of grid points significantly, but without changing the underlying mathematical model. Furthermore, our technique is minimally invasive and allows the reuse of a large amount of already existing code using rectilinear grids, modifications being necessary only on the block boundaries. Moreover, the block-structured grid can be

  6. ANALYSIS OF EFFICIENCY OF R&D ACTIVITIES AMONG COUNTRIES WITH DEVELOPED AND DEVELOPING ECONOMIES INCLUDING REPUBLIC OF BELARUS WITH STOCHASTIC FRONTIER APPROACH

    Directory of Open Access Journals (Sweden)

    I. V. Zhukovski

    2016-01-01

    Full Text Available This study evaluates the efficiency of R&D activities, based on stochastic frontier analysis, across 69 countries with developed and developing economies. Gross domestic expenditure on R&D in purchasing power parity, researchers per million inhabitants and technicians per million inhabitants are treated as inputs, while patents granted to residents and scientific and technical journal articles are considered as outputs. According to the analysis results, Costa Rica, Israel and Singapore are the most efficient in terms of transforming available resources into R&D results. As for Belarus, it is necessary that additional investments in R&D go together with increasing the efficiency with which available resources are used.

  7. Rare event simulation for processes generated via stochastic fixed point equations

    DEFF Research Database (Denmark)

    Collamore, Jeffrey F.; Diao, Guoqing; Vidyashankar, Anand N.

    2014-01-01

    In a number of applications, particularly in financial and actuarial mathematics, it is of interest to characterize the tail distribution of a random variable V satisfying the distributional equation V =_d f(V) (equality in distribution), for some random function f. This paper is concerned with computational methods for evaluating these tail probabilities. We introduce a novel importance sampling algorithm, involving an exponential shift over a random time interval, for estimating these rare event probabilities. We prove that the proposed estimator is: (i) consistent, (ii) strongly efficient and (iii) optimal within a wide...

  8. Stochastic backscatter modelling for the prediction of pollutant removal from an urban street canyon: A large-eddy simulation

    Science.gov (United States)

    O'Neill, J. J.; Cai, X.-M.; Kinnersley, R.

    2016-10-01

    The large-eddy simulation (LES) approach has recently exhibited its appealing capability of capturing turbulent processes inside street canyons and the urban boundary layer aloft, and its potential for deriving the bulk parameters adopted in low-cost operational urban dispersion models. However, the thin roof-level shear layer may be under-resolved in most LES set-ups and thus sophisticated subgrid-scale (SGS) parameterisations may be required. In this paper, we consider the important case of pollutant removal from an urban street canyon of unit aspect ratio (i.e. building height equal to street width) with the external flow perpendicular to the street. We show that by employing a stochastic SGS model that explicitly accounts for backscatter (energy transfer from unresolved to resolved scales), the pollutant removal process is better simulated compared with the use of a simpler (fully dissipative) but widely-used SGS model. The backscatter induces additional mixing within the shear layer which acts to increase the rate of pollutant removal from the street canyon, giving better agreement with a recent wind-tunnel experiment. The exchange velocity, an important parameter in many operational models that determines the mass transfer between the urban canopy and the external flow, is predicted to be around 15% larger with the backscatter SGS model; consequently, the steady-state mean pollutant concentration within the street canyon is around 15% lower. A database of exchange velocities for various other urban configurations could be generated and used as improved input for operational street canyon models.

  9. Simulations of DSB Yields and Radiation-induced Chromosomal Aberrations in Human Cells Based on the Stochastic Track Structure Induced by HZE Particles

    Science.gov (United States)

    Ponomarev, Artem; Plante, Ianik; George, Kerry; Wu, Honglu

    2014-01-01

    The formation of double-strand breaks (DSBs) and chromosomal aberrations (CAs) is of great importance in radiation research and, specifically, in space applications. We are presenting a new particle track and DNA damage model, in which the particle stochastic track structure is combined with the random walk (RW) structure of chromosomes in a cell nucleus. The motivation for this effort stems from the fact that the model with the RW chromosomes, NASARTI (NASA radiation track image), previously relied on amorphous track structure, while the stochastic track structure model RITRACKS (Relativistic Ion Tracks) was focused on more microscopic targets than the entire genome. We have combined chromosomes simulated by RWs with stochastic track structure, which uses nanoscopic dose calculations performed with the Monte-Carlo simulation by RITRACKS in a voxelized space. The new simulations produce the number of DSBs as a function of dose and particle fluence for high-energy particles, including iron, carbon and protons, using voxels of 20 nm dimension. The combined model also calculates yields of radiation-induced CAs and unrejoined chromosome breaks in normal and repair-deficient cells. The joined computational model is calibrated using the relative frequencies and distributions of chromosomal aberrations reported in the literature. The model considers fractionated deposition of energy to approximate dose rates of the space flight environment. The joined model also predicts the yields and sizes of translocations, dicentrics, rings, and more complex-type aberrations formed in the G0/G1 cell cycle phase during the first cell division after irradiation. We found that the main advantage of the joined model is our ability to simulate small doses: 0.05-0.5 Gy. At such low doses, the stochastic track structure proved to be indispensable, as the action of individual delta-rays becomes more important.

  10. Simulations of DSB Yields and Radiation-induced Chromosomal Aberrations in Human Cells Based on the Stochastic Track Structure Induced by HZE Particles

    Science.gov (United States)

    Ponomarev, Artem; Plante, Ianik; George, Kerry; Wu, Honglu

    2014-01-01

    The formation of double-strand breaks (DSBs) and chromosomal aberrations (CAs) is of great importance in radiation research and, specifically, in space applications. We are presenting a new particle track and DNA damage model, in which the particle stochastic track structure is combined with the random walk (RW) structure of chromosomes in a cell nucleus. The motivation for this effort stems from the fact that the model with the RW chromosomes, NASARTI (NASA radiation track image), previously relied on amorphous track structure, while the stochastic track structure model RITRACKS (Relativistic Ion Tracks) was focused on more microscopic targets than the entire genome. We have combined chromosomes simulated by RWs with stochastic track structure, which uses nanoscopic dose calculations performed with the Monte-Carlo simulation by RITRACKS in a voxelized space. The new simulations produce the number of DSBs as a function of dose and particle fluence for high-energy particles, including iron, carbon and protons, using voxels of 20 nm dimension. The combined model also calculates yields of radiation-induced CAs and unrejoined chromosome breaks in normal and repair-deficient cells. The joined computational model is calibrated using the relative frequencies and distributions of chromosomal aberrations reported in the literature. The model considers fractionated deposition of energy to approximate dose rates of the space flight environment. The joined model also predicts the yields and sizes of translocations, dicentrics, rings, and more complex-type aberrations formed in the G0/G1 cell cycle phase during the first cell division after irradiation. We found that the main advantage of the joined model is our ability to simulate small doses: 0.05-0.5 Gy. At such low doses, the stochastic track structure proved to be indispensable, as the action of individual delta-rays becomes more important.

  11. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Science.gov (United States)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of the calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
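
    For orientation, the importance sampling identity underlying such Girsanov-based schemes can be stated in its standard form (given here as background; the notation is ours, not the paper's). If a control u(t) is added to the white-noise excitation and Q denotes the resulting controlled probability measure, then

        P_F = \mathbb{E}_P[\mathbf{1}_F] = \mathbb{E}_Q\!\left[\mathbf{1}_F \exp\!\left(-\int_0^T u(t)\, d\widetilde{W}(t) - \frac{1}{2}\int_0^T u^2(t)\, dt\right)\right],

    where \widetilde{W} is a Brownian motion under Q. Choosing u(t) so that failure (\mathbf{1}_F = 1) becomes likely under Q reduces the variance of the estimator; the FORM-like controls discussed above minimize a bound on this variance.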

  12. Stochastic simulation of power systems with integrated renewable and utility-scale storage resources

    Science.gov (United States)

    Degeilh, Yannick

    The push for a more sustainable electric supply has led various countries to adopt policies advocating the integration of renewable yet variable energy resources, such as wind and solar, into the grid. The challenges of integrating such time-varying, intermittent resources have in turn sparked a growing interest in the implementation of utility-scale energy storage resources (ESRs) with MW-week storage capability. Indeed, storage devices provide the flexibility to facilitate the management of power system operations in the presence of uncertain, highly time-varying and intermittent renewable resources. The ability to exploit the potential synergies between renewables and ESRs hinges on developing appropriate models, methodologies, tools and policy initiatives. We report on the development of a comprehensive simulation methodology that provides the capability to quantify the impacts of integrated renewable and ESRs on the economics, reliability and emissions of power systems operating in a market environment. We model the uncertainty in the demands, the available capacity of conventional generation resources and the time-varying, intermittent renewable resources, with their temporal and spatial correlations, as discrete-time random processes. We deploy models of the ESRs to emulate their scheduling and operations in the transmission-constrained hourly day-ahead markets. To this end, we formulate a scheduling optimization problem (SOP) whose solutions determine the operational schedule of the controllable ESRs in coordination with the demands and the conventional/renewable resources. As such, the SOP serves the dual purpose of emulating the clearing of the transmission-constrained day-ahead markets (DAMs) and scheduling the energy storage resource operations. We also represent the need for system operators to impose stricter ramping requirements on the conventional generating units so as to maintain the system capability to perform "load following" …

  13. An efficient algorithm for corona simulation with complex chemical models

    Science.gov (United States)

    Villa, Andrea; Barbieri, Luca; Gondola, Marco; Leon-Garzon, Andres R.; Malgesini, Roberto

    2017-05-01

    The simulation of cold plasma discharges is a leading field of applied science, with applications ranging from pollutant control to surface treatment. Many of these applications call for the development of novel numerical techniques to implement fully three-dimensional corona solvers that can utilize complex and physically detailed chemical databases. This is a challenging task, since it multiplies the difficulties inherent in a three-dimensional approach by the complexity of databases comprising tens of chemical species and hundreds of reactions. In this paper a novel approach, capable of significantly reducing the computational burden, is developed. The proposed method is based on a time-stepping algorithm that decomposes the original problem into simpler ones, each of which is then tackled with a finite element, finite volume or ordinary differential equation solver. The last of these deals with the chemical model, and its efficient implementation is one of the main contributions of this work.

  14. Efficient Data-Worth Analysis Using a Multilevel Monte Carlo Method Applied in Oil Reservoir Simulations

    Science.gov (United States)

    Lu, D.; Ricciuto, D. M.; Evans, K. J.

    2017-12-01

    Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive, as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessing these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, and sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce the computational cost through the use of multifidelity approximations. As data-worth analysis involves a great deal of expectation estimation, the resulting cost savings from MLMC can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select the candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the estimation obtained from standard MC. Compared to standard MC, however, MLMC greatly reduces the computational cost of the uncertainty reduction estimation, with savings of up to 600 days of computation when a single processor is used.
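
    As a rough illustration of the estimator structure (a toy problem, not the authors' reservoir code), MLMC telescopes E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l − P_{l−1}], with coarse and fine fidelity levels coupled through common random inputs so that the correction terms have small variance. A minimal sketch in Python:

        import numpy as np

        rng = np.random.default_rng(0)

        def payoff(dw):
            # Toy path functional: average of exp(W(t)) along the discrete path.
            w = np.cumsum(dw, axis=1)
            return np.exp(w).mean(axis=1)

        def sampler(level, n):
            # 2**level time steps; the coarse path reuses the fine increments
            # (pairwise summed), which is what keeps the correction variance small.
            m = 2 ** level
            dw = rng.normal(0.0, np.sqrt(1.0 / m), size=(n, m))
            fine = payoff(dw)
            if level == 0:
                return fine, np.zeros(n)
            return fine, payoff(dw.reshape(n, m // 2, 2).sum(axis=2))

        def mlmc_estimate(levels, n_samples):
            # E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], estimated level by level,
            # with fewer samples on the expensive fine levels.
            est = 0.0
            for level, n in zip(levels, n_samples):
                fine, coarse = sampler(level, n)
                est += (fine - coarse).mean() if level > 0 else fine.mean()
            return est

        print(mlmc_estimate(levels=[0, 1, 2, 3, 4], n_samples=[4000, 2000, 1000, 500, 250]))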

  15. Stochastic modeling

    CERN Document Server

    Lanchier, Nicolas

    2017-01-01

    Three coherent parts form the material covered in this text, portions of which have not been widely covered in traditional textbooks. In this coverage the reader is quickly introduced to several different topics enriched with 175 exercises which focus on real-world problems. Exercises range from the classics of probability theory to more exotic research-oriented problems based on numerical simulations. Intended for graduate students in mathematics and applied sciences, the text provides the tools and training needed to write and use programs for research purposes. The first part of the text begins with a brief review of measure theory and revisits the main concepts of probability theory, from random variables to the standard limit theorems. The second part covers traditional material on stochastic processes, including martingales, discrete-time Markov chains, Poisson processes, and continuous-time Markov chains. The theory developed is illustrated by a variety of examples surrounding applications such as the ...

  16. A Stochastic Sharpening Method for the Propagation of Phase Boundaries in Multiphase Lattice Boltzmann Simulations

    KAUST Repository

    Reis, T.

    2010-09-06

    Existing lattice Boltzmann models that have been designed to recover a macroscopic description of immiscible liquids are only able to make predictions that are quantitatively correct when the interface that exists between the fluids is smeared over several nodal points. Attempts to minimise the thickness of this interface generally lead to a phenomenon known as lattice pinning, the precise cause of which is not well understood. This spurious behaviour is remarkably similar to that associated with the numerical simulation of hyperbolic partial differential equations coupled with a stiff source term. Inspired by the seminal work in this field, we derive a lattice Boltzmann implementation of a model equation used to investigate such peculiarities. This implementation is extended to different spatial discretisations in one and two dimensions. We show that the inclusion of a quasi-random threshold dramatically delays the onset of pinning and faceting.

  17. Applying Statistical Design to Control the Risk of Over-Design with Stochastic Simulation

    Directory of Open Access Journals (Sweden)

    Yi Wu

    2010-02-01

    Full Text Available By comparing a hard real-time system and a soft real-time system, this article highlights the risk of over-design in soft real-time system design. To deal with this risk, a novel concept of statistical design is proposed. Statistical design is the process of accurately accounting for and mitigating the effects of variation in part geometry and other environmental conditions, while at the same time optimizing a target performance factor. However, statistical design can be a very difficult and complex task when using classical mathematical methods. Thus, a simulation methodology to optimize the design is proposed in order to bridge the gap between real-time analysis and optimization for robust and reliable system design.

  18. Stochastic simulation of thermally assisted magnetization reversal in sub-100 nm dots with perpendicular anisotropy

    International Nuclear Information System (INIS)

    Purnama, Budi; Koga, Masashi; Nozaki, Yukio; Matsuyama, Kimihide

    2009-01-01

    Thermally assisted magnetization reversal of sub-100 nm dots with perpendicular anisotropy has been investigated using a micromagnetic Langevin model. The performance of two different reversal modes, (i) a reduced-barrier writing scheme and (ii) a Curie-point writing scheme, is compared. For the reduced-barrier writing scheme, the switching field H_swt decreases with an increase in writing temperature but is still larger than that of the Curie-point writing scheme. For the Curie-point writing scheme, the required threshold field H_th, evaluated from 50 simulation results, saturates at a value that is not simply related to the energy barrier height. The value of H_th increases with a decrease in cooling time, owing to the dynamic aspects of the magnetic ordering process. The dependence of H_th on material parameters and dot sizes has been systematically studied.

  19. Nonlinear Stochastic stability analysis of Wind Turbine Wings by Monte Carlo Simulations

    DEFF Research Database (Denmark)

    Larsen, Jesper Winther; Iwankiewiczb, R.; Nielsen, Søren R.K.

    2007-01-01

    and inertial contributions. A reduced two-degrees-of-freedom modal expansion is used specifying the modal coordinate of the fundamental blade and edgewise fixed base eigenmodes of the beam. The rotating beam is subjected to harmonic and narrow-banded support point motion from the nacelle displacement...... under narrow-banded excitation, and it is shown that the qualitative behaviour of the strange attractor is very similar for the periodic and almost periodic responses, whereas the strange attractor for the chaotic case loses structure as the excitation becomes narrow-banded. Furthermore......, the characteristic behaviour of the strange attractor is shown to be identifiable by the so-called information dimension. Due to the complexity of the coupled nonlinear structural system all analyses are carried out via Monte Carlo simulations....

  20. Policy planning under uncertainty: efficient starting populations for simulation-optimization methods applied to municipal solid waste management.

    Science.gov (United States)

    Huang, Gordon H; Linton, Jonathan D; Yeomans, Julian Scott; Yoogalingam, Reena

    2005-10-01

    Evolutionary simulation-optimization (ESO) techniques can be adapted to model a wide variety of problem types in which system components are stochastic. Grey programming (GP) methods have previously been applied to numerous environmental planning problems containing uncertain information. In this paper, ESO is combined with GP for policy planning to create a hybrid solution approach named GESO. Multiple policy alternatives meeting required system criteria, an approach known as modelling-to-generate-alternatives (MGA), can be quickly and efficiently created by applying GESO to case data. The efficacy of GESO is illustrated using a municipal solid waste management case taken from the regional municipality of Hamilton-Wentworth in the Province of Ontario, Canada. The MGA capability of GESO is especially meaningful for large-scale real-world planning problems, and the practicality of this procedure can easily be extended from MSW systems to many other planning applications containing significant sources of uncertainty.

  1. Digital simulation of an arbitrary stationary stochastic process by spectral representation

    DEFF Research Database (Denmark)

    Yura, Harold T.; Hanson, Steen Grüner

    2011-01-01

    In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set...... of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited...... to auto regressive and or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does...
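
    A minimal sketch of the two-step recipe described here (the spectrum and target marginal below are chosen purely for illustration): color white Gaussian noise in the frequency domain, then push the result through the Gaussian CDF and the inverse CDF of the desired distribution.

        import numpy as np
        from scipy.special import erf

        rng = np.random.default_rng(1)
        n = 2 ** 14

        # Step 1: white Gaussian noise -> colored Gaussian with the desired spectrum.
        white = rng.standard_normal(n)
        H = 1.0 / np.sqrt(1.0 + (np.fft.rfftfreq(n) / 0.05) ** 2)   # assumed low-pass filter
        colored = np.fft.irfft(np.fft.rfft(white) * H, n)
        colored /= colored.std()                                     # unit-variance Gaussian

        # Step 2: Gaussian -> uniform (Gaussian CDF) -> target marginal
        # (here an exponential distribution, via its inverse CDF).
        u = 0.5 * (1.0 + erf(colored / np.sqrt(2.0)))
        samples = -np.log1p(-np.clip(u, 0.0, 1.0 - 1e-12))

    The memoryless nonlinearity of step 2 distorts the spectrum somewhat, which is why the paper's observation that a single application of the inverse transform already yields satisfactory results for a wide class of distributions is the notable point.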

  2. Determining efficient temperature sets for the simulated tempering method

    Science.gov (United States)

    Valentim, A.; da Luz, M. G. E.; Fiore, Carlos E.

    2014-07-01

    In statistical physics, the efficiency of tempering approaches depends strongly on ingredients such as the number of replicas R, reliable determination of weight factors, and the set of temperatures used, T_R = {T_1, T_2, …, T_R}. For simulated tempering (ST) in particular (useful due to its generality and conceptual simplicity), the latter aspect, closely related to the actual R, may be a key issue in problems displaying metastability and trapping in certain regions of the phase space. To determine T_R's leading to accurate thermodynamic estimates while trying to minimize the computational time of the simulation, a fixed exchange frequency scheme is considered here for the ST. Starting from the temperature of interest T_1, successive T's are chosen so that the exchange frequency between any adjacent pair T_r and T_{r+1} has the same value f. By varying the f's and analyzing the T_R's through relatively inexpensive tests (e.g., the time decay towards the steady regime), an optimal situation is determined in which the simulations visit the relevant portions of the phase space much faster and more uniformly. As illustrations, the proposal is applied to three lattice models, BEG, Bell-Lavis, and Potts, in the hard case of extreme first-order phase transitions, always giving very good results, even for R = 3. Comparisons with other protocols (constant entropy and arithmetic progression) for choosing the set T_R are also undertaken. The fixed exchange frequency method is found to be consistently superior, especially for small R's. Finally, distinct instances where the prescription could be helpful (in second-order transitions and for the parallel tempering approach) are briefly discussed.
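
    The fixed exchange frequency prescription can be sketched as a simple root search: starting from T_1, each next temperature is chosen so that the exchange rate with its predecessor equals the target f. A hedged sketch, assuming a user-supplied, monotonically decreasing exchange_rate estimator (in practice obtained from short pilot runs of the system of interest):

        import math

        def next_temperature(T_prev, target_f, exchange_rate, T_max, tol=1e-4):
            # Bisection for T > T_prev with exchange_rate(T_prev, T) == target_f;
            # the rate is assumed to decrease as the temperature gap widens.
            lo, hi = T_prev, T_max
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if exchange_rate(T_prev, mid) > target_f:
                    lo = mid                 # still exchanging too often: widen the gap
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        def build_temperature_set(T1, R, target_f, exchange_rate, T_max):
            temps = [T1]
            for _ in range(R - 1):
                temps.append(next_temperature(temps[-1], target_f, exchange_rate, T_max))
            return temps

        # Stand-in rate model for demonstration only.
        toy_rate = lambda Ta, Tb: math.exp(-50.0 * (1.0 / Ta - 1.0 / Tb) ** 2)
        print(build_temperature_set(1.0, 4, target_f=0.5, exchange_rate=toy_rate, T_max=10.0))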

  3. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    Science.gov (United States)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models, as well as from parent models to child models, in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that the salient small-scale features influencing the larger-scale prediction are transferred back to the larger scale without requiring the live coupling of models. The method allows multiple groundwater flow and transport processes to be modelled using separate groundwater models built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.

  4. A comparative study of a stochastic and deterministic simulation of strong ground motion applied to the Kozani-Grevena (NW Greece) 1995 sequence

    Directory of Open Access Journals (Sweden)

    C. Papaioannou

    2000-06-01

    Full Text Available We present the results of a comparative study of two intrinsically different methodologies, a stochastic one and a deterministic one, performed to simulate strong ground motion in the Kozani area (NW Greece). Source parameters were calculated from empirical relations in order to check their reliability, in combination with the applied methodologies, for simulating future events. Strong ground motion from the Kozani mainshock (13 May 1995, M_w = 6.5) was synthesized by using both the stochastic method for finite-fault cases and the empirical Green's function method. The latter method was also applied to simulate an M_w = 5.1 aftershock (19 May 1995). The results of the two simulations computed for the mainshock are quite satisfactory for both methodologies at the frequencies of engineering interest (> ~2 Hz). This strengthens the idea of incorporating proper empirical relations for the estimation of source parameters in a priori simulations of strong ground motion from future earthquakes. Nevertheless, the results of the simulation of the smaller earthquake point out the need for further investigation of regional or, if possible, local relations for estimating source parameters at smaller magnitude ranges.

  5. First-passage dynamics of linear stochastic interface models: numerical simulations and entropic repulsion effect

    Science.gov (United States)

    Gross, Markus

    2018-03-01

    A fluctuating interfacial profile in one dimension is studied via Langevin simulations of the Edwards–Wilkinson equation with non-conserved noise and the Mullins–Herring equation with conserved noise. The profile is subject to either periodic or Dirichlet (no-flux) boundary conditions. We determine the noise-driven time-evolution of the profile between an initially flat configuration and the instant at which the profile reaches a given height M for the first time. The shape of the averaged profile agrees well with the prediction of weak-noise theory (WNT), which describes the most-likely trajectory to a fixed first-passage time. Furthermore, in agreement with WNT, on average the profile approaches the height M algebraically in time, with an exponent that is essentially independent of the boundary conditions. However, the actual value of the dynamic exponent turns out to be significantly smaller than predicted by WNT. This ‘renormalization’ of the exponent is explained in terms of the entropic repulsion exerted by the impenetrable boundary on the fluctuations of the profile around its most-likely path. The entropic repulsion mechanism is analyzed in detail for a single (fractional) Brownian walker, which describes the anomalous diffusion of a tagged monomer of the interface as it approaches the absorbing boundary. The present study sheds light on the accuracy and the limitations of the weak-noise approximation for the description of the full first-passage dynamics.

  6. Stochastic analysis of the efficiency of coupled hydraulic-physical barriers to contain solute plumes in highly heterogeneous aquifers

    Science.gov (United States)

    Pedretti, Daniele; Masetti, Marco; Beretta, Giovanni Pietro

    2017-10-01

    The expected long-term efficiency of vertical cutoff walls coupled to pump-and-treat technologies to contain solute plumes in highly heterogeneous aquifers was analyzed. A well-characterized case study in Italy, with a hydrogeological database of 471 results from hydraulic tests performed on the aquifer and the surrounding 2-km-long cement-bentonite (CB) walls, was used to build a conceptual model and assess a representative remediation site adopting coupled technologies. In the studied area, the aquifer hydraulic conductivity K_a [m/d] is log-normally distributed with mean E(Y_a) = 0.32 and variance σ²_Ya = 6.36 (Y_a = ln K_a), and its spatial correlation is well described by an exponential isotropic variogram with an integral scale less than 1/12 of the domain size. The hardened CB wall's hydraulic conductivity, K_w [m/d], displayed strong scaling effects and a lognormal distribution with mean E(Y_w) = -3.43 and variance σ²_Yw = 0.53 (Y_w = log10 K_w). No spatial correlation of K_w was detected. Using this information, conservative transport was simulated across a CB wall in spatially correlated 1-D random Y_a fields within a numerical Monte Carlo framework. Multiple scenarios representing different K_w values were tested. A continuous solute source with known concentration and deterministic drain discharge rates were assumed. The efficiency of the confining system was measured by the probability of exceedance of concentration over a threshold (C*) at a control section 10 years after the initial solute release. It was found that the stronger the aquifer heterogeneity, the higher the expected efficiency of the confinement system and the lower the likelihood of aquifer pollution. This behavior can be explained because, for the analyzed aquifer conditions, a lower K_a generates a more pronounced drawdown of the water table in the proximity of the drain and consequently a higher advective flux towards the confined area, which counteracts diffusive fluxes across the walls. Thus, a higher σ²_Ya results …

  7. 2-D particle-in-cell simulations of high efficiency klystrons

    CERN Document Server

    Constable, David A; Burt, Graeme; Syratchev, Igor; Marchesin, Rodolphe; Baikov, Andrey Yu; Kowalczyk, Richard

    2016-01-01

    Currently, klystrons employing monotonic bunching offer efficiencies on the order of 70%. Through the use of the core oscillation electron bunching mechanism, numerical simulations have predicted klystrons with efficiencies up to 90%. In this paper, we present PIC simulations of such geometries operating at a frequency of 800 MHz, with efficiencies up to 83% predicted thus far.

  8. Efficient parallel CFD-DEM simulations using OpenMP

    Science.gov (United States)

    Amritkar, Amit; Deb, Surya; Tafti, Danesh

    2014-01-01

    The paper describes parallelization strategies for the Discrete Element Method (DEM) used for simulating dense particulate systems coupled to Computational Fluid Dynamics (CFD). While the field equations of CFD are best parallelized by spatial domain decomposition techniques, the N-body particulate phase is best parallelized over the number of particles. When the two are coupled together, both modes are needed for efficient parallelization. It is shown that under these requirements, OpenMP thread-based parallelization has advantages over MPI processes. Two representative examples, fairly typical of dense fluid-particulate systems, are investigated, including the validation of the DEM-CFD and thermal-DEM implementations against experiments. Fluidized bed calculations are performed on beds with uniform particle loading, parallelized with MPI and OpenMP. It is shown that as the number of processing cores and the number of particles increase, the communication overhead of building ghost particle lists at processor boundaries dominates the time to solution, and OpenMP, which does not require this step, is about twice as fast as MPI. In rotary kiln heat transfer calculations, which are characterized by spatially non-uniform particle distributions, the low overhead of switching the parallelization mode in OpenMP eliminates the load imbalances but introduces increased overheads in fetching non-local data. In spite of this, OpenMP is shown to be between 50% and 90% faster than MPI.

  9. A stochastic pseudospectral and T-matrix algorithm for acoustic scattering by a class of multiple particle configurations

    International Nuclear Information System (INIS)

    Ganesh, M.; Hawkins, S.C.

    2013-01-01

    We consider absorption and scattering of acoustic waves from uncertain configurations comprising multiple two-dimensional bodies with various material properties (sound-soft, sound-hard, absorbing and penetrable) and develop tools to address the problem of quantifying uncertainties in the acoustic cross sections of the configurations. The uncertainty arises because the locations and orientations of the particles in the configurations are described through random variables, and statistical moments of the far-fields induced by the stochastic configurations facilitate quantification of the uncertainty. We develop an efficient algorithm, based on a hybrid of the stochastic pseudospectral discretization (to truncate the infinite dimensional stochastic process) and an efficient stable truncated version of Waterman's T-matrix approach (for cost-effective realization at each multiple particle configuration corresponding to the pseudospectral quadrature points), to simulate the statistical properties of the stochastic model. We demonstrate the efficiency of the algorithm for configurations with non-smooth and non-convex bodies with distinct material properties, and random locations and orientations with normal and log-normal distributions.
    Highlights:
    ► Uncertainty quantification (UQ) of stochastic multiple scattering models is considered.
    ► A novel hybrid algorithm combining deterministic and stochastic methods is developed.
    ► An exponentially accurate, stable, a priori estimate based T-matrix method is used.
    ► The stochastic approximation is a spectrally accurate discrete polynomial chaos method.
    ► Multiple stochastic particle simulations highlight the efficiency of the UQ algorithm.

  10. Stochastic wave-function simulation of irreversible emission processes for open quantum systems in a non-Markovian environment

    Science.gov (United States)

    Polyakov, Evgeny A.; Rubtsov, Alexey N.

    2018-02-01

    When conducting the numerical simulation of quantum transport, the main obstacle is the rapid growth of the dimension of the entangled Hilbert subspace. Quantum Monte Carlo simulation techniques, while capable of treating problems of high dimension, are hindered by the so-called "sign problem". In quantum transport, there is a fundamental asymmetry between the processes of emission and absorption of environment excitations: the emitted excitations are rapidly and irreversibly scattered away, and only a small part of them is absorbed back by the open subsystem, thus exercising the non-Markovian self-action of the subsystem onto itself. We were able to devise a method for the exact simulation of the dominant quantum emission processes, while taking into account the small backaction effects in an approximate, self-consistent way. Such an approach allows us to efficiently conduct simulations of the real-time dynamics of small quantum subsystems immersed in a non-Markovian bath for large times, reaching the quasistationary regime. As an example, we calculate the spatial quench dynamics of the Kondo cloud for a bosonized Kondo impurity model.

  11. An efficient non-hydrostatic dynamical core for high-resolution simulations down to the urban scale

    International Nuclear Information System (INIS)

    Bonaventura, L.; Cesari, D.

    2005-01-01

    Numerical simulations of idealized stratified flows over obstacles at different spatial scales demonstrate the very general applicability and the parallel efficiency of a new non-hydrostatic dynamical core for the simulation of mesoscale flows over complex terrain.

  12. Efficient conformational space exploration in ab initio protein folding simulation.

    Science.gov (United States)

    Ullah, Ahammed; Ahmed, Nasif; Pappu, Subrata Dey; Shatabda, Swakkhar; Ullah, A Z M Dayem; Rahman, M Sohel

    2015-08-01

    Ab initio protein folding simulation largely depends on knowledge-based energy functions that are derived from known protein structures using statistical methods. These knowledge-based energy functions provide a good approximation of real protein energetics. However, such energy functions are not very informative for search algorithms and fail to distinguish the types of amino acid interactions that contribute largely to the energy function from those that do not. As a result, search algorithms frequently get trapped in local minima. On the other hand, the hydrophobic-polar (HP) model considers hydrophobic interactions only; the simplified nature of the HP energy function limits it to a low-resolution model. In this paper, we present a strategy to derive a non-uniformly scaled version of the real 20×20 pairwise energy function. The non-uniform scaling helps tackle the difficulty faced by a real energy function, whereas the integration of 20×20 pairwise information overcomes the limitations of the HP energy function. We apply the derived energy function with a genetic algorithm on discrete lattices. On a standard set of benchmark protein sequences, our approach significantly outperforms the state-of-the-art methods for similar models and is able to explore regions of the conformational space which all previous methods have failed to explore. The effectiveness of the derived energy function is demonstrated by showing qualitative differences and similarities between the sampled structures and the native structures. The number of objective function evaluations in a single run of the algorithm is used as a comparison metric to demonstrate efficiency.

  13. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    Science.gov (United States)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms to model fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the computational resources of the cluster using a greater number of computational cores. Simulation results indicate that if the number of cores used is not a multiple of the number of cores per cluster node, there are allocation strategies which provide more efficient calculations.

  14. Stochastic simulation of time-series models combined with geostatistics to predict water-table scenarios in a Guarani Aquifer System outcrop area, Brazil

    Science.gov (United States)

    Manzione, Rodrigo L.; Wendland, Edson; Tanikawa, Diego H.

    2012-11-01

    Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied to a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage like the GAS.

  15. Efficient performance simulation of class D amplifier output stages

    DEFF Research Database (Denmark)

    Nyboe, Flemming; Risbo, Lars; Andreani, Pietro

    2005-01-01

    Straightforward simulation of amplifier distortion involves transient simulation of operation on a sine wave input signal, and a subsequent FFT of the output voltage. This approach is very slow on class D amplifiers, since the switching behavior forces simulation time steps that are many orders...... of magnitude smaller than the duration of one period of an audio sine wave. This work presents a method of simulating the amplifier transfer characteristic using a minimum amount of simulation time, and then deriving THD from the results....

  16. 3D discrete angiogenesis dynamic model and stochastic simulation for the assessment of blood perfusion coefficient and impact on heat transfer between nanoparticles and malignant tumors.

    Science.gov (United States)

    Yifat, Jonathan; Gannot, Israel

    2015-03-01

    Early detection of malignant tumors plays a crucial role in the patient's chances of survival. Therefore, new and innovative tumor detection methods are constantly being sought. Tumor-specific magnetic-core nanoparticles can be used with an alternating magnetic field to detect and treat tumors by hyperthermia. To analyze the effectiveness of the method, the bio-heat transfer between the nanoparticles and the tissue must be carefully studied. Heat diffusion in biological tissue is usually analyzed using the Pennes bio-heat equation, in which blood perfusion plays an important role. Malignant tumors are known to initiate an angiogenesis process, where endothelial cell migration from neighboring vasculature eventually leads to the formation of a thick blood capillary network around them. This process allows the tumor to meet its extensive nutrition demands and evolve into a more progressive and potentially fatal tumor. In order to assess the effect of angiogenesis on the bio-heat transfer problem, we have developed a discrete stochastic 3D model and simulation of tumor-induced angiogenesis. The model extends other angiogenesis models by providing high-resolution 3D stochastic simulation, capture of fine angiogenesis morphological features, the effects of dynamic sprout thickness functions, and a stochastic parent vessel generator. We show that the angiogenesis realizations produced are well suited for numerical bio-heat transfer analysis. A statistical study of the angiogenesis characteristics was performed using Monte Carlo simulations. Based on the statistical analysis, we provide an analytical expression for the blood perfusion coefficient in the Pennes equation as a function of several parameters. This updated form of the Pennes equation could be used for numerical and analytical analyses of the proposed detection and treatment method.
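
    For reference, the Pennes bio-heat equation mentioned above has the standard form (notation ours):

        \rho c \,\frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + \rho_b c_b \,\omega_b \,(T_a - T) + Q_{\mathrm{met}} + Q_{\mathrm{ext}},

    where \omega_b is the blood perfusion coefficient for which the study derives an analytical expression, T_a is the arterial blood temperature, and Q_{\mathrm{ext}} would here be the heat deposited by the magnetically excited nanoparticles.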

  17. The Stochastic-Deterministic Transition in Discrete Fracture Network Models and its Implementation in a Safety Assessment Application by Means of Conditional Simulation

    Science.gov (United States)

    Selroos, J. O.; Appleyard, P.; Bym, T.; Follin, S.; Hartley, L.; Joyce, S.; Munier, R.

    2015-12-01

    In 2011 the Swedish Nuclear Fuel and Waste Management Company (SKB) applied for a license to start construction of a final repository for spent nuclear fuel at Forsmark in northern Uppland, Sweden. The repository is to be built at approximately 500 m depth in crystalline rock. A stochastic discrete fracture network (DFN) concept was chosen for interpreting the surface-based (incl. borehole) data and for assessing the safety of the repository in terms of groundwater flow and flow pathways to and from the repository. Once repository construction starts, underground data such as tunnel pilot borehole and tunnel trace data will also become available. It is deemed crucial that DFN models developed at this stage honor the mapped structures both in terms of location and geometry and in terms of flow characteristics. The originally fully stochastic models will thus become increasingly deterministic towards the repository. Applying the adopted probabilistic framework, predictive modeling to support acceptance criteria for layout and disposal can be performed with the goal of minimizing the risks associated with the repository. This presentation describes and illustrates various methodologies that have been developed to condition stochastic realizations of fracture networks around underground openings using borehole and tunnel trace data, as well as hydraulic measurements of inflows or hydraulic interference tests. The methodologies, implemented in the numerical simulators ConnectFlow and FracMan/MAFIC, are described in some detail, and verification tests and realistic example cases are shown. Specifically, geometric and hydraulic data are obtained from numerical synthetic realities approximating Forsmark conditions and are used to test the constraining power of the developed methodologies by conditioning unconditional DFN simulations following the same underlying fracture network statistics. Various metrics are developed to assess how well the conditional simulations compare to …

  18. Markov random fields simulation: an introduction to the stochastic modelling of petroleum reservoirs; Simulacao de campos aleatorios markovianos: uma introducao voltada a modelagem estocastica de reservatorios de petroleo

    Energy Technology Data Exchange (ETDEWEB)

    Saldanha Filho, Paulo Carlos

    1998-02-01

    Stochastic simulation has been employed in petroleum reservoir characterization as a modeling tool able to reconcile information from several different sources. It has the ability to preserve the variability of the modeled phenomena and permits the transfer of geological knowledge to numerical flow models, whose predictions of reservoir behavior constitute the main basis for reservoir management decisions. Several stochastic models have been used and/or suggested, depending on the nature of the phenomena to be described. Markov random fields (MRFs) appear as an alternative for the modeling of discrete variables, mainly reservoirs with a mosaic architecture of facies. In this dissertation, the reader is introduced to stochastic modeling by MRFs in a generic sense. The main aspects of the technique are reviewed. The conceptual background of MRFs is described: their characterization through the Markovian property and the equivalence to Gibbs distributions. The framework for generic modeling of MRFs is described. The classical models of Ising and Potts-Strauss are specified in this context and are related to models used in petroleum reservoir characterization. The problem of parameter estimation is discussed, and the maximum pseudolikelihood estimators for some models are presented. Estimators for two models useful for reservoir characterization are developed and represent a new contribution to the subject. Five algorithms for the conditional simulation of MRFs are described: the Metropolis algorithm, the algorithm of Geman and Geman (the Gibbs sampler), the algorithm of Swendsen-Wang, the algorithm of Wolff, and the algorithm of Flinn. Finally, examples of simulation for some of the models discussed are presented, along with their implications for the modeling of petroleum reservoirs. (author)
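
    To make the connection concrete, the Metropolis algorithm for the simplest of the models named above, the Ising MRF, reduces to a few lines (a generic textbook sketch, not the dissertation's reservoir-scale implementation):

        import numpy as np

        def metropolis_ising(n, beta, steps, rng):
            # Metropolis sampling of an n x n Ising Markov random field
            # with periodic boundaries and coupling J = 1.
            s = rng.choice([-1, 1], size=(n, n))
            for _ in range(steps):
                i, j = rng.integers(n, size=2)
                nb = (s[(i + 1) % n, j] + s[(i - 1) % n, j]
                      + s[i, (j + 1) % n] + s[i, (j - 1) % n])
                dE = 2.0 * s[i, j] * nb              # energy change of flipping s[i, j]
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    s[i, j] = -s[i, j]               # accept the single-site flip
            return s

        lattice = metropolis_ising(64, beta=0.44, steps=200_000,
                                   rng=np.random.default_rng(2))

    Conditional simulation, as used for reservoir facies, amounts to clamping the sites where observations exist and updating only the free sites.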

  19. Stochastic Approximation

    Indian Academy of Sciences (India)

    IAS Admin

    V. S. Borkar is the Institute Chair Professor of Electrical Engineering at IIT Bombay. His research interests are stochastic optimization: theory, algorithms and applications. Stochastic approximation is one of the unsung …

  20. Assessment of Stochastic Capacity Consumption in Railway Networks

    DEFF Research Database (Denmark)

    Jensen, Lars Wittrup; Landex, Alex; Nielsen, Otto Anker

    2015-01-01

    The railway industry continuously strives to reduce costs and utilise resources optimally. Thus, there is a demand for tools that can quickly and efficiently provide decision-makers with solutions that help them achieve their goals. In strategic planning of capacity, this translates...... in networks where a timetable is not needed as input. We account for robustness using a stochastic simulation of delays to obtain the stochastic capacity consumption in a network. The model is used on a case network where four different infrastructure scenarios are considered, and both deterministic...... and stochastic capacity consumption results are obtained efficiently. The case study shows that the results of capacity analysis depend on the size of the network considered. Furthermore, we find that the capacity gain in the case scenarios is greater when delays are considered compared to a deterministic...

  1. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan

    2015-04-01

    Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms.
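
    The SRM dynamics such a circuit emulates can be summarized in software: the membrane potential filters the input, and in each small time step the neuron fires with probability 1 − exp(−ρ(u)Δt), where the escape rate ρ(u) grows exponentially above threshold. A minimal sketch with illustrative parameter values (the escape-noise form is a standard SRM variant, not the paper's specific circuit model):

        import numpy as np

        def srm_spike_train(inputs, dt=1e-3, tau=0.02, theta=1.0, delta_u=0.1,
                            rho0=100.0, rng=np.random.default_rng(3)):
            # Stochastic spike response model with exponential escape noise:
            # rho(u) = rho0 * exp((u - theta) / delta_u); u resets after a spike.
            u, spikes = 0.0, []
            for step, x in enumerate(inputs):
                u += dt * (-u / tau + x)                 # leaky integration of the input
                rho = rho0 * np.exp((u - theta) / delta_u)
                if rng.random() < 1.0 - np.exp(-rho * dt):
                    spikes.append(step * dt)
                    u = 0.0
            return spikes

        spikes = srm_spike_train(np.full(5000, 60.0))    # constant drive, 5 s at 1 ms steps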

  2. On asymptotically efficient simulation of large deviation probabilities.

    NARCIS (Netherlands)

    Dieker, A.B.; Mandjes, M.R.H.

    2005-01-01

    Consider a family of probabilities for which the decay is governed by a large deviation principle. To find an estimate for a fixed member of this family, one is often forced to use simulation techniques. Direct Monte Carlo simulation, however, is often impractical, particularly if the …

  3. Efficient simulation of periodically forced reactors with radial gradients

    NARCIS (Netherlands)

    van de Rotten, Bart A.; Verduyn Lunel, Sjoerd M.; Bliek, A.

    2006-01-01

    The aim of this paper is to present a limited-memory iterative method, called the Broyden Rank Reduction method, to simulate periodically forced processes in plug-flow reactors with radial gradients taken into account. The simulation of periodically forced processes in plug-flow reactors leads to …

  4. An Efficient Modeling and Simulation of Quantum Key Distribution Protocols Using OptiSystem™

    OpenAIRE

    Abudhahir Buhari; Zuriati Ahmad Zukarnain; Shamla K. Subramaniam; Hishamuddin Zainuddin; Suhairi Saharudin

    2012-01-01

    In this paper, we propose a modeling and simulation framework for quantum key distribution protocols using the commercial photonic simulator OptiSystem™. This simulation framework emphasizes the experimental components of quantum key distribution. We simulate BB84 operation with several security attack scenarios and noise-immune key distribution in this work. We also investigate the efficiency of the simulator's in-built photonic components in terms of experimental configuration. This simulation provides …

  5. Stochastic dynamics and combinatorial optimization

    Science.gov (United States)

    Ovchinnikov, Igor V.; Wang, Kang L.

    2017-11-01

    Natural dynamics is often dominated by sudden nonlinear processes such as neuroavalanches, gamma-ray bursts, solar flares, etc., that exhibit scale-free statistics much in the spirit of the logarithmic Richter scale for earthquake magnitudes. On phase diagrams, stochastic dynamical systems (DSs) exhibiting this type of dynamics belong to the finite-width phase (N-phase for brevity) that precedes ordinary chaotic behavior and that is known under such names as noise-induced chaos, self-organized criticality, and dynamical complexity. Within the recently proposed supersymmetric theory of stochastic dynamics, the N-phase can be roughly interpreted as the noise-induced "overlap" between integrable and chaotic deterministic dynamics. As a result, N-phase dynamics inherits the properties of both. Here, we analyze this unique set of properties and conclude that N-phase DSs must naturally be the most efficient optimizers: on one hand, N-phase DSs have integrable flows with well-defined attractors that can be associated with candidate solutions and, on the other hand, the noise-induced attractor-to-attractor dynamics in the N-phase is effectively chaotic or aperiodic, so that a DS must avoid revisiting solutions/attractors, thus accelerating the search for the best solution. Based on this understanding, we propose a method for stochastic dynamical optimization using N-phase DSs. This method can be viewed as a hybrid of the simulated and chaotic annealing methods. Our proposition can result in a new generation of hardware devices for the efficient solution of various search and/or combinatorial optimization problems.

  6. Stochastic processes

    CERN Document Server

    Parzen, Emanuel

    1962-01-01

    Well-written and accessible, this classic introduction to stochastic processes and related mathematics is appropriate for advanced undergraduate students of mathematics with a knowledge of calculus and continuous probability theory. The treatment offers examples of the wide variety of empirical phenomena for which stochastic processes provide mathematical models, and it develops the methods of probability model-building.Chapter 1 presents precise definitions of the notions of a random variable and a stochastic process and introduces the Wiener and Poisson processes. Subsequent chapters examine

  7. Stochastic integrals

    CERN Document Server

    McKean, Henry P

    2005-01-01

    This little book is a brilliant introduction to an important boundary field between the theory of probability and differential equations. -E. B. Dynkin, Mathematical Reviews This well-written book has been used for many years to learn about stochastic integrals. The book starts with the presentation of Brownian motion, then deals with stochastic integrals and differentials, including the famous Itô lemma. The rest of the book is devoted to various topics of stochastic integral equations, including those on smooth manifolds. Originally published in 1969, this classic book is ideal for supplemen

  8. Mathematic simulation of high-efficiency process of grain cleaning

    Science.gov (United States)

    Galyautdinova, Y. V.; Gaysin, I. A.; Samigullin, A. D.; Samigullina, A. R.; Galyautdinov, R. R.

    2017-09-01

    The article presents the results of a field experiment and of a computer simulation of the grain cleaning process in the pneumosorting machine PSM-0,5, and compares the results of the two methods of research.

  9. Molecular Simulation towards Efficient and Representative Subsurface Reservoirs Modeling

    KAUST Repository

    Kadoura, Ahmad

    2016-09-01

    This dissertation focuses on the application of Monte Carlo (MC) molecular simulation and Molecular Dynamics (MD) in modeling the thermodynamics and flow of subsurface reservoir fluids. First, MC molecular simulation is proposed as a promising method to replace correlations and equations of state in subsurface flow simulators. In order to accelerate MC simulations, a set of early rejection schemes (conservative, hybrid, and non-conservative) was developed, in addition to extrapolation methods based on reweighting and reconstruction of pre-generated MC Markov chains. Furthermore, an extensive study was conducted to investigate sorption and transport processes of methane, carbon dioxide, water, and their mixtures in the inorganic part of shale using both MC and MD simulations. These simulations covered a wide range of thermodynamic conditions, pore sizes, and fluid compositions, shedding light on several interesting findings, for example, the possibility of adsorbing more carbon dioxide at higher preadsorbed water concentrations for relatively large basal spacings. The dissertation is divided into four chapters. The first chapter is the introductory part, where a brief background on molecular simulation and the motivations are given. The second chapter is devoted to discussing the theoretical aspects and methodology of the proposed MC speed-up techniques, in addition to the corresponding results, leading to the successful multi-scale simulation of the compressible single-phase flow scenario. In chapter 3, the results of our extensive study on shale gas at laboratory conditions are reported. In the fourth and last chapter, we end the dissertation with a few concluding remarks highlighting the key findings and summarizing the future directions.

  10. Efficient magnetohydrodynamic simulations on graphics processing units with CUDA

    Science.gov (United States)

    Wong, Hon-Cheng; Wong, Un-Hong; Feng, Xueshang; Tang, Zesheng

    2011-10-01

    Magnetohydrodynamic (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive, and Beowulf clusters or even supercomputers are often used to run the codes that implement these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the best of the authors' knowledge, the first implementation of MHD simulations entirely on GPUs with CUDA, named GPU-MHD, to accelerate the simulation process. GPU-MHD supports both single and double precision computations. A series of numerical tests have been performed to validate the correctness of our code, and an accuracy evaluation comparing single and double precision computation results is also given. Performance measurements of both single and double precision are conducted on both the NVIDIA GeForce GTX 295 (GT200 architecture) and GTX 480 (Fermi architecture) graphics cards. These measurements show that our GPU-based implementation achieves between one and two orders of magnitude of improvement, depending on the graphics card used, the problem size, and the precision, compared to the original serial CPU MHD implementation. In addition, we extend GPU-MHD to support the visualization of the simulation results, so that the whole MHD simulation and visualization process can be performed entirely on GPUs.

  11. RITRACKS: A Software for Simulation of Stochastic Radiation Track Structure, Micro and Nanodosimetry, Radiation Chemistry and DNA Damage for Heavy Ions

    Science.gov (United States)

    Plante, I; Wu, H

    2014-01-01

    The code RITRACKS (Relativistic Ion Tracks) has been developed over the last few years at the NASA Johnson Space Center to simulate the effects of ionizing radiations at the microscopic scale, in order to understand the effects of space radiation at the biological level. The fundamental part of this code is the stochastic simulation of the radiation track structure of heavy ions, an important component of space radiation. The code can calculate many relevant quantities such as the radial dose and voxel dose, and may also be used to calculate the dose in spherical and cylindrical targets of various sizes. Recently, we have incorporated DNA structure and damage simulations at the molecular scale into RITRACKS. The direct effect of radiation is simulated by introducing a slight modification of the existing particle transport algorithms, using the Binary-Encounter-Bethe model of ionization cross sections for each molecular orbital of DNA. The simulation of radiation chemistry is done by a step-by-step diffusion-reaction program based on the Green's functions of the diffusion equation. This approach is also used to simulate the indirect effect of ionizing radiation on DNA. The software can be installed independently on PCs and tablets running the Windows operating system and does not require any coding from the user. It includes a Graphic User Interface (GUI) and a 3D OpenGL visualization interface. The calculations are executed simultaneously (in parallel) on multiple CPUs. The main features of the software will be presented.

  12. A Monte Carlo method for the simulation of coagulation and nucleation based on weighted particles and the concepts of stochastic resolution and merging

    Energy Technology Data Exchange (ETDEWEB)

    Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.

    2017-07-01

    Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow the formulation of a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented within the scope of a constant-number scheme: low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in (1) the free-molecular regime or (2) the continuum regime are simulated for this purpose.
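
    The flavor of the low-weight merging step can be conveyed in a few lines: when nucleation would push the particle count above the budget, the two simulation particles with the smallest statistical weights are merged into one that conserves total number-weight and total mass (an illustrative sketch only; the paper's actual scheme and conserved property set may differ):

        import heapq

        def merge_lowest(particles):
            # particles: list of (weight, volume) weighted simulation particles.
            (w1, v1), (w2, v2) = heapq.nsmallest(2, particles)
            particles.remove((w1, v1))
            particles.remove((w2, v2))
            w = w1 + w2                        # conserve total number-weight
            v = (w1 * v1 + w2 * v2) / w        # conserve total mass (weighted mean volume)
            particles.append((w, v))
            return particles

        print(merge_lowest([(1.0, 2.0), (0.1, 1.0), (0.2, 3.0)]))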

  13. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: Case study

    International Nuclear Information System (INIS)

    Bieda, Bogusław

    2014-01-01

    The purpose of the paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to the life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the CrystalBall® (CB) software, which works with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. In order to describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and that it can be applied to any steel plant. The results obtained from this study can help practitioners and decision-makers in steel production management. - Highlights: • The benefits of Monte Carlo simulation are examined. • The normal probability distribution is studied. • The LCI data on the Mittal Steel Poland (MSP) complex in Kraków, Poland date back to 2005. • This is the first assessment of the LCI uncertainties in the Polish steel industry.
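
    The Monte Carlo propagation itself is straightforward to reproduce in outline. The Python sketch below samples normally distributed product streams and pushes them through a toy inventory indicator over 10,000 trials, mirroring the structure (not the data) of the CB analysis; all means, standard deviations, and emission factors are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(42)
        N_TRIALS = 10_000                 # same trial count as reported in the study

        # hypothetical (mean, std) production figures in kilotonnes; the real MSP
        # inventory data are not reproduced here
        inputs = {"steel": (5000.0, 250.0), "coke": (1200.0, 60.0), "sinter": (4000.0, 200.0)}
        samples = {k: rng.normal(m, s, N_TRIALS) for k, (m, s) in inputs.items()}

        # toy inventory indicator: total emissions as a weighted sum of the product
        # streams, with made-up emission factors per kilotonne
        factors = {"steel": 1.8, "coke": 0.5, "sinter": 0.2}
        emissions = sum(factors[k] * samples[k] for k in inputs)

        print(f"mean = {emissions.mean():.0f}, 90% interval = "
              f"[{np.percentile(emissions, 5):.0f}, {np.percentile(emissions, 95):.0f}]")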

  14. HIGH JET EFFICIENCY AND SIMULATIONS OF BLACK HOLE MAGNETOSPHERES

    International Nuclear Information System (INIS)

    Punsly, Brian

    2011-01-01

    This Letter reports on a growing body of observational evidence that many powerful lobe-dominated (FR II) radio sources likely have jets with high efficiency. This study extends the maximum efficiency line (jet power ∼25 times the thermal luminosity) defined in Fernandes et al. so as to span four decades of jet power. The fact that this line extends over the full span of FR II radio power is a strong indication that this is a fundamental property of jet production that is independent of accretion power. This is a valuable constraint for theorists. For example, the currently popular 'no-net-flux' numerical models of black hole accretion produce jets that are two to three orders of magnitude too weak to be consistent with sources near maximum efficiency.

  15. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2005-01-01

    Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions that are insensitive to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision-theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximate solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, and differentiation of probability and mean value functions. Convergence results for the resulting iterative solution procedures are given.
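
    As a small worked example of the stochastic approximation methods listed above, the Python sketch below runs a Robbins-Monro iteration on a noisy quadratic objective; the objective and the 1/n step-size schedule are illustrative textbook choices, not taken from the book.

        import numpy as np

        rng = np.random.default_rng(0)

        # minimize E[(x - xi)^2] with xi ~ N(2, 1); the optimum is x* = E[xi] = 2,
        # but only noisy gradient samples g = 2*(x - xi) are available
        x = 0.0
        for n in range(1, 10_001):
            xi = rng.normal(2.0, 1.0)
            g = 2.0 * (x - xi)   # unbiased estimate of the true gradient 2*(x - 2)
            x -= g / n           # Robbins-Monro steps a_n = 1/n: sum a_n diverges, sum a_n^2 converges

        print(f"estimate after 10000 steps: {x:.3f} (true optimum 2.0)")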

  16. Design and simulations of highly efficient single-photon sources

    DEFF Research Database (Denmark)

    Gregersen, Niels; de Lasson, Jakob Rosenkrantz; Mørk, Jesper

    The realization of the highly-efficient single-photon source represents not only an experimental, but also a numerical challenge. We will present the theory of the waveguide QED approach, the design challenges and the current limitations. Additionally, the important numerical challenges in the si...

  17. Simulation and Efficient Measurements of Intensities for Complex Imaging Sequences

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Rasmussen, Morten Fischer; Stuart, Matthias Bo

    2014-01-01

    on the sequence to simulate both intensity and mechanical index (MI) according to FDA rules. A 3 MHz BK Medical 8820e convex array transducer is used with the SARUS scanner. An Onda HFL-0400 hydrophone and the Onda AIMS III system measure the pressure field for three imaging schemes: a fixed focus, single...

  18. Efficient dynamic simulation of flexible link manipulators with PID control

    NARCIS (Netherlands)

    Aarts, Ronald G.K.M.; Jonker, Jan B.; Mook, D.T.; Balachandran, B.

    2001-01-01

    For accurate simulations of the dynamic behavior of flexible manipulators the combination of a perturbation method and modal analysis is proposed. First, the vibrational motion is modeled as a first-order perturbation of a nominal rigid link motion. The vibrational motion is then described by a set

  1. Efficient Monte Carlo Simulations of Gas Molecules Inside Porous Materials.

    Science.gov (United States)

    Kim, Jihan; Smit, Berend

    2012-07-10

    Monte Carlo (MC) simulations are commonly used to obtain adsorption properties of gas molecules inside porous materials. In this work, we discuss various optimization strategies that lead to faster MC simulations, using CO2 gas molecules inside host zeolite structures as a test system. The reciprocal-space contribution of the gas-gas Ewald summation and both the direct and the reciprocal gas-host potential energy interactions are stored inside energy grids to reduce the wall time of the MC simulations. Additional speedup can be obtained by selectively calling the routine that computes the gas-gas Ewald summation, which does not impact the accuracy of the zeolite's adsorption characteristics. We utilize a two-level density-biased sampling technique in the grand canonical Monte Carlo (GCMC) algorithm to restrict CO2 insertion moves to low-energy regions within the zeolite materials and thereby accelerate convergence. Finally, we make use of graphics processing unit (GPU) hardware to conduct multiple MC simulations in parallel by judiciously mapping the GPU threads to the available workload. As a result, we can obtain a CO2 adsorption isotherm curve with 14 pressure values (up to 10 atm) for a zeolite structure within a minute of total compute wall time.
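
    The grid-lookup idea at the heart of these optimizations can be sketched compactly: tabulate the gas-host energy once, then let GCMC insertion/deletion moves read the table instead of summing over framework atoms. The Python fragment below is a heavily simplified illustration, not the authors' code; the host potential is made up, and the fugacity and de Broglie prefactors are absorbed into a single hypothetical constant B.

        import numpy as np

        rng = np.random.default_rng(7)

        L, NG = 10.0, 32                # box edge (Angstrom) and grid points per axis
        BETA = 1.0 / (0.008314 * 300.0) # 1/(kB*T) in mol/kJ at 300 K
        B = 5.0                         # exp(beta*mu)*V/Lambda^3 folded into one constant

        # energy grid: a made-up smooth host potential (kJ/mol); in a real code this
        # table is filled once from the zeolite framework atoms
        x = np.linspace(0.0, L, NG, endpoint=False)
        X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
        grid = -20.0 * np.sin(np.pi * X / L) ** 2 * np.sin(np.pi * Y / L) ** 2

        def host_energy(p):
            """Nearest-grid-point lookup replaces a full sum over framework atoms."""
            i, j, k = (np.asarray(p) / L * NG).astype(int) % NG
            return grid[i, j, k]

        molecules = []
        for step in range(50_000):
            if rng.random() < 0.5:                       # insertion attempt
                p = rng.uniform(0.0, L, 3)
                acc = B / (len(molecules) + 1) * np.exp(-BETA * host_energy(p))
                if rng.random() < min(1.0, acc):
                    molecules.append(p)
            elif molecules:                              # deletion attempt
                k = rng.integers(len(molecules))
                acc = len(molecules) / B * np.exp(BETA * host_energy(molecules[k]))
                if rng.random() < min(1.0, acc):
                    molecules.pop(k)

        print("average loading at this fugacity:", len(molecules), "molecules")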

  2. Duality quantum algorithm efficiently simulates open quantum systems

    Science.gov (United States)

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-01-01

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d³), in contrast to O(d⁴) for the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
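
    The Kraus operator-sum form that the algorithm implements on quantum hardware is easy to state classically: a channel maps rho to the sum over k of K_k rho K_k-dagger. The Python sketch below applies a single-qubit amplitude-damping channel, a standard textbook example chosen here for illustration and not taken from the paper.

        import numpy as np

        # amplitude-damping channel: rho' = K0 rho K0^† + K1 rho K1^†,
        # with the completeness relation K0^† K0 + K1^† K1 = I
        gamma = 0.3                                   # damping probability per step
        K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
        K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

        rho = np.array([[0.5, 0.5], [0.5, 0.5]])      # |+><+| initial state

        rho = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
        print(rho)                    # excited-state population decays
        print("trace:", np.trace(rho))   # trace is preserved (= 1)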

  3. Land Reform, Efficiency and Rural Institutional Change: theory and stochastic frontier analysis with panel data (1998-2006)

    OpenAIRE

    Lambais, GBR; Silveira, JMFJ; Magalhães, MM

    2010-01-01

    This work deals with land reform in Brazil through an evolutionary and new institutional economics theoretical framework. Firstly, this theoretical underpinning delineates the existence of an intrinsic relation between asset equality and economic efficiency, going against the neoclassical trade-off. From this relation it is established that the utilization of society's productive forces depends directly on institutional structures and property relations. In this sense, the way land reallocat...

  4. Tuning Monotonic Basin Hopping: Improving the Efficiency of Stochastic Search as Applied to Low-Thrust Trajectory Optimization

    Science.gov (United States)

    Englander, Jacob A.; Englander, Arnold C.

    2014-01-01

    Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
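
    A minimal MBH loop with a long-tailed perturbation looks as follows; the Rastrigin test function, the Nelder-Mead local solver, and the Cauchy scale factor are illustrative choices, not the trajectory-optimization setup of the paper.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)

        def rastrigin(x):
            """Multimodal test function with global minimum 0 at the origin."""
            return 10.0 * len(x) + float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

        x_best = rng.uniform(-5.0, 5.0, 2)
        f_best = rastrigin(x_best)

        for hop in range(200):
            # long-tailed Cauchy perturbation instead of the historical uniform draw;
            # the scale 0.3 is a hypothetical tuning parameter
            step = rng.standard_cauchy(2) * 0.3
            res = minimize(rastrigin, x_best + step, method="Nelder-Mead")
            if res.fun < f_best:          # "monotonic": accept improvements only
                x_best, f_best = res.x, res.fun

        print("best value found:", f_best)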

  5. Multi types DG expansion dynamic planning in distribution system under stochastic conditions using Covariance Matrix Adaptation Evolutionary Strategy and Monte-Carlo simulation

    International Nuclear Information System (INIS)

    Sadeghi, Mahmood; Kalantar, Mohsen

    2014-01-01

    Highlights: • Defining a DG dynamic planning problem. • Applying a new evolutionary algorithm called “CMAES” to the planning process. • Considering stochastic variation of electricity and fuel prices. • Scenario generation and reduction with MCS and backward reduction programs. • Considering approximately all of the costs of the distribution system. - Abstract: This paper presents a dynamic DG planning problem considering uncertainties related to the intermittent nature of DG technologies such as wind turbines and solar units, in addition to stochastic economic conditions. The stochastic economic situation includes the uncertainties related to the fuel and electricity prices of each year. Monte Carlo simulation is used to generate the possible scenarios of uncertain situations, and the produced scenarios are reduced through a backward reduction program. The aim of this paper is to maximize the revenue of the distribution system through benefit-cost analysis alongside encouraging and punishment functions. To be closer to reality, different growth rates are selected for the planning period. In this paper the Covariance Matrix Adaptation Evolutionary Strategy (CMAES) is introduced and used to find the best planning scheme for the DG units. Different DG types are considered in the planning problem. The main assumption of this paper is that the DISCO is the owner of the distribution system and the DG units. The proposed method is tested on a 9-bus test distribution system and the results are compared with the well-known genetic algorithm and PSO methods to show the applicability of the CMAES method to this problem.
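
    The scenario machinery can be sketched in a few lines: draw Monte Carlo price scenarios, then apply a simple backward reduction that repeatedly deletes the scenario whose probability-weighted distance to its nearest neighbour is smallest and reassigns its probability to that neighbour. The distributions, scenario counts, and distance metric below are hypothetical, not the paper's data.

        import numpy as np

        rng = np.random.default_rng(5)

        # Monte Carlo scenarios of (electricity price, fuel price); moments are made up
        S = rng.normal([60.0, 8.0], [12.0, 1.5], size=(200, 2))
        p = np.full(len(S), 1.0 / len(S))     # equal initial probabilities

        keep = list(range(len(S)))
        while len(keep) > 10:                 # reduce to 10 representative scenarios
            # cheapest removal: probability times distance to nearest kept neighbour
            cost, i_del = min(
                (p[i] * min(np.linalg.norm(S[i] - S[j]) for j in keep if j != i), i)
                for i in keep
            )
            keep.remove(i_del)
            j_near = min(keep, key=lambda j: np.linalg.norm(S[i_del] - S[j]))
            p[j_near] += p[i_del]             # probability is handed to the neighbour

        for i in keep:
            print(S[i].round(1), round(p[i], 3))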

  6. An improved stochastic algorithm for temperature-dependent homogeneous gas phase reactions

    CERN Document Server

    Kraft, M

    2003-01-01

    We propose an improved stochastic algorithm for temperature-dependent homogeneous gas phase reactions. By combining forward and reverse reaction rates, a significant gain in computational efficiency is achieved. Two modifications of modelling the temperature dependence (with and without conservation of enthalpy) are introduced and studied quantitatively. The algorithm is tested for the combustion of n-heptane, which is a reference fuel component for internal combustion engines. The convergence of the algorithm is studied by a series of numerical experiments, and the computational cost of the stochastic algorithm is compared with that of the DAE code DASSL. If less accuracy is needed, the stochastic algorithm is faster on short simulation time intervals. The new stochastic algorithm is significantly faster than the original direct simulation algorithm in all cases considered.
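
    The core of such a direct simulation algorithm is a Gillespie-type loop in which forward and reverse channels feed one propensity sum. The minimal isothermal Python sketch below, for a reversible isomerisation A <-> B with made-up rate constants, shows the loop structure only; the paper's temperature coupling and enthalpy bookkeeping are omitted.

        import numpy as np

        rng = np.random.default_rng(11)

        kf, kr = 1.0, 0.5       # hypothetical forward/reverse rate constants
        nA, nB = 1000, 0
        t, t_end = 0.0, 10.0

        while t < t_end:
            # forward and reverse channels enter one propensity sum, so each event
            # costs a single exponential waiting time and one uniform draw
            a_f, a_r = kf * nA, kr * nB
            a0 = a_f + a_r
            if a0 == 0.0:
                break
            t += rng.exponential(1.0 / a0)
            if rng.random() < a_f / a0:
                nA, nB = nA - 1, nB + 1
            else:
                nA, nB = nA + 1, nB - 1

        print(nA, nB)           # settles near nB/nA = kf/kr = 2, i.e. about 333/667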

  7. Efficient heuristics for simulating rare events in queuing networks

    NARCIS (Netherlands)

    Zaburnenko, T.S.

    2008-01-01

    In this thesis we propose state-dependent importance sampling heuristics to estimate the probability of population overflow in queuing networks. These heuristics capture state-dependence along the boundaries (when one or more queues are almost empty), which is crucial for the asymptotic efficiency of

  8. A fuzzy-stochastic simulation-optimization model for planning electric power systems with considering peak-electricity demand: A case study of Qingdao, China

    International Nuclear Information System (INIS)

    Yu, L.; Li, Y.P.; Huang, G.H.

    2016-01-01

    In this study, an FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS (electric power systems) while considering peak demand under uncertainty. FSSOM integrates the techniques of SVR (support vector regression), Monte Carlo simulation, and FICMP (fractile interval chance-constrained mixed-integer programming). In FSSOM, uncertainties expressed as fuzzy boundary intervals and random variables can be effectively tackled. In addition, an SVR-coupled Monte Carlo technique is used for predicting the peak electricity demand. The FSSOM is applied to planning the EPS for the City of Qingdao, China. Solutions for the electricity generation pattern to satisfy the city's peak demand under different probability levels and p-necessity levels have been generated. Results reveal that the city's electricity supply from renewable energies would be low (occupying only 8.3% of the total electricity generation). Compared with an energy model that does not consider peak demand, the FSSOM can better guarantee the city's power supply and thus reduce the risk of system failure. The findings can help decision makers not only adjust the existing electricity generation/supply pattern but also coordinate the conflicting interactions among system cost, energy supply security, pollutant mitigation, and constraint-violation risk. - Highlights: • FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS. • It can address uncertainties as fuzzy-boundary intervals and random variables. • FSSOM can satisfy peak-electricity demand and optimize power allocation. • Solutions under different probability levels and p-necessity levels are analyzed. • Results create a tradeoff between system cost and peak-electricity-demand violation risk.

  9. Application of stochastic method to optimum design of energy-efficient induction motors with a target of LCC.

    Science.gov (United States)

    Fang, You-Tong; Fang, Cheng-Zhi; Ye, Yun-Yue; Chen, Yong-Xiao

    2003-01-01

    For an energy-efficient induction machine, the life-cycle cost (LCC) is usually the most important index for the consumer. With this target, the optimization design of a motor is a complex nonlinear problem with constraints. To solve the problem, the authors introduce a united random algorithm. First, the problem is divided into two parts: the optimization of the rotor slots and the optimization of the other dimensions. Before optimizing the rotor slots with a genetic algorithm (GA), the second part is solved with a TABU algorithm to simplify the problem. The numerical results showed that this method is better than the method using a traditional algorithm.

  10. Efficiency of geometric designs of flexible solar panels: mathematical simulation

    Science.gov (United States)

    Marciniak, Malgorzata; Hassebo, Yasser; Enriquez-Torres, Delfino; Serey-Roman, Maria Ignacia

    2017-09-01

    The purpose of this study is to analyze various surfaces of flexible solar panels and compare them mathematically to traditional flat panels. We evaluated the efficiency based on integral formulas that involve flux. We performed calculations for flat panels in different positions, a cylindrical panel, conical panels with various opening angles, and segments of a spherical panel. Our results indicate that the best efficiency per unit area belongs to particular segments of spherically shaped panels. In addition, we calculated the optimal opening angle of a cone-shaped panel that maximizes the annual accumulation of solar radiation per unit area. The considered shapes are presented below with a suggestion for connections of the cells.
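
    The flux comparison can be reproduced in miniature: discretise a surface into patches and average max(0, n·s) over the area, where n is the patch normal and s the sun direction. The Python sketch below does this for a horizontal flat panel and a half-cylinder under a zenith sun; the geometry and discretisation are illustrative only.

        import numpy as np

        def flux_per_unit_area(normals, areas, sun_dir):
            """Evaluate the flux integral of max(0, n.s) dA divided by total area
            for a surface discretised into patches with unit normals and areas."""
            s = np.asarray(sun_dir, dtype=float)
            s /= np.linalg.norm(s)
            proj = np.clip(normals @ s, 0.0, None)   # back-facing patches contribute zero
            return float((proj * areas).sum() / areas.sum())

        sun = np.array([0.0, 0.0, 1.0])              # sun at the zenith (illustrative)

        # flat horizontal unit panel: collects the full normal flux
        flat = flux_per_unit_area(np.array([[0.0, 0.0, 1.0]]), np.array([1.0]), sun)

        # half-cylinder of radius 1 with axis along y, split into equal strips
        phi = np.linspace(0.0, np.pi, 200)
        normals = np.stack([np.cos(phi), np.zeros_like(phi), np.sin(phi)], axis=1)
        areas = np.full_like(phi, np.pi / 200)
        cyl = flux_per_unit_area(normals, areas, sun)

        print(flat, cyl)   # 1.0 versus 2/pi ~ 0.64: the flat panel wins per unit area here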

  11. Stochastic techno-economic assessment based on Monte Carlo simulation and the Response Surface Methodology: The case of an innovative linear Fresnel CSP (concentrated solar power) system

    International Nuclear Information System (INIS)

    Bendato, Ilaria; Cassettari, Lucia; Mosca, Marco; Mosca, Roberto

    2016-01-01

    Combining technological solutions with investment profitability is a critical aspect in designing both traditional and innovative renewable power plants. Often, the introduction of new advanced-design solutions, although technically interesting, does not generate adequate revenue to justify their utilization. In this study, an innovative methodology is developed that aims to satisfy both targets. On the one hand, considering all of the feasible plant configurations, it allows the analysis of the investment in a stochastic regime using the Monte Carlo method. On the other hand, the impact of every technical solution on the economic performance indicators can be measured by using regression meta-models built according to the theory of Response Surface Methodology. This approach enables the design of a plant configuration that generates the best economic return over the entire life cycle of the plant. This paper illustrates an application of the proposed methodology to the evaluation of design solutions using an innovative linear Fresnel Concentrated Solar Power system. - Highlights: • A stochastic methodology for solar plants investment evaluation. • Study of the impact of new technologies on the investment results. • Application to an innovative linear Fresnel CSP system. • A particular application of Monte Carlo simulation and response surface methodology.
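
    The stochastic part of such an analysis reduces, in miniature, to pushing assumed input distributions through an economic model. The Python sketch below produces an NPV distribution over 10,000 Monte Carlo trials; every figure (capex, yield, tariff, discount rate) is a made-up placeholder, and the Response Surface step of the paper is not reproduced.

        import numpy as np

        rng = np.random.default_rng(9)
        N = 10_000

        # hypothetical plant inputs; none of these figures come from the paper
        capex  = rng.triangular(90.0, 100.0, 120.0, N)   # investment cost, M-EUR
        energy = rng.normal(180.0, 15.0, N)              # net yearly yield, GWh
        tariff = rng.normal(0.12, 0.01, N)               # revenue, M-EUR per GWh

        r, years = 0.06, 25
        annuity = (1.0 - (1.0 + r) ** -years) / r        # present value of 1 per year

        npv = energy * tariff * annuity - capex          # NPV distribution, M-EUR
        print(f"P(NPV > 0) = {(npv > 0).mean():.2f}, mean NPV = {npv.mean():.1f} M-EUR")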

  12. A Proposed Stochastic Finite Difference Approach Based on Homogenous Chaos Expansion

    Directory of Open Access Journals (Sweden)

    O. H. Galal

    2013-01-01

    This paper proposes a stochastic finite difference approach based on homogeneous chaos expansion (SFDHC). The approach can handle time-dependent nonlinear as well as linear systems with deterministic or stochastic initial and boundary conditions. In this approach, the included stochastic parameters are modeled as second-order stochastic processes and are expanded using the Karhunen-Loève expansion, while the response function is approximated using homogeneous chaos expansion. Galerkin projection is used to convert the original stochastic partial differential equation (PDE) into a set of coupled deterministic partial differential equations, which are then solved using the finite difference method. Two well-known equations were used to validate the efficiency of the proposed method: the linear diffusion equation with a stochastic parameter, and the nonlinear Burgers' equation with a stochastic parameter and stochastic initial and boundary conditions. In both examples, the probability distribution function of the response showed close agreement with the results obtained from Monte Carlo simulation, at an optimized computational cost.
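
    The Karhunen-Loève step can be sketched numerically: discretise the covariance kernel, take its leading eigenpairs, and synthesise realizations from uncorrelated standard-normal coordinates. The Python fragment below does this for an exponential covariance on [0, 1]; the kernel, grid size, and truncation order are illustrative choices, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        # exponential covariance kernel Cov(t, s) = exp(-|t - s|) on [0, 1]
        n = 200
        t = np.linspace(0.0, 1.0, n)
        C = np.exp(-np.abs(t[:, None] - t[None, :]))

        # discretised KL eigenproblem; eigh returns ascending eigenvalues, so reverse
        lam, phi = np.linalg.eigh(C / n)
        lam, phi = lam[::-1], phi[:, ::-1]

        M = 10                              # truncation order of the expansion
        xi = rng.standard_normal(M)         # uncorrelated N(0,1) KL coordinates
        sample = phi[:, :M] @ (np.sqrt(np.clip(lam[:M], 0.0, None)) * xi) * np.sqrt(n)

        print("variance captured by 10 modes:", lam[:M].sum() / lam.sum())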

  13. Lot Sizing Based on Stochastic Demand and Service Level Constraint

    Directory of Open Access Journals (Sweden)

    Hajar Shirneshan

    2012-06-01

    Considering its applications, stochastic lot sizing is a significant subject in production planning. Also, the concept of a service level is more applicable than a shortage cost from the managers' viewpoint. In this paper, the stochastic multi-period multi-item capacitated lot sizing problem has been investigated considering a service level constraint. First, the single-item model has been developed considering the service level and no capacity constraint; it has then been solved using a dynamic programming algorithm and the optimal solution has been derived. The model has then been generalized to the multi-item problem with a capacity constraint. The stochastic multi-period multi-item capacitated lot sizing problem is NP-hard, hence the model could not be solved by exact optimization approaches. Therefore, the simulated annealing method has been applied to solve the problem. Finally, in order to evaluate the efficiency of the model, a low level criterion has been used.
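
    A skeleton of the simulated annealing solver might look as follows; the demand model, cost figures, capacity, service-level target, and cooling schedule are hypothetical placeholders, and the Gaussian perturbation move is a generic choice rather than the authors' neighbourhood.

        import numpy as np

        rng = np.random.default_rng(4)

        T = 6                                       # planning periods
        MU, SD = 100.0, 20.0                        # demand ~ N(MU, SD) per period
        SETUP, HOLD, CAP, ALPHA = 500.0, 1.0, 400.0, 0.95

        def cost(x):
            """Setup + expected holding cost, with a heavy penalty if the simulated
            service level drops below ALPHA (all figures are hypothetical)."""
            d = rng.normal(MU, SD, size=(500, T))
            inv = np.cumsum(x - d, axis=1)          # end-of-period inventory paths
            service = (inv >= 0.0).mean()
            penalty = 1e6 * max(0.0, ALPHA - service)
            return (SETUP * (x > 0).sum()
                    + HOLD * np.clip(inv, 0.0, None).mean(axis=0).sum() + penalty)

        x = np.full(T, 120.0)
        fx = cost(x)
        best, f_best = x.copy(), fx
        temp = 1000.0
        for it in range(2000):
            y = np.clip(x + rng.normal(0.0, 20.0, T), 0.0, CAP)
            fy = cost(y)
            if fy < fx or rng.random() < np.exp((fx - fy) / temp):  # Metropolis rule
                x, fx = y, fy
                if fx < f_best:
                    best, f_best = x.copy(), fx
            temp *= 0.995                           # geometric cooling schedule

        print(best.round(0), round(f_best, 1))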

  14. DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes

    Science.gov (United States)

    Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.

    2008-12-01

    A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output of unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest, thus building up a combination of time- and ensemble-averaged sampling data. The particle data are regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated, and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results from the interaction of a Mach 4 shock with a square cavity and from the interaction of a Mach 12 shock with a wedge in a channel.

  15. Simulation and evaluation of efficiency of active clamp dual flyback inverter for photovoltaic systems

    Energy Technology Data Exchange (ETDEWEB)

    Cernan, P. [Clayton Power Research and Development, Trencin (Slovakia); Dobrucky, B.; Sul, R. [Zilina Univ., Zilina (Slovakia)

    2010-03-09

    The dual flyback inverter (DFBI) is one of the preferred topologies for isolated, low-cost, low-power photovoltaic (PV) applications, as it converts the PV cells' direct current (DC) voltage to an output alternating current (AC) voltage using a single power stage. In order to reach the efficiency limits of this topology, it is important to understand the loss distribution of the DFBI. This paper evaluated the efficiency of the DFBI using computer simulation results from the Simetrix circuit simulator. The paper presented a circuit diagram of a dual flyback inverter with an active clamp circuit and discussed the modelling and simulation of the DFBI. The dimensioning of the power stage components for the DFBI simulation was presented. The results achieved by the Simetrix simulation were also provided. The paper concluded with a discussion of silicon carbide diodes. It was concluded that the most important parameters for PV inverters are efficiency and cost; the size/power density of the inverter is not critical. 9 refs., 3 tabs., 6 figs.

  16. Numerical Solution of Stochastic Nonlinear Fractional Differential Equations

    KAUST Repository

    El-Beltagy, Mohamed A.

    2015-01-07

    Using the Wiener-Hermite expansion (WHE) technique in the solution of stochastic partial differential equations (SPDEs) has the advantage of converting the problem to a system of deterministic equations that can be solved efficiently using standard deterministic numerical methods [1]. WHE is the only known expansion that handles white/colored noise exactly. This work introduces a numerical estimation of the stochastic response of the Duffing oscillator with fractional or variable-order damping and driven by white noise. The WHE technique is integrated with the Grunwald-Letnikov approximation in the case of fractional-order damping and with the Coimbra approximation in the case of variable-order damping. The numerical solver was tested against the analytic solution and against Monte Carlo simulations. The developed mixed technique was shown to be efficient in simulating SPDEs.

  17. Photosynthetic efficiency of Pedunculate oak seedlings under simulated water stress

    Directory of Open Access Journals (Sweden)

    Popović Zorica

    2010-01-01

    The photosynthetic performance of seedlings of Quercus robur exposed to short-term water stress under laboratory conditions was assessed through the method of induced fluorometry. The substrate for the seedlings was clayey loam, with the dominant texture fraction made of silt, followed by clay and fine sand, with a total porosity of 68.2%. Seedlings were separated into two groups: control (C), in which the soil water regime in the pots was maintained at the level of field water capacity, and water-stressed (WS), in which the soil water regime was maintained in the range between the wilting point and lentocapillary capacity. The photosynthetic efficiency was 0.642±0.25 and 0.522±0.024 (WS and C, respectively), which was mostly due to transplantation disturbances and sporadic leaf chlorosis. During the experiment Fv/Fm decreased in both groups (0.551±0.0100 and 0.427±0.018 in C and WS, respectively). Our results showed significant differences between the stressed and control groups with regard to both observed parameters (Fv/Fm and T½). The photosynthetic efficiency of pedunculate oak seedlings was significantly affected by short-term water stress, but to a lesser extent than by sufficient watering.

  18. On a multiscale approach for filter efficiency simulations

    KAUST Repository

    Iliev, Oleg

    2014-07-01

    Filtration in general, and the dead-end depth filtration of solid particles out of a fluid in particular, is an intrinsically multiscale problem. The deposition (capturing) of particles essentially depends on the local velocity, on the microgeometry (pore-scale geometry) of the filtering medium, and on the diameter distribution of the particles. The deposited (captured) particles change the microstructure of the porous medium, which leads to a change in permeability. The changed permeability directly influences the velocity field and pressure distribution inside the filter element. To close the loop, we note that the velocity influences the transport and deposition of particles. In certain cases one can evaluate the filtration efficiency considering only microscale or only macroscale models, but in general an accurate prediction of the filtration efficiency requires multiscale models and algorithms. This paper discusses the single-scale and multiscale models, and presents a fractional time step discretization algorithm for the multiscale problem. The velocity within the filter element is computed at the macroscale and is used as input for the solution of microscale problems at selected locations of the porous medium. The microscale problem is solved with respect to the transport and capturing of individual particles, and its solution is postprocessed to provide permeability values for the macroscale computations. Results from computational experiments with an oil filter are presented and discussed.

  19. Temperature stochastic modeling and weather derivatives pricing ...

    African Journals Online (AJOL)

    ... over a sufficient period to apply a stochastic process that describes the evolution of the temperature. A numerical example of swap contract pricing is presented, using an approximation formula as well as Monte Carlo simulations. Keywords: Weather derivatives, temperature stochastic model, Monte Carlo simulation.
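
    A common concrete choice is a mean-reverting (Ornstein-Uhlenbeck type) daily temperature model around a seasonal mean, priced by Monte Carlo. The Python sketch below values a cooling-degree-day swap this way; all parameters (mean function, reversion speed, volatility, tick, strike) are illustrative, not fitted to data.

        import numpy as np

        rng = np.random.default_rng(8)

        def seasonal_mean(day):
            """Deterministic seasonal temperature component (illustrative parameters)."""
            return 12.0 + 10.0 * np.sin(2.0 * np.pi * (day - 100) / 365.0)

        kappa, sigma = 0.3, 2.5            # mean-reversion speed and daily volatility
        n_paths, n_days = 10_000, 90       # contract covering 90 summer days

        T = np.empty((n_paths, n_days))
        T[:, 0] = seasonal_mean(0)
        for d in range(1, n_days):
            # discretised Ornstein-Uhlenbeck step around the moving seasonal mean
            T[:, d] = (seasonal_mean(d)
                       + (1.0 - kappa) * (T[:, d - 1] - seasonal_mean(d - 1))
                       + sigma * rng.standard_normal(n_paths))

        # cooling-degree-day swap: payoff = tick * (accumulated CDD - strike)
        cdd = np.clip(T - 18.0, 0.0, None).sum(axis=1)
        tick, strike, r = 20.0, 300.0, 0.03
        value = np.exp(-r * n_days / 365.0) * (tick * (cdd - strike)).mean()
        print(f"Monte Carlo swap value: {value:.0f}")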

  20. Stochastic convergence

    CERN Document Server

    Lukacs, Eugene; Lukacs, E

    1975-01-01

    Stochastic Convergence, Second Edition covers the theoretical aspects of random power series dealing with convergence problems. This edition contains eight chapters and starts with an introduction to the basic concepts of stochastic convergence. The succeeding chapters deal with infinite sequences of random variables and their convergences, as well as the consideration of certain sets of random variables as a space. These topics are followed by discussions of the infinite series of random variables, specifically the lemmas of Borel-Cantelli and the zero-one laws. Other chapters evaluate the po