HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks
Directory of Open Access Journals (Sweden)
Luca Marchetti
2017-01-01
HSimulator is a multithreaded simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementations of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies, including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA). HRSSA, the fastest hybrid algorithm to date, allows for efficient simulation of models while ensuring the exact simulation of the subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA).
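The stochastic half of such a hybrid scheme is built on an exact SSA. As a minimal illustration (not the HRSSA implementation itself), a Gillespie direct-method SSA for a well-mixed mass-action system might look like the following; the toy reaction and rate constant are invented for the example:

```python
import random

def ssa_direct(x, rates, stoich, t_end, seed=42):
    """Gillespie direct-method SSA for a well-mixed mass-action system.
    x: dict species -> count; rates: list of rate constants;
    stoich: list of (reactants, change) pairs, one per reaction."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        # mass-action propensities: rate constant times reactant counts
        props = []
        for c, (reactants, _) in zip(rates, stoich):
            a = c
            for s in reactants:
                a *= x[s]
            props.append(a)
        a0 = sum(props)
        if a0 == 0.0:
            break                      # no reaction can fire any more
        tau = rng.expovariate(a0)      # exponential waiting time
        if t + tau > t_end:
            break
        t += tau
        r = rng.random() * a0          # choose reaction j w.p. a_j / a0
        j = 0
        while r > props[j]:
            r -= props[j]
            j += 1
        for s, d in stoich[j][1].items():
            x[s] += d                  # apply the stoichiometric change
    return x

# Toy isomerisation A -> B with rate constant 0.5, starting from 100 A
state = ssa_direct({"A": 100, "B": 0}, [0.5],
                   [(["A"], {"A": -1, "B": 1})], t_end=100.0)
```

A hybrid method such as HRSSA would apply an exact loop like this only to the slow-reaction subset while treating fast reactions deterministically.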
SCALE6 Hybrid Deterministic-Stochastic Shielding Methodology for PWR Containment Calculations
International Nuclear Information System (INIS)
Matijevic, Mario; Pevec, Dubravko; Trontl, Kresimir
2014-01-01
The capabilities and limitations of the SCALE6/MAVRIC hybrid deterministic-stochastic shielding methodology (CADIS and FW-CADIS) are demonstrated when applied to a realistic deep-penetration Monte Carlo (MC) shielding problem: a full-scale PWR containment model. The ultimate goal of such automatic variance reduction (VR) techniques is to achieve acceptable precision for the MC simulation in reasonable time by preparing phase-space VR parameters via deterministic transport theory methods (discrete ordinates SN), generating a space-energy mesh-based adjoint function distribution. The hybrid methodology generates VR parameters that work in tandem (biased source distribution and importance map) in an automated fashion, which is a paramount step for MC simulation of complex models with fairly uniform mesh tally uncertainties. The aim of this paper was the determination of the neutron-gamma dose rate distribution (radiation field) over large portions of the PWR containment phase-space with uniform MC uncertainties. The sources of ionizing radiation included fission neutrons and gammas (reactor core) and gammas from the activated two-loop coolant. Special attention was given to a focused adjoint source definition, which gave improved MC statistics in selected materials and/or regions of the complex model. We investigated the benefits and differences of FW-CADIS over CADIS and manual (i.e. analog) MC simulation of particle transport. Computer memory consumption by the deterministic part of the hybrid methodology represents the main obstacle when using meshes with millions of cells together with high SN/PN parameters, so optimization of the control and numerical parameters of the deterministic module plays an important role in computer memory management. We investigated the possibility of using the deterministic module (memory intense) with the broad-group library v7-27n19g, as opposed to the fine-group library v7-200n47g used with the MC module, to take full advantage of low-energy particle transport and secondary gamma emission. Compared with
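The core idea behind CADIS-style variance reduction (sample from a biased distribution, then correct each tally with a statistical weight) can be shown in miniature with importance sampling of a rare-event probability. This is only an analogy to the deterministic-adjoint biasing described above; the Gaussian tail problem and all parameters are illustrative:

```python
import math
import random

def rare_tail_prob(threshold, n=100_000, seed=7):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0,1):
    sample from N(threshold, 1), which concentrates mass in the tail,
    and correct each hit with the likelihood ratio N(0,1)/N(threshold,1)."""
    rng = random.Random(seed)
    mu = threshold                 # bias the sampling toward the tail
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu, 1.0)
        if x > threshold:
            # likelihood ratio exp(-x^2/2) / exp(-(x-mu)^2/2)
            total += math.exp(-mu * x + 0.5 * mu * mu)
    return total / n

p = rare_tail_prob(4.0)   # exact tail probability is about 3.17e-5
```

An analog estimator would need on the order of 10^7 samples to see even a handful of such events; the biased estimator resolves the same probability with far fewer histories, which is exactly the economy CADIS buys for deep-penetration shielding.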
International Nuclear Information System (INIS)
Matijevic, M.; Grgic, D.; Jecmenica, R.
2016-01-01
This paper presents a comparison of the Krsko Power Plant simplified Spent Fuel Pool (SFP) dose rates using different computational shielding methodologies. The analysis was performed to estimate limiting gamma dose rates on wall-mounted level instrumentation in case of a significant loss of cooling water. The SFP was represented with simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry air as bulk region materials. The pool is divided into an old and a new section, where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling times (60 days, 1 year and 5 years). The new section represents the FAs with a cooling time of 10 years. The time-dependent fuel assembly isotopic composition was calculated using the ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in the MicroShield calculation is based on the imported isotopic activities. The time-dependent photon spectra with total source intensity from MicroShield multigroup point kernel calculations were then prepared for two hybrid deterministic-stochastic sequences. One is based on the SCALE/MAVRIC (Monaco and Denovo) methodology and the other uses the Monte Carlo code MCNP6.1.1b and the ADVANTG3.0.1 code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for the MC method without the use of an effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and a WWINP file) for the MCNP fixed-source calculation using continuous-energy transport. ADVANTG employs the deterministic forward-adjoint transport solver Denovo, which implements the CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks. This was first
Directory of Open Access Journals (Sweden)
Tim ePalmer
2015-10-01
How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
International Nuclear Information System (INIS)
Yokose, Yoshio; Noguchi, So; Yamashita, Hideo
2002-01-01
Stochastic methods and deterministic methods are both used for the optimization of electromagnetic devices. Genetic Algorithms (GAs) are used as a stochastic method for multivariable designs, while the deterministic method uses the gradient method, which exploits the sensitivity of the objective function. These two techniques have strengths and weaknesses. In this paper, the characteristics of these techniques are described. We then evaluate a technique in which the two methods are used together. Finally, the results of the comparison are presented by applying each method to electromagnetic devices. (Author)
Palmer, Tim N; O'Shea, Michael
2015-01-01
How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
International Nuclear Information System (INIS)
Mohanta, Dusmanta Kumar; Sadhu, Pradip Kumar; Chakrabarti, R.
2007-01-01
This paper presents a comparison of results for the optimization of captive power plant maintenance scheduling using a genetic algorithm (GA) as well as a hybrid GA/simulated annealing (SA) technique. As utilities catered by captive power plants are very sensitive to power failure, both deterministic and stochastic reliability objective functions have been considered to incorporate statutory safety regulations for the maintenance of boilers, turbines and generators. A significant contribution of this paper is to incorporate the stochastic features of the generating units and of the load using the levelized risk method. Another significant contribution is to evaluate a confidence interval for the loss of load probability (LOLP), because some variations from the optimum schedule are anticipated while executing maintenance schedules due to various real-life unforeseen exigencies. Such exigencies are incorporated in terms of near-optimum schedules obtained from the hybrid GA/SA technique during the final stages of convergence. Case studies corroborate that the same optimum schedules are obtained using GA and hybrid GA/SA for the respective deterministic and stochastic formulations. The comparison of results in terms of the confidence interval for LOLP indicates that the levelized risk method adequately incorporates the stochastic nature of the power system as compared with the levelized reserve method. Also, the confidence interval for LOLP denotes the possible risk in a quantified manner, which is of immense use from the perspective of captive power plants intended for quality power
Advances in stochastic and deterministic global optimization
Zhigljavsky, Anatoly; Žilinskas, Julius
2016-01-01
Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...
Dynamic optimization deterministic and stochastic models
Hinderer, Karl; Stieglitz, Michael
2016-01-01
This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
CSL model checking of deterministic and stochastic Petri nets
Martinez Verdugo, J.M.; Haverkort, Boudewijn R.H.M.; German, R.; Heindl, A.
2006-01-01
Deterministic and Stochastic Petri Nets (DSPNs) are a widely used high-level formalism for modeling discrete-event systems where events may occur either without consuming time, after a deterministic time, or after an exponentially distributed time. The underlying process defined by DSPNs, under
Deterministic and stochastic CTMC models from Zika disease transmission
Zevika, Mona; Soewono, Edy
2018-03-01
Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
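The deterministic side of such a model can be sketched as forward-Euler integration of classic compartmental (SIR-type) equations, with the basic reproduction ratio computed as the transmission rate over the recovery rate. The compartment structure and parameter values below are a generic illustration, not those of the Zika model in the paper:

```python
def sir_euler(beta, gamma, s0, i0, t_end, dt=0.01):
    """Forward-Euler integration of the classic SIR model in population
    fractions: s' = -beta*s*i, i' = beta*s*i - gamma*i, r' = gamma*i."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(int(t_end / dt)):
        new_inf = beta * s * i * dt    # new infections this step
        new_rec = gamma * i * dt       # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

beta, gamma = 0.5, 0.1
R0 = beta / gamma                      # basic reproduction ratio
s, i, r = sir_euler(beta, gamma, s0=0.99, i0=0.01, t_end=200.0)
```

The CTMC counterpart would replace the deterministic increments with exponentially timed jump events (as in a Gillespie simulation), from which the extinction probability can be estimated.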
Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes
DEFF Research Database (Denmark)
Starke, Jens; Reichert, Christian; Eiswirth, Markus
2007-01-01
Three levels of modeling, microscopic, mesoscopic and macroscopic are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic while the model on the macroscopic level is deterministic. It can......, such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations while for intermediate pressures phenomena...
Deterministic geologic processes and stochastic modeling
International Nuclear Information System (INIS)
Rautman, C.A.; Flint, A.L.
1992-01-01
This paper reports that recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling
Deterministic Versus Stochastic Interpretation of Continuously Monitored Sewer Systems
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Niels Jacob
1994-01-01
An analysis has been made of the uncertainty of input parameters to deterministic models for sewer systems. The analysis reveals a very significant uncertainty, which can be decreased, but not eliminated and has to be considered for engineering application. Stochastic models have a potential for ...
Deterministic and Stochastic Study of Wind Farm Harmonic Currents
DEFF Research Database (Denmark)
Sainz, Luis; Mesas, Juan Jose; Teodorescu, Remus
2010-01-01
Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic char...
Deterministic and stochastic models for middle east respiratory syndrome (MERS)
Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning
2018-03-01
World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, occurring in 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest outbreak of MERS outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly through contact between infected and non-infected individuals, or indirectly through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of the free virus in the environment. Mathematical modeling is used to illustrate the transmission of MERS using two approaches, a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze the steady-state condition. The stochastic model approach, using a Continuous Time Markov Chain (CTMC), is used to predict future states by means of random variables. From the models that were built, the threshold values for the deterministic and stochastic models are obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations for both models using several different parameters are shown, and the probability of disease extinction is compared for several initial conditions.
Methods and models in mathematical biology deterministic and stochastic approaches
Müller, Johannes
2015-01-01
This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.
Molecular dynamics with deterministic and stochastic numerical methods
Leimkuhler, Ben
2015-01-01
This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications. Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...
Measures of thermodynamic irreversibility in deterministic and stochastic dynamics
International Nuclear Information System (INIS)
Ford, Ian J
2015-01-01
It is generally observed that if a dynamical system is sufficiently complex, then as time progresses it will share out energy and other properties amongst its component parts to eliminate any initial imbalances, retaining only fluctuations. This is known as energy dissipation and it is closely associated with the concept of thermodynamic irreversibility, measured by the increase in entropy according to the second law. It is of interest to quantify such behaviour from a dynamical rather than a thermodynamic perspective and to this end stochastic entropy production and the time-integrated dissipation function have been introduced as analogous measures of irreversibility, principally for stochastic and deterministic dynamics, respectively. We seek to compare these measures. First we modify the dissipation function to allow it to measure irreversibility in situations where the initial probability density function (pdf) of the system is asymmetric as well as symmetric in velocity. We propose that it tests for failure of what we call the obversibility of the system, to be contrasted with reversibility, the failure of which is assessed by stochastic entropy production. We note that the essential difference between stochastic entropy production and the time-integrated modified dissipation function lies in the sequence of procedures undertaken in the associated tests of irreversibility. We argue that an assumed symmetry of the initial pdf with respect to velocity inversion (within a framework of deterministic dynamics) can be incompatible with the Past Hypothesis, according to which there should be a statistical distinction between the behaviour of certain properties of an isolated system as it evolves into the far future and the remote past. Imposing symmetry on a velocity distribution is acceptable for many applications of statistical physics, but can introduce difficulties when discussing irreversible behaviour. (paper)
Stochastic and deterministic causes of streamer branching in liquid dielectrics
International Nuclear Information System (INIS)
Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl
2013-01-01
Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching, such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations, is inevitable in any dielectric. A fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make branching inevitable, depending on the shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is relatively thin and slow enough. Furthermore, the discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the propagating streamer head, even in a perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether branching occurs under particular inhomogeneous circumstances. The estimated number, diameter, and velocity of the newly born branches agree qualitatively with experimental images of streamer branching
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules of chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
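The systematic reintroduction of residuals described here can be sketched as a simple resampling scheme: add randomly drawn calibration residuals to the deterministic output to produce a stochastic ensemble. Treating residuals as i.i.d. and additive is a deliberate simplification for illustration, and the streamflow and residual values are hypothetical:

```python
import random

def stochastic_responses(simulated, residuals, n=1000, seed=1):
    """Turn a deterministic simulation into stochastic output by
    resampling calibration residuals onto it (i.i.d. additive errors,
    a simplification of the approach discussed in the text)."""
    rng = random.Random(seed)
    return [[y + rng.choice(residuals) for y in simulated]
            for _ in range(n)]

# hypothetical deterministic streamflows and calibration residuals
sim = [10.0, 12.0, 9.5]
res = [-0.4, 0.1, 0.3, -0.1, 0.2]
ensemble = stochastic_responses(sim, res)
```

The spread of the ensemble then reflects the calibration error that a purely deterministic use of the model would silently discard; in practice residuals are often autocorrelated and heteroscedastic, which a production scheme would have to model.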
Bayesian analysis of deterministic and stochastic prisoner's dilemma games
Directory of Open Access Journals (Sweden)
Howard Kunreuther
2009-08-01
This paper compares the behavior of individuals playing a classic two-person deterministic prisoner's dilemma (PD) game with choice data obtained from repeated interdependent security prisoner's dilemma games with varying probabilities of loss and the ability to learn (or not learn) about the actions of one's counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain. We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.
Functional Abstraction of Stochastic Hybrid Systems
Bujorianu, L.M.; Blom, Henk A.P.; Hermanns, H.
2006-01-01
The verification problem for stochastic hybrid systems is quite difficult. One method to verify these systems is stochastic reachability analysis. Concepts of abstractions for stochastic hybrid systems are needed to ease the stochastic reachability analysis. In this paper, we set up different ways
Hybrid Semantics of Stochastic Programs with Dynamic Reconfiguration
Directory of Open Access Journals (Sweden)
Alberto Policriti
2009-10-01
We begin by reviewing a technique to approximate the dynamics of stochastic programs -- written in a stochastic process algebra -- by a hybrid system, suitable to capture a mixed discrete/continuous evolution. In a nutshell, the discrete dynamics is kept stochastic while the continuous evolution is given in terms of ODEs, and the overall technique therefore naturally associates a Piecewise Deterministic Markov Process with a stochastic program. The specific contribution of this work consists in an increase of the flexibility of the translation scheme, obtained by allowing a dynamic reconfiguration of the degree of discreteness/continuity of the semantics. We also discuss the relationships of this approach with other hybrid simulation strategies for biochemical systems.
Handbook of EOQ inventory problems stochastic and deterministic models and applications
Choi, Tsan-Ming
2013-01-01
This book explores deterministic and stochastic EOQ-model based problems and applications, presenting technical analyses of single-echelon EOQ model based inventory problems, and applications of the EOQ model for multi-echelon supply chain inventory analysis.
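The deterministic core of all such problems is the classic economic order quantity, Q* = sqrt(2DS/H), which balances the annual ordering cost against the holding cost. A minimal sketch with illustrative (hypothetical) parameter values:

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Classic economic order quantity Q* = sqrt(2*D*S/H): the order
    size minimising total ordering plus holding cost per year."""
    return math.sqrt(2.0 * annual_demand * order_cost / holding_cost)

# hypothetical numbers: D = 1200 units/yr, S = $50/order, H = $6/unit/yr
q_star = eoq(1200.0, 50.0, 6.0)   # sqrt(20000), about 141.4 units
```

Stochastic EOQ variants replace the fixed demand rate D with a random demand process, which is where the probabilistic analyses in the book take over.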
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
International Nuclear Information System (INIS)
Wang Zhi-Gang; Gao Rui-Mei; Fan Xiao-Ming; Han Qi-Xing
2014-01-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail: the infective persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, so that the deterministic model is extended to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations. (general)
Stochastic Reachability Analysis of Hybrid Systems
Bujorianu, Luminita Manuela
2012-01-01
Stochastic reachability analysis (SRA) is a method of analyzing the behavior of control systems which mix discrete and continuous dynamics. For probabilistic discrete systems it has been shown to be a practical verification method, but for stochastic hybrid systems it can be rather more challenging. As a verification technique, SRA can assess the safety and performance of, for example, autonomous systems, robot and aircraft path planning, and multi-agent coordination, but it can also be used for the adaptive control of such systems. Stochastic Reachability Analysis of Hybrid Systems is a self-contained and accessible introduction to this novel topic in the analysis and development of stochastic hybrid systems. Beginning with the relevant aspects of Markov models and introducing stochastic hybrid systems, the book then moves on to coverage of reachability analysis for stochastic hybrid systems. Following this build up, the core of the text first formally defines the concept of reachability in the stochastic framework and then...
The development of the deterministic nonlinear PDEs in particle physics to stochastic case
Abdelrahman, Mahmoud A. E.; Sohaly, M. A.
2018-06-01
In the present work, an accurate method, the Riccati-Bernoulli sub-ODE technique, is used for solving the deterministic and stochastic cases of the Phi-4 equation and the nonlinear Foam Drainage equation. The effect of controlling the randomness of the input on the stability of the stochastic process solution is also studied.
Stochastic effects in hybrid inflation
Martin, Jérôme; Vennin, Vincent
2012-02-01
Hybrid inflation is a two-field model where inflation ends due to an instability. In the neighborhood of the instability point, the potential is very flat and the quantum fluctuations dominate over the classical motion of the inflaton and waterfall fields. In this article, we study this regime in the framework of stochastic inflation. We numerically solve the two coupled Langevin equations controlling the evolution of the fields and compute the probability distributions of the total number of e-folds and of the inflation exit point. Then, we discuss the physical consequences of our results, in particular, the question of how the quantum diffusion can affect the observable predictions of hybrid inflation.
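The two coupled Langevin equations described above can be integrated with a simple Euler-Maruyama scheme. The sketch below is illustrative only: the potential, the coupling g, the noise amplitude sigma, and the initial field values are hypothetical placeholders, not the paper's actual hybrid-inflation potential or normalization.

```python
import numpy as np

# Illustrative Euler-Maruyama integration of two coupled Langevin equations,
#   dphi = -(dV/dphi) dN + sigma dW1,   dpsi = -(dV/dpsi) dN + sigma dW2,
# for a toy potential V = (1 - psi^2)^2/4 + g*phi^2*psi^2/2 (hypothetical,
# not the paper's actual hybrid-inflation potential).
rng = np.random.default_rng(0)

def simulate(phi0=0.1, psi0=0.01, g=2.0, sigma=0.02, dN=1e-3, n_steps=5_000):
    phi, psi = phi0, psi0
    for _ in range(n_steps):
        dV_dphi = g * phi * psi**2
        dV_dpsi = -psi * (1.0 - psi**2) + g * phi**2 * psi
        phi += -dV_dphi * dN + sigma * np.sqrt(dN) * rng.standard_normal()
        psi += -dV_dpsi * dN + sigma * np.sqrt(dN) * rng.standard_normal()
    return phi, psi

# Empirical distribution of the waterfall-field exit point over realizations
exits = np.array([simulate()[1] for _ in range(100)])
print(exits.mean(), exits.std())
```

Recording the number of e-folds at which each realization exits, rather than the field value, would give the other probability distribution discussed in the abstract.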
Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.
Kang, Yun; Lanchier, Nicolas
2011-06-01
We investigate the impact of the Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of global extinction and the basin of attraction of global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds, as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the
Compositional Modelling of Stochastic Hybrid Systems
Strubbe, S.N.
2005-01-01
In this thesis we present a modelling framework for compositional modelling of stochastic hybrid systems. Hybrid systems consist of a combination of continuous and discrete dynamics. The state space of a hybrid system is hybrid in the sense that it consists of a continuous component and a discrete
Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis
2018-01-01
Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) were initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but the early stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. In conclusion, stochastic and individualistic species responses early in succession give way to more
Evaluation of Deterministic and Stochastic Components of Traffic Counts
Directory of Open Access Journals (Sweden)
Ivan Bošnjak
2012-10-01
Full Text Available Traffic counts or statistical evidence of the traffic process are often a characteristic of time-series data. In this paper the fundamental problem of estimating the deterministic and stochastic components of a traffic process is considered, in the context of "generalised traffic modelling". Different methods for identification and/or elimination of the trend and seasonal components are applied to concrete traffic counts. Further investigations and applications of ARIMA models, Hilbert space formulations and state-space representations are suggested.
Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes
DEFF Research Database (Denmark)
Starke, Jens; Reichert, Christian; Eiswirth, Markus
2007-01-01
of stochastic origin can be observed in experiments. The models include a new approach to the platinum phase transition, which allows for a unification of existing models for Pt(100) and Pt(110). The rich nonlinear dynamical behavior of the macroscopic reaction kinetics is investigated and shows good agreement...
Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcao
2015-01-01
Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with
Deterministic and stochastic evolution equations for fully dispersive and weakly nonlinear waves
DEFF Research Database (Denmark)
Eldeberky, Y.; Madsen, Per A.
1999-01-01
and stochastic formulations are solved numerically for the case of cross shore motion of unidirectional waves and the results are verified against laboratory data for wave propagation over submerged bars and over a plane slope. Outside the surf zone the two model predictions are generally in good agreement......This paper presents a new and more accurate set of deterministic evolution equations for the propagation of fully dispersive, weakly nonlinear, irregular, multidirectional waves. The equations are derived directly from the Laplace equation with leading order nonlinearity in the surface boundary...... is significantly underestimated for larger wave numbers. In the present work we correct this inconsistency. In addition to the improved deterministic formulation, we present improved stochastic evolution equations in terms of the energy spectrum and the bispectrum for multidirectional waves. The deterministic...
Liu, Xiangdong; Li, Qingze; Pan, Jianxin
2018-06-01
Modern medical studies show that chemotherapy can help most cancer patients, especially those diagnosed early, to stabilize their disease conditions for months to years, meaning that the population of tumor cells remains nearly unchanged for quite a long time after fighting against the immune system and drugs. In order to better understand the dynamics of tumor-immune responses under chemotherapy, deterministic and stochastic differential equation models are constructed in this paper to characterize the dynamical change of tumor cells and immune cells. The basic dynamical properties, such as boundedness and the existence and stability of equilibrium points, are investigated in the deterministic model. Extended stochastic models include a stochastic differential equations (SDEs) model and a continuous-time Markov chain (CTMC) model, which account for the variability in cellular reproduction, growth and death, interspecific competition, and immune response to chemotherapy. The CTMC model is harnessed to estimate the extinction probability of tumor cells. Numerical simulations are performed, which confirm the obtained theoretical results.
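A minimal sketch of the deterministic/stochastic pairing described above, using a bare logistic growth term in place of the paper's full tumor-immune-chemotherapy model; the rates r, K, s below are hypothetical placeholders, not the paper's fitted parameters.

```python
import numpy as np

# Deterministic logistic growth vs. its SDE counterpart with multiplicative
# (demographic) noise, integrated by Euler and Euler-Maruyama respectively:
#   dT = r*T*(1 - T/K) dt                 (ODE)
#   dT = r*T*(1 - T/K) dt + s*T dW        (SDE)
rng = np.random.default_rng(0)
r, K, s, dt, n_steps = 0.5, 1000.0, 0.05, 0.01, 2000

T_det = 10.0
T_sde = np.full(200, 10.0)              # 200 stochastic realizations
for _ in range(n_steps):
    T_det += r * T_det * (1 - T_det / K) * dt
    T_sde += (r * T_sde * (1 - T_sde / K) * dt
              + s * T_sde * np.sqrt(dt) * rng.standard_normal(200))
    T_sde = np.maximum(T_sde, 0.0)      # cell counts cannot go negative

print(T_det, T_sde.mean(), T_sde.std())
```

The stochastic paths fluctuate around the deterministic carrying capacity, which is the "nearly unchanged population" regime the abstract refers to; a CTMC version would additionally allow extinction from the low-population states.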
Cotter, C J; Gottwald, G A; Holm, D D
2017-09-01
In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic fluid partial differential equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small, when pulled back to the mean flow.
Multiscale Hy3S: Hybrid stochastic simulation for supercomputers
Directory of Open Access Journals (Sweden)
Kaznessis Yiannis N
2006-02-01
Full Text Available Abstract Background Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Results Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users
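As a sketch of the partitioning idea (not the actual Hy3S algorithms), the toy model below treats one slow reaction (stochastic gene ON/OFF switching) with an exact SSA waiting time, while the fast subset (mRNA turnover) evolves as a deterministic ODE between jumps; all rate constants are hypothetical.

```python
import math
import random

# Hybrid sketch: gene ON/OFF switching is simulated exactly as a jump process
# (slow subset); the mRNA level m follows dm/dt = k_tx*[gene ON] - k_deg*m,
# solved in closed form between jumps (fast, deterministic subset).
def hybrid_gene(k_on=0.1, k_off=0.1, k_tx=10.0, k_deg=1.0, t_end=100.0, seed=1):
    rng = random.Random(seed)
    t, gene_on, m = 0.0, False, 0.0
    while t < t_end:
        rate = k_off if gene_on else k_on      # propensity of the slow reaction
        tau = min(rng.expovariate(rate), t_end - t)
        prod = k_tx if gene_on else 0.0
        m_inf = prod / k_deg                   # ODE fixed point in this mode
        m = m_inf + (m - m_inf) * math.exp(-k_deg * tau)
        t += tau
        if t < t_end:
            gene_on = not gene_on              # fire the slow reaction
    return m

m_final = hybrid_gene()
print(m_final)
```

Because the slow propensity is constant between jumps here, the exponential waiting time is exact; the general hybrid algorithms handle the harder case where the fast dynamics continuously modulate the slow propensities.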
Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Application
Chambolle, Antonin; Ehrhardt, Matthias J.; Richtarik, Peter; Schönlieb, Carola-Bibiane
2017-01-01
We propose a stochastic extension of the primal-dual hybrid gradient algorithm studied by Chambolle and Pock in 2011 to solve saddle point problems that are separable in the dual variable. The analysis is carried out for general convex-concave saddle point problems and problems that are either partially smooth / strongly convex or fully smooth / strongly convex. We perform the analysis for arbitrary samplings of dual variables, and obtain known deterministic results as a special case. Several variants of our stochastic method significantly outperform the deterministic variant on a variety of imaging tasks.
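A toy sketch of the sampling idea on a least-squares saddle point min_x max_y <Ax - b, y> - 0.5*||y||^2: only one randomly sampled dual coordinate is updated per iteration. The matrix, the step-size constants (obeying the rule tau * sigma_i * ||A_i||^2 < p_i with uniform p_i = 1/m), and the iteration count are arbitrary choices for illustration, not the paper's experiments.

```python
import numpy as np

# Stochastic primal-dual hybrid gradient (SPDHG) sketch for
#   min_x 0.5*||Ax - b||^2  via its saddle-point form.
rng = np.random.default_rng(0)
m, n = 20, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

row_norm = np.linalg.norm(A, axis=1)
sigma = 1.0 / row_norm                    # per-row dual step sizes
tau = 0.9 / (m * row_norm.max())          # primal step size
x, y = np.zeros(n), np.zeros(m)
z = A.T @ y                               # running z = A^T y
zbar = z.copy()

for _ in range(50_000):
    x = x - tau * zbar                    # primal step (g = 0 here)
    i = rng.integers(m)                   # sample one dual index uniformly
    # prox of sigma_i * f_i^* for f_i(u) = 0.5*(u - b_i)^2
    y_new = (y[i] + sigma[i] * (A[i] @ x - b[i])) / (1.0 + sigma[i])
    dz = A[i] * (y_new - y[i])
    z += dz
    zbar = z + m * dz                     # extrapolation, theta/p_i = m
    y[i] = y_new

print(0.5 * np.linalg.norm(A @ x - b)**2)
```

Setting m = 1 and updating all dual coordinates every iteration recovers the deterministic PDHG special case mentioned in the abstract.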
Deterministic and stochastic trends in the Lee-Carter mortality model
DEFF Research Database (Denmark)
Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene
2015-01-01
The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model with age...... mortality data. We find empirical evidence that this feature of the Lee–Carter model overly restricts the system dynamics and we suggest to separate the deterministic and stochastic time series components at the benefit of improved fit and forecasting performance. In fact, we find that the classical Lee......–Carter model will otherwise overestimate the reduction of mortality for the younger age groups and will underestimate the reduction of mortality for the older age groups. In practice, our recommendation means that the Lee–Carter model instead of a one-factor model should be formulated as a two- (or several...
Deterministic and stochastic trends in the Lee-Carter mortality model
DEFF Research Database (Denmark)
Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene
The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics loads with identical weights when describing the development of age specific mortality rates. Effectively this means that the main characteristics of the model simplifies to a random walk model...... that characterizes mortality data. We find empirical evidence that this feature of the Lee-Carter model overly restricts the system dynamics and we suggest to separate the deterministic and stochastic time series components at the benefit of improved fit and forecasting performance. In fact, we find...... that the classical Lee-Carter model will otherwise over estimate the reduction of mortality for the younger age groups and will under estimate the reduction of mortality for the older age groups. In practice, our recommendation means that the Lee-Carter model instead of a one-factor model should be formulated...
Price-Dynamics of Shares and Bohmian Mechanics: Deterministic or Stochastic Model?
Choustova, Olga
2007-02-01
We apply the mathematical formalism of Bohmian mechanics to describe the dynamics of shares. The main distinguishing feature of the financial Bohmian model is the possibility to take into account market psychology by describing the expectations of traders by the pilot wave. We also discuss some objections (coming from the conventional financial mathematics of stochastic processes) against the deterministic Bohmian model, in particular the objection that such a model contradicts the efficient market hypothesis, which is the cornerstone of the modern market ideology. Another objection is of a purely mathematical nature: it is related to the quadratic variation of price trajectories. One possibility to reply to this critique is to consider the stochastic Bohm-Vigier model instead of the deterministic one. We do this in the present note.
Kucza, Witold
2013-07-25
Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization methods, respectively, have been applied for the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches.
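A minimal random-walk sketch of the stochastic approach (the parameters U, R, D, time step, and particle count are arbitrary round numbers, not those of the cited experiments); the measured axial spreading can be compared against the classical Taylor-Aris prediction D + U^2*R^2/(48*D).

```python
import numpy as np

# Random-walk dispersion in a cylindrical channel with Poiseuille profile
# u(r) = 2*U*(1 - (r/R)^2): advect along x, diffuse in all directions,
# and mirror-reflect particles that step outside the tube wall.
rng = np.random.default_rng(0)
U, R, D = 1.0, 1.0, 0.05
dt, n_steps, n_part = 2e-3, 30_000, 2_000

x = np.zeros(n_part)
yz = np.zeros((n_part, 2))                      # cross-sectional coordinates
s = np.sqrt(2 * D * dt)                         # diffusive step scale
for _ in range(n_steps):
    r2 = (yz**2).sum(axis=1)
    x += 2 * U * (1 - r2 / R**2) * dt + s * rng.standard_normal(n_part)
    yz += s * rng.standard_normal((n_part, 2))
    r = np.sqrt((yz**2).sum(axis=1))
    out = r > R
    yz[out] *= ((2 * R - r[out]) / r[out])[:, None]   # mirror reflection

D_eff = x.var() / (2 * n_steps * dt)
print(D_eff, D + U**2 * R**2 / (48 * D))        # measured vs. Taylor-Aris
```

Fitting D_eff against responses recorded at a detector is the step that, combined with an optimizer such as the genetic algorithm mentioned above, yields the diffusion coefficient.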
Deterministic and stochastic control of chimera states in delayed feedback oscillator
Energy Technology Data Exchange (ETDEWEB)
Semenov, V. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Zakharova, A.; Schöll, E. [Institut für Theoretische Physik, TU Berlin, Hardenbergstraße 36, 10623 Berlin (Germany); Maistrenko, Y. [Institute of Mathematics and Center for Medical and Biotechnical Research, NAS of Ukraine, Tereschenkivska Str. 3, 01601 Kyiv (Ukraine)
2016-06-08
Chimera states, characterized by the coexistence of regular and chaotic dynamics, are found in a nonlinear oscillator model with negative time-delayed feedback. The control of these chimera states by external periodic forcing is demonstrated by numerical simulations. Both deterministic and stochastic external periodic forcing are considered. It is shown that multi-cluster chimeras can be achieved by adjusting the external forcing frequency to appropriate resonance conditions. The constructive role of noise in the formation of chimera states is shown.
Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão
2015-03-17
Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.
Monte Carlo simulation of induction time and metastable zone width; stochastic or deterministic?
Kubota, Noriaki
2018-03-01
The induction time and metastable zone width (MSZW) measured for small samples (say 1 mL or less) both scatter widely; these two quantities are thus observed as stochastic. By contrast, for large samples (say 1000 mL or more), the induction time and MSZW are observed as deterministic quantities. The reason for this experimental difference is investigated with Monte Carlo simulation. In the simulation, the time (under isothermal conditions) and supercooling (under polythermal conditions) at which a first single crystal is detected are defined as the induction time t and the MSZW ΔT for small samples, respectively. The number of crystals at the moment of t and ΔT is unity. A first crystal emerges at random due to the intrinsic nature of nucleation; accordingly, t and ΔT become stochastic. For large samples, the time and supercooling at which the number density of crystals N/V reaches a detector sensitivity (N/V)det are defined as t and ΔT for isothermal and polythermal conditions, respectively. The points of t and ΔT are those at which a large number of crystals have accumulated. Consequently, t and ΔT become deterministic according to the law of large numbers. Whether t and ΔT are stochastic or deterministic in actual experiments should therefore not be attributed to a change in nucleation mechanisms at the molecular level; it may simply be a consequence of differences in the experimental definition of t and ΔT.
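The argument can be reproduced with a few lines of simulation. Assuming a constant nucleation rate J, the time to the first crystal in volume V is exponential with mean 1/(J*V), while the time to reach a fixed detectable number density is a sum of many exponential waits; J and the detector sensitivity below are hypothetical round numbers.

```python
import numpy as np

# Scatter of the induction time versus sample volume V: detection requires
# n_det = (N/V)_det * V crystals, and the time to accumulate n_det Poisson
# nucleation events of rate J*V is Gamma(n_det, 1/(J*V)).
rng = np.random.default_rng(0)
J = 1.0e3        # hypothetical nucleation rate, crystals / (mL s)
nv_det = 1.0e4   # hypothetical detector sensitivity, crystals / mL

def induction_times(V, n_runs=2000):
    n_det = max(1, int(round(nv_det * V)))   # -> a single crystal for tiny V
    return rng.gamma(n_det, 1.0 / (J * V), size=n_runs)

for V in (1e-4, 1e-2, 1.0, 1e2):             # sample volume in mL
    t = induction_times(V)
    print(V, t.mean(), t.std() / t.mean())   # relative scatter shrinks with V
```

The mean stays near nv_det/J in every case, but the coefficient of variation falls as 1/sqrt(n_det), which is exactly the stochastic-to-deterministic transition described in the abstract.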
The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes
Bogdanova, E. V.; Kuznetsov, A. N.
2017-01-01
The solution of the problem of increasing the safety of nuclear power plants implies the existence of complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it requires long computation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is only one part of the work, which was carried out in the framework of a graduation project at the NRC "Kurchatov Institute" in cooperation with S. S. Gorodkov and M. A. Kalugin. It considers a 2-D layer of the BN-600 reactor core from the international benchmark test published in report IAEA-TECDOC-1623. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macroscopic cross-sections, diffusion coefficients, the effective multiplication factor, and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.
Simiu, Emil
2002-01-01
The classical Melnikov method provides information on the behavior of deterministic planar systems that may exhibit transitions, i.e. escapes from and captures into preferred regions of phase space. This book develops a unified treatment of deterministic and stochastic systems that extends the applicability of the Melnikov method to physically realizable stochastic planar systems with additive, state-dependent, white, colored, or dichotomous noise. The extended Melnikov method yields the novel result that motions with transitions are chaotic regardless of whether the excitation is deterministic or stochastic. It explains the role in the occurrence of transitions of the characteristics of the system and its deterministic or stochastic excitation, and is a powerful modeling and identification tool. The book is designed primarily for readers interested in applications. The level of preparation required corresponds to the equivalent of a first-year graduate course in applied mathematics. No previous exposure to d...
Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan
2016-12-01
The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of the H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
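The core trick above, fitting the log-survival function of the dwell time with a piecewise-linear curve and inverting it at an exponential random threshold, can be sketched for a scalar intensity; the rate function below is a hypothetical stand-in for the voltage-dependent channel rates.

```python
import numpy as np

# Sample inter-jump dwell times for a time-varying intensity lambda(t):
# the cumulative hazard H(t) = integral of lambda (i.e. minus the
# log-survival function) is tabulated on a grid, treated as piecewise
# linear, and inverted at a random Exp(1) threshold.
rng = np.random.default_rng(0)

lam = lambda t: 1.0 + 0.5 * np.sin(t)          # hypothetical intensity
t_grid = np.linspace(0.0, 20.0, 2001)
dH = 0.5 * (lam(t_grid[1:]) + lam(t_grid[:-1])) * np.diff(t_grid)
H = np.concatenate(([0.0], np.cumsum(dH)))     # trapezoidal cumulative hazard

def sample_jump_time():
    u = rng.exponential()                      # threshold, since P(T > t) = exp(-H(t))
    return np.interp(u, H, t_grid)             # invert the piecewise-linear H

times = np.array([sample_jump_time() for _ in range(20_000)])
print(times.mean())
```

In the full PDMP algorithm the interpolation points are chosen adaptively while H(t) is integrated alongside the voltage equation; a fixed grid suffices here to show the inversion step.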
The theory of hybrid stochastic algorithms
International Nuclear Information System (INIS)
Duane, S.; Kogut, J.B.
1986-01-01
The theory of hybrid stochastic algorithms is developed. A generalized Fokker-Planck equation is derived and is used to prove that the correct equilibrium distribution is generated by the algorithm. Systematic errors following from the discrete time-step used in the numerical implementation of the scheme are computed. Hybrid algorithms which simulate lattice gauge theory with dynamical fermions are presented. They are optimized in computer simulations and their systematic errors and efficiencies are studied. (orig.)
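The hybrid scheme is the ancestor of what is now called Hamiltonian (hybrid) Monte Carlo. A minimal sketch for a one-dimensional Gaussian action S(q) = q^2/2 (a toy target, not lattice gauge theory with dynamical fermions) shows the two ingredients: leapfrog molecular-dynamics trajectories with refreshed momenta, and a Metropolis test that removes the finite-step-size systematic error analyzed in the paper.

```python
import numpy as np

# Hybrid (Hamiltonian) Monte Carlo for the action S(q) = q^2/2, i.e. a unit
# Gaussian target exp(-S): leapfrog trajectories + Metropolis accept/reject.
rng = np.random.default_rng(0)

def grad_S(q):                                  # dS/dq
    return q

def hmc(n_samples=5000, eps=0.2, n_leap=10):
    q, out = 0.0, []
    for _ in range(n_samples):
        p = rng.standard_normal()               # refresh momentum
        q_new, p_new = q, p
        p_new -= 0.5 * eps * grad_S(q_new)      # leapfrog half step
        for _ in range(n_leap - 1):
            q_new += eps * p_new
            p_new -= eps * grad_S(q_new)
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_S(q_new)      # final half step
        dH = (q_new**2 + p_new**2 - q**2 - p**2) / 2
        if rng.random() < np.exp(-dH):          # Metropolis correction
            q = q_new
        out.append(q)
    return np.array(out)

samples = hmc()
print(samples.mean(), samples.var())
```

Without the accept/reject step the algorithm would sample a distribution biased at O(eps^2), which is the discrete-time-step systematic error computed in the paper.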
Cao, Pengxing; Tan, Xiahui; Donovan, Graham; Sanderson, Michael J; Sneyd, James
2014-08-01
The inositol trisphosphate receptor ([Formula: see text]) is one of the most important cellular components responsible for oscillations in the cytoplasmic calcium concentration. Over the past decade, two major questions about the [Formula: see text] have arisen. Firstly, how best should the [Formula: see text] be modeled? In other words, what fundamental properties of the [Formula: see text] allow it to perform its function, and what are their quantitative properties? Secondly, although calcium oscillations are caused by the stochastic opening and closing of small numbers of [Formula: see text], is it possible for a deterministic model to be a reliable predictor of calcium behavior? Here, we answer these two questions, using airway smooth muscle cells (ASMC) as a specific example. Firstly, we show that periodic calcium waves in ASMC, as well as the statistics of calcium puffs in other cell types, can be quantitatively reproduced by a two-state model of the [Formula: see text], and thus the behavior of the [Formula: see text] is essentially determined by its modal structure. The structure within each mode is irrelevant for function. Secondly, we show that, although calcium waves in ASMC are generated by a stochastic mechanism, [Formula: see text] stochasticity is not essential for a qualitative prediction of how oscillation frequency depends on model parameters, and thus deterministic [Formula: see text] models demonstrate the same level of predictive capability as do stochastic models. We conclude that, firstly, calcium dynamics can be accurately modeled using simplified [Formula: see text] models, and, secondly, to obtain qualitative predictions of how oscillation frequency depends on parameters it is sufficient to use a deterministic model.
Stochastic hybrid systems with renewal transitions
Guerreiro Tome Antunes, D.J.; Hespanha, J.P.; Silvestre, C.J.
2010-01-01
We consider Stochastic Hybrid Systems (SHSs) for which the lengths of times that the system stays in each mode are independent random variables with given distributions. We propose an analysis framework based on a set of Volterra renewal-type equations, which allows us to compute any statistical
International Nuclear Information System (INIS)
Liu, Shichang; Wang, Guanbo; Wu, Gaochen; Wang, Kan
2015-01-01
Highlights: • DRAGON and DONJON are applied and verified in calculations of research reactors. • Continuous-energy Monte Carlo calculations by RMC are chosen as the references. • The "ECCO" option of DRAGON is suitable for the calculations of research reactors. • Manual modifications of cross-sections are not necessary with DRAGON and DONJON. • DRAGON and DONJON agree well with RMC if appropriate treatments are applied. - Abstract: Simulation of the behavior of plate-type research reactors such as JRR-3M and CARR poses a challenge for traditional neutronics calculation tools and schemes for power reactors, due to the complex geometry, high heterogeneity and large neutron leakage of research reactors. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON and DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic approach. The goal of this research is to examine the capability of the deterministic code system DRAGON and DONJON to reliably simulate research reactors. The results indicate that the DRAGON and DONJON code system agrees well with the continuous-energy Monte Carlo simulation on both k_eff and flux distributions if appropriate treatments (such as the ECCO option) are applied.
Optimal power flow: a bibliographic survey II. Non-deterministic and hybrid methods
Energy Technology Data Exchange (ETDEWEB)
Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [Univ. of Jyvaskyla, Dept. of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)
2012-09-15
Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey (this article) examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)
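The general problem the survey addresses can be stated compactly. A generic, schematic OPF formulation (the notation below is illustrative, with u the control variables and x the network state) is:

```latex
\min_{u,\,x} \; f(u, x)
\quad \text{subject to} \quad
g(u, x) = 0, \qquad h(u, x) \le 0,
```

where f is the operating-cost objective, the equality constraints g collect the nonlinear power-flow balance equations, and the inequalities h collect control and system limits (voltage bounds, line ratings, generator limits). The non-deterministic and hybrid methods surveyed in Part II attack this same formulation, typically without relying on derivative information or convexity.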
Strelkov, S. A.; Sushkevich, T. A.; Maksakova, S. V.
2017-11-01
We discuss world-class Russian achievements in the theory of radiation transfer, including its polarization in natural media, and the scientific potential currently developing in Russia, which provides the methodological basis for theoretical and computational studies of radiation processes and radiation fields in natural media using supercomputers and massive parallelism. A new version of the matrix transfer operator is proposed for solving problems of polarized radiation transfer in heterogeneous media by the method of influence functions, in which deterministic and stochastic methods can be combined.
Directory of Open Access Journals (Sweden)
Rice Sean H
2008-09-01
Full Text Available Abstract Background Evolution involves both deterministic and random processes, both of which are known to contribute to directional evolutionary change. A number of studies have shown that when fitness is treated as a random variable, meaning that each individual has a distribution of possible fitness values, then both the mean and variance of individual fitness distributions contribute to directional evolution. Unfortunately the most general mathematical description of evolution that we have, the Price equation, is derived under the assumption that both fitness and offspring phenotype are fixed values that are known exactly. The Price equation is thus poorly equipped to study an important class of evolutionary processes. Results I present a general equation for directional evolutionary change that incorporates both deterministic and stochastic processes and applies to any evolving system. This is essentially a stochastic version of the Price equation, but it is derived independently and contains terms with no analog in Price's formulation. This equation shows that the effects of selection are actually amplified by random variation in fitness. It also generalizes the known tendency of populations to be pulled towards phenotypes with minimum variance in fitness, and shows that this is matched by a tendency to be pulled towards phenotypes with maximum positive asymmetry in fitness. This equation also contains a term, having no analog in the Price equation, that captures cases in which the fitness of parents has a direct effect on the phenotype of their offspring. Conclusion Directional evolution is influenced by the entire distribution of individual fitness, not just the mean and variance. Though all moments of individuals' fitness distributions contribute to evolutionary change, the ways that they do so follow some general rules. These rules are invisible to the Price equation because it describes evolution retrospectively. An equally general
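For contrast with the stochastic generalization described above, the classical Price equation, in which individual fitness and offspring phenotype are fixed, exactly known values, can be written:

```latex
\bar{w}\,\Delta\bar{z} \;=\; \operatorname{Cov}(w_i, z_i) \;+\; \operatorname{E}\!\left(w_i\,\Delta z_i\right),
```

where z_i and w_i are the phenotype and fitness of individual i, w̄ is mean fitness and Δz̄ the change in mean phenotype over one generation. The abstract's point is that treating each w_i as a random variable introduces additional terms, involving the variance and higher moments of the individual fitness distributions, that have no analog in this expression.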
Doses from aquatic pathways in CSA-N288.1: deterministic and stochastic predictions compared
Energy Technology Data Exchange (ETDEWEB)
Chouhan, S.L.; Davis, P
2002-04-01
The conservatism and uncertainty in the Canadian Standards Association (CSA) model for calculating derived release limits (DRLs) for aquatic emissions of radionuclides from nuclear facilities was investigated. The model was run deterministically using the recommended default values for its parameters, and its predictions were compared with the distributed doses obtained by running the model stochastically. Probability density functions (PDFs) for the model parameters for the stochastic runs were constructed using data reported in the literature and results from experimental work done by AECL. The default values recommended for the CSA model for some parameters were found to be lower than the central values of the PDFs in about half of the cases. Doses (ingestion, groundshine and immersion) calculated as the median of 400 stochastic runs were higher than the deterministic doses predicted using the CSA default values of the parameters for more than half (85 out of the 163) of the cases. Thus, the CSA model is not conservative for calculating DRLs for aquatic radionuclide emissions, as it was intended to be. The output of the stochastic runs was used to determine the uncertainty in the CSA model predictions. The uncertainty in the total dose was high, with the 95% confidence interval exceeding an order of magnitude for all radionuclides. A sensitivity study revealed that total ingestion doses to adults predicted by the CSA model are sensitive primarily to water intake rates, bioaccumulation factors for fish and marine biota, dietary intakes of fish and marine biota, the fraction of consumed food arising from contaminated sources, the irrigation rate, occupancy factors and the sediment solid/liquid distribution coefficient. To improve DRL models, further research into aquatic exposure pathways should concentrate on reducing the uncertainty in these parameters. The PDFs given here can be used by other modellers to test and improve their models and to ensure that DRLs
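The deterministic-versus-stochastic comparison described above can be sketched in miniature. The one-line transfer model, the point values and the lognormal PDFs below are illustrative stand-ins, not the CSA-N288.1 model:

```python
import math
import random
import statistics

def dose(intake, bioaccum, dcf):
    # Hypothetical toy transfer model: dose = annual intake
    # x bioaccumulation factor x dose conversion factor.
    return intake * bioaccum * dcf

# Deterministic run with "default" point values (illustrative only).
det = dose(intake=500.0, bioaccum=50.0, dcf=1e-8)

# Stochastic runs: sample each parameter from a lognormal PDF whose
# median equals the default, mirroring the 400-run CSA comparison in spirit.
rng = random.Random(1)
runs = [dose(rng.lognormvariate(math.log(500.0), 0.5),
             rng.lognormvariate(math.log(50.0), 0.7),
             rng.lognormvariate(math.log(1e-8), 0.3))
        for _ in range(400)]
med = statistics.median(runs)          # central stochastic prediction
p95 = sorted(runs)[int(0.95 * len(runs))]  # upper confidence bound
```

Comparing `med` and `p95` against `det` is exactly the kind of check the abstract reports: if the median of the stochastic runs exceeds the deterministic prediction, the default parameter set is not conservative.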
Hybrid stochastic simplifications for multiscale gene networks
Directory of Open Access Journals (Sweden)
Debussche Arnaud
2009-09-01
Full Text Available Abstract Background Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion, which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
Stochastic and Deterministic Models for the Metastatic Emission Process: Formalisms and Crosslinks.
Gomez, Christophe; Hartung, Niklas
2018-01-01
Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
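The first framework above, emission events at random times formalized via a Poisson process, can be sketched with a homogeneous rate; the rate and horizon below are arbitrary toy values, and the actual models in the chapter use mechanistically motivated, generally non-constant intensities:

```python
import random

def emission_times(rate, horizon, rng):
    """Sample metastatic emission times from a homogeneous Poisson
    process with the given rate over [0, horizon] (toy illustration).
    Inter-event gaps of a Poisson process are i.i.d. exponentials."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

rng = random.Random(0)
# Over many realizations, the mean number of emissions should
# approach rate * horizon (here 0.5 * 10 = 5).
counts = [len(emission_times(0.5, 10.0, rng)) for _ in range(2000)]
mean = sum(counts) / len(counts)
```

The randomness of the emission times is what allows the framework to quantify metastatic *risk* rather than a single predicted outcome.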
International Nuclear Information System (INIS)
Petrus Zacharias; Abdul Jami
2010-01-01
Research conducted by Batan's researchers has resulted in a number of competences that can be used to produce goods and services for the industrial sector. However, there are difficulties in conveying and utilizing the R and D products in industry. Evaluation results show that each research result should be accompanied by a techno-economic analysis to establish the feasibility of a product for industry. Further analysis of the multi-product concept, in which one business can produce many main products, will be done. For this purpose, a software package simulating techno-economic feasibility using deterministic and stochastic data (Monte Carlo method) has been developed for multi-product cases including side products. The programming language used is Visual Basic .NET 2003, with SQL as the database processing software. The software applies a sensitivity test to identify which investment criteria are sensitive for the prospective businesses. A performance (trial) test has been conducted and the results are in line with the design requirements, with investment feasibility and sensitivity displayed both deterministically and stochastically. These results can be interpreted very well to support business decisions. Validation has been performed using Microsoft Excel (for a single product). The results of the trial test and validation show that this package meets the demands and is ready for use. (author)
Combining deterministic and stochastic velocity fields in the analysis of deep crustal seismic data
Larkin, Steven Paul
Standard crustal seismic modeling obtains deterministic velocity models which ignore the effects of wavelength-scale heterogeneity, known to exist within the Earth's crust. Stochastic velocity models are a means to include wavelength-scale heterogeneity in the modeling. These models are defined by statistical parameters obtained from geologic maps of exposed crystalline rock, and are thus tied to actual geologic structures. Combining both deterministic and stochastic velocity models into a single model allows a realistic full wavefield (2-D) to be computed. By comparing these simulations to recorded seismic data, the effects of wavelength-scale heterogeneity can be investigated. Combined deterministic and stochastic velocity models are created for two datasets, the 1992 RISC seismic experiment in southeastern California and the 1986 PASSCAL seismic experiment in northern Nevada. The RISC experiment was located in the transition zone between the Salton Trough and the southern Basin and Range province. A high-velocity body previously identified beneath the Salton Trough is constrained to pinch out beneath the Chocolate Mountains to the northeast. The lateral extent of this body is evidence for the ephemeral nature of rifting loci as a continent is initially rifted. Stochastic modeling of wavelength-scale structures above this body indicates that little more than 5% mafic intrusion into a more felsic continental crust is responsible for the observed reflectivity. Modeling of the wide-angle RISC data indicates that coda waves following PmP are initially dominated by diffusion of energy out of the near-surface basin as the wavefield reverberates within this low-velocity layer. At later times, this coda consists of scattered body waves and P to S conversions. Surface waves do not play a significant role in this coda. Modeling of the PASSCAL dataset indicates that a high-gradient crust-mantle transition zone or a rough Moho interface is necessary to reduce precritical Pm
The theory of hybrid stochastic algorithms
International Nuclear Information System (INIS)
Kennedy, A.D.
1989-01-01
These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs
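The Metropolis algorithm that these lectures build on can be illustrated on a single degree of freedom; the quadratic "action" below is a toy stand-in for a field-theory integrand, not an example from the lectures:

```python
import math
import random

def metropolis(n_steps, step, rng):
    """Metropolis chain targeting the density pi(x) ~ exp(-x^2/2),
    a single-variable analogue of a free-field path integral."""
    x, samples = 0.0, []
    for _ in range(n_steps):
        prop = x + rng.uniform(-step, step)   # symmetric proposal
        # Accept with probability min(1, pi(prop)/pi(x)); together with
        # the symmetric proposal this enforces detailed balance.
        if rng.random() < math.exp(0.5 * (x * x - prop * prop)):
            x = prop
        samples.append(x)
    return samples

rng = random.Random(42)
s = metropolis(200000, 1.5, rng)[10000:]   # discard burn-in
m = sum(x for x in s) / len(s)             # should approach 0
v = sum(x * x for x in s) / len(s)         # should approach 1
```

Hybrid and Langevin algorithms replace the random-walk proposal with Molecular Dynamics (leapfrog) trajectories, which is precisely what reduces the random-walk behaviour the lectures analyse.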
Directory of Open Access Journals (Sweden)
Wenying Yue
2014-01-01
Full Text Available Cloud computing has come to be a significant commercial infrastructure offering utility-oriented IT services to users worldwide. However, data centers hosting cloud applications consume huge amounts of energy, leading to high operational cost and greenhouse gas emission. Therefore, green cloud computing solutions are needed not only to achieve high level service performance but also to minimize energy consumption. This paper studies the dynamic placement of virtual machines (VMs with deterministic and stochastic demands. In order to ensure a quick response to VM requests and improve the energy efficiency, a two-phase optimization strategy has been proposed, in which VMs are deployed in runtime and consolidated into servers periodically. Based on an improved multidimensional space partition model, a modified energy efficient algorithm with balanced resource utilization (MEAGLE and a live migration algorithm based on the basic set (LMABBS are, respectively, developed for each phase. Experimental results have shown that under different VMs’ stochastic demand variations, MEAGLE guarantees the availability of stochastic resources with a defined probability and reduces the number of required servers by 2.49% to 20.40% compared with the benchmark algorithms. Also, the difference between the LMABBS solution and Gurobi solution is fairly small, but LMABBS significantly excels in computational efficiency.
International Nuclear Information System (INIS)
Allen, Bruce; Creighton, Jolien D.E.; Flanagan, Eanna E.; Romano, Joseph D.
2003-01-01
In a previous paper (paper I), we derived a set of near-optimal signal detection techniques for gravitational wave detectors whose noise probability distributions contain non-Gaussian tails. The methods modify standard methods by truncating or clipping sample values which lie in those non-Gaussian tails. The methods were derived, in the frequentist framework, by minimizing false alarm probabilities at fixed false detection probability in the limit of weak signals. For stochastic signals, the resulting statistic consisted of a sum of an autocorrelation term and a cross-correlation term; it was necessary to discard 'by hand' the autocorrelation term in order to arrive at the correct, generalized cross-correlation statistic. In the present paper, we present an alternative derivation of the same signal detection techniques from within the Bayesian framework. We compute, for both deterministic and stochastic signals, the probability that a signal is present in the data, in the limit where the signal-to-noise ratio squared per frequency bin is small, where the signal is nevertheless strong enough to be detected (integrated signal-to-noise ratio large compared to 1), and where the total probability in the non-Gaussian tail part of the noise distribution is small. We show that, for each model considered, the resulting probability is to a good approximation a monotonic function of the detection statistic derived in paper I. Moreover, for stochastic signals, the new Bayesian derivation automatically eliminates the problematic autocorrelation term
International Nuclear Information System (INIS)
Solomon, S.I.; Harvey, K.D.
1982-12-01
The IAEA Safety Guide 50-SG-S10A recommends that design basis floods be estimated by deterministic techniques using probable maximum precipitation and a rainfall runoff model to evaluate the corresponding flood. The Guide indicates that stochastic techniques are also acceptable in which case floods of very low probability have to be estimated. The paper compares the results of applying the two techniques in two river basins at a number of locations and concludes that the uncertainty of the results of both techniques is of the same order of magnitude. However, the use of the unit hydrograph as the rainfall runoff model may lead in some cases to nonconservative estimates. A distributed non-linear rainfall runoff model leads to estimates of probable maximum flood flows which are very close to values of flows having a 10^6-10^7 year return interval estimated using a conservative and relatively simple stochastic technique. Recommendations on the practical application of Safety Guide 50-SG-S10A are made and the extension of the stochastic technique to ungauged sites and other design parameters is discussed
International Nuclear Information System (INIS)
Solomon, S.I.; Harvey, K.D.; Asmis, G.J.K.
1983-01-01
The IAEA Safety Guide 50-SG-S10A recommends that design basis floods be estimated by deterministic techniques using probable maximum precipitation and a rainfall runoff model to evaluate the corresponding flood. The Guide indicates that stochastic techniques are also acceptable in which case floods of very low probability have to be estimated. The paper compares the results of applying the two techniques in two river basins at a number of locations and concludes that the uncertainty of the results of both techniques is of the same order of magnitude. However, the use of the unit hydrograph as the rainfall runoff model may lead in some cases to non-conservative estimates. A distributed non-linear rainfall runoff model leads to estimates of probable maximum flood flows which are very close to values of flows having a 10^6 to 10^7 year return interval estimated using a conservative and relatively simple stochastic technique. Recommendations on the practical application of Safety Guide 50-SG-S10A are made and the extension of the stochastic technique to ungauged sites and other design parameters is discussed
Sochi, Taha
2016-09-01
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
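The energy minimization principle invoked above can be checked on the simplest possible case; the two-duct Newtonian example below is my own illustration, not one of the paper's seven-fluid test cases. For a fixed total flow Q split between two parallel ducts with linear (Poiseuille) resistances, minimizing viscous dissipation reproduces the classical equal-pressure-drop split:

```python
def dissipation(q1, Q, R1, R2):
    """Total viscous dissipation for Newtonian flow Q split between two
    parallel ducts with pressure drops dP_i = R_i * q_i (toy model):
    D = R1*q1^2 + R2*(Q-q1)^2."""
    q2 = Q - q1
    return R1 * q1 * q1 + R2 * q2 * q2

def minimize(f, lo, hi, iters=200):
    # Simple ternary search for a unimodal 1-D objective; stands in for
    # the multi-variable optimizers (Conjugate Gradient etc.) in the paper.
    for _ in range(iters):
        a, b = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(a) < f(b):
            hi = b
        else:
            lo = a
    return 0.5 * (lo + hi)

Q, R1, R2 = 1.0, 2.0, 3.0
q1_star = minimize(lambda q: dissipation(q, Q, R1, R2), 0.0, Q)
q1_exact = Q * R2 / (R1 + R2)   # conservation methods: R1*q1 = R2*q2
```

Setting the derivative of the dissipation to zero gives R1*q1 = R2*q2, i.e. equal pressure drop across both branches, the same answer the conservation-based traditional method yields.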
Deterministic linear-optics quantum computing based on a hybrid approach
International Nuclear Information System (INIS)
Lee, Seung-Woo; Jeong, Hyunseok
2014-01-01
We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources
Deterministic linear-optics quantum computing based on a hybrid approach
Energy Technology Data Exchange (ETDEWEB)
Lee, Seung-Woo; Jeong, Hyunseok [Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul, 151-742 (Korea, Republic of)
2014-12-04
We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources.
Hybrid approaches for multiple-species stochastic reaction–diffusion models
International Nuclear Information System (INIS)
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen
2015-01-01
Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries
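The coupling idea can be sketched in one dimension with diffusion only. The scheme below is a deliberately simplified toy in the same spirit, not the authors' method: the left half of the lattice tracks discrete particles that hop stochastically, the right half evolves a continuous concentration by a conservative finite-difference update, and a fractional-mass buffer at the interface keeps the total particle number exactly conserved in both directions:

```python
import random

rng = random.Random(3)
NS, ND = 10, 10            # stochastic sites | deterministic sites
d, dt = 0.1, 1.0           # hop probability per step and direction
counts = [20] * NS         # discrete particle counts (stochastic half)
u = [20.0] * ND            # continuous concentrations (PDE half)
buf = 0.0                  # fractional mass waiting to become particles

def step():
    global buf
    # Stochastic half: every particle hops left/right with prob d*dt.
    moves = [0] * NS
    cross = 0
    for i in range(NS):
        for _ in range(counts[i]):
            r = rng.random()
            if r < d * dt:                        # hop left (wall at 0)
                if i > 0:
                    moves[i] -= 1; moves[i - 1] += 1
            elif r > 1.0 - d * dt:                # hop right
                moves[i] -= 1
                if i == NS - 1:
                    cross += 1                    # crosses the interface
                else:
                    moves[i + 1] += 1
    for i in range(NS):
        counts[i] += moves[i]
    # Deterministic half: conservative pairwise exchange (discretised PDE).
    for i in range(ND - 1):
        f = d * dt * (u[i] - u[i + 1])
        u[i] -= f; u[i + 1] += f
    # Interface coupling; every transfer is balanced, so mass is conserved.
    u[0] += cross                                 # particles become mass
    out = d * dt * u[0]                           # mass flowing leftwards
    u[0] -= out
    buf += out
    whole = int(buf)                              # release whole particles
    buf -= whole
    counts[NS - 1] += whole

total0 = sum(counts) + sum(u) + buf
for _ in range(500):
    step()
total1 = sum(counts) + sum(u) + buf
```

The full method handles reactions, multiple species and moving interfaces, but the bookkeeping shown here, discrete on one side, continuous on the other, with a conservative exchange at a single interface site, is the core of any such hybrid scheme.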
Hybrid approaches for multiple-species stochastic reaction–diffusion models
Energy Technology Data Exchange (ETDEWEB)
Spill, Fabian, E-mail: fspill@bu.edu [Department of Biomedical Engineering, Boston University, 44 Cummington Street, Boston, MA 02215 (United States); Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Guerrero, Pilar [Department of Mathematics, University College London, Gower Street, London WC1E 6BT (United Kingdom); Alarcon, Tomas [Centre de Recerca Matematica, Campus de Bellaterra, Edifici C, 08193 Bellaterra (Barcelona) (Spain); Departament de Matemàtiques, Universitat Atonòma de Barcelona, 08193 Bellaterra (Barcelona) (Spain); Maini, Philip K. [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Byrne, Helen [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Computational Biology Group, Department of Computer Science, University of Oxford, Oxford OX1 3QD (United Kingdom)
2015-10-15
Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.
International Nuclear Information System (INIS)
Deco, Gustavo; Marti, Daniel
2007-01-01
The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability
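For orientation, the method of moments referred to above takes a standard closed form in one dimension; this generic illustration with Gaussian closure is not the paper's multi-population system. For a diffusion dx = f(x)dt + σ dW, Taylor-expanding f about the mean μ = E[x] and assuming a Gaussian shape with variance ν = Var(x) gives:

```latex
\frac{d\mu}{dt} = f(\mu) + \tfrac{1}{2} f''(\mu)\,\nu,
\qquad
\frac{d\nu}{dt} = 2 f'(\mu)\,\nu + \sigma^2 .
```

The unimodal Gaussian assumption enters through the closure; the bimodal extension described in the abstract amounts to replacing the single Gaussian by a two-mode mixture, tracking a (mean, variance) pair and a weight for each mode.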
Deco, Gustavo; Martí, Daniel
2007-03-01
The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability.
Recursive stochastic effects in valley hybrid inflation
Levasseur, Laurence Perreault; Vennin, Vincent; Brandenberger, Robert
2013-10-01
Hybrid inflation is a two-field model where inflation ends because of a tachyonic instability, the duration of which is determined by stochastic effects and has important observational implications. Making use of the recursive approach to the stochastic formalism presented in [L. P. Levasseur, preceding article, Phys. Rev. D 88, 083537 (2013)], these effects are consistently computed. Through an analysis of backreaction, this method is shown to converge in the valley but points toward an (expected) instability in the waterfall. It is further shown that the quasistationarity of the auxiliary field distribution breaks down in the case of a short-lived waterfall. We find that the typical dispersion of the waterfall field at the critical point is then diminished, thus increasing the duration of the waterfall phase and jeopardizing the possibility of a short transition. Finally, we find that stochastic effects worsen the blue tilt of the curvature perturbations by an O(1) factor when compared with the usual slow-roll contribution.
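The stochastic formalism underlying these computations evolves coarse-grained fields with a Langevin equation, dφ/dN = -V'(φ)/(3H²) + (H/2π)ξ(N), which can be integrated by Euler-Maruyama. The sketch below is a generic free-field check with illustrative parameter values, not the recursive valley/waterfall computation of the paper:

```python
import math
import random

rng = random.Random(7)
H = 1e-5                   # Hubble rate in Planck units (toy assumption)
dN = 0.02                  # e-fold step
steps = 500                # 10 e-folds in total
amp = H / (2 * math.pi)    # stochastic kick amplitude per sqrt(e-fold)

def evolve(dVdphi):
    """Euler-Maruyama for dphi = -V'/(3H^2) dN + (H/2pi) dW."""
    phi = 0.0
    for _ in range(steps):
        drift = -dVdphi(phi) / (3 * H * H)
        phi += drift * dN + amp * math.sqrt(dN) * rng.gauss(0.0, 1.0)
    return phi

# Massless check: after N e-folds the field dispersion should satisfy
# <phi^2> ~ (H/2pi)^2 * N, the standard stochastic-inflation result.
ens = [evolve(lambda p: 0.0) for _ in range(2000)]
var = sum(p * p for p in ens) / len(ens)
```

The paper's recursive approach iterates this kind of evolution with the noise amplitude itself corrected by the perturbations, which is where the backreaction analysis enters.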
Hybrid framework for the simulation of stochastic chemical kinetics
International Nuclear Information System (INIS)
Duncan, Andrew; Erban, Radek; Zygalakis, Konstantinos
2016-01-01
Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA) [25]. While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the “fast” reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For low reactant reactions the underlying behaviour is purely discrete, while purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretisations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.
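The two endpoints that the jump-diffusion model bridges can be shown side by side on a standard birth-death network, 0 → X at rate k and X → 0 at rate g·X (rates below are illustrative); both the exact Gillespie SSA and an Euler-Maruyama discretisation of the chemical Langevin equation should reproduce the stationary mean k/g:

```python
import math
import random

rng = random.Random(11)
k, g = 50.0, 1.0            # birth rate, per-capita death rate

def ssa(T):
    """Exact Gillespie SSA; returns the time-averaged copy number."""
    t, x, acc = 0.0, 0, 0.0
    while t < T:
        a1, a2 = k, g * x            # reaction propensities
        a0 = a1 + a2
        tau = rng.expovariate(a0)    # exponential waiting time
        acc += x * min(tau, T - t)   # time-weighted average of x
        t += tau
        if t >= T:
            break
        x += 1 if rng.random() < a1 / a0 else -1
    return acc / T

def cle(T, dt=1e-3):
    """Euler-Maruyama on the chemical Langevin approximation:
    dx = (k - g x) dt + sqrt(k + g x) dW."""
    x, acc = float(k / g), 0.0
    for _ in range(int(T / dt)):
        x += (k - g * x) * dt + math.sqrt((k + g * x) * dt) * rng.gauss(0, 1)
        x = max(x, 0.0)              # crude reflecting floor
        acc += x * dt
    return acc / T

m_ssa = ssa(1000.0)   # both should be close to k/g = 50
m_cle = cle(500.0)
```

With k large the two agree well; the breakdown the abstract describes appears when copy numbers are small (here, if k were of order 1), which is exactly the regime where the hybrid jump-diffusion model keeps the discrete description.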
Hybrid framework for the simulation of stochastic chemical kinetics
Energy Technology Data Exchange (ETDEWEB)
Duncan, Andrew, E-mail: a.duncan@imperial.ac.uk [Department of Mathematics, Imperial College, South Kensington Campus, London, SW7 2AZ (United Kingdom); Erban, Radek, E-mail: erban@maths.ox.ac.uk [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom); Zygalakis, Konstantinos, E-mail: k.zygalakis@ed.ac.uk [School of Mathematics, University of Edinburgh, Peter Guthrie Tait Road, Edinburgh, EH9 3FD (United Kingdom)
2016-12-01
International Nuclear Information System (INIS)
Karvountzis-Kontakiotis, A.; Dimaratos, A.; Ntziachristos, L.; Samaras, Z.
2017-01-01
This study contributes to the understanding of cycle-to-cycle emissions variability (CEV) in premixed spark-ignition combustion engines. A number of experimental investigations of cycle-to-cycle combustion variability (CCV) exist in published literature; however, only a handful of studies deal with CEV. This study experimentally investigates the impact of CCV on CEV of NO and CO, utilizing experimental results from a high-speed spark-ignition engine. Both CEV and CCV are shown to comprise a deterministic and a stochastic component. Results show that at maximum brake torque (MBT) operation, the indicated mean effective pressure (IMEP) maximizes and its coefficient of variation (COV_IMEP) minimizes, leading to minimum variation of NO. NO variability and hence mean NO levels can be reduced by more than 50% and 30%, respectively, at advanced ignition timing, by controlling the deterministic CCV using cycle-resolved combustion control. The deterministic component of CEV increases at lean combustion (lambda = 1.12), and this overall increases NO variability. CEV was also found to decrease with engine load. At steady speed, increasing throttle position from 20% to 80% decreased COV_IMEP, COV_NO and COV_CO by 59%, 46%, and 6%, respectively. Highly resolved engine control, by means of cycle-to-cycle combustion control, appears key to limiting the deterministic component of cyclic variability and thereby reducing overall emission levels. - Highlights: • Engine emissions variability comprises both stochastic and deterministic components. • Lean and diluted combustion conditions increase emissions variability. • Advanced ignition timing enhances the deterministic component of variability. • Load increase decreases the deterministic component of variability. • The deterministic component can be reduced by highly resolved combustion control.
Hybrid Differential Dynamic Programming with Stochastic Search
Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob
2016-01-01
Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, namely with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation, by augmenting the HDDP algorithm for a wider search of the solution space.
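The MBH augmentation is independent of the local solver, so it can be illustrated with any local descent routine standing in for HDDP. In the sketch below the objective function, the step sizes, and the finite-difference descent are all illustrative assumptions; only the perturb / re-solve / accept-if-better loop is the MBH idea itself.

```python
import math
import random

def local_descent(f, x, step=0.004, iters=400):
    """Crude finite-difference gradient descent, standing in for the HDDP local solver."""
    x = list(x)
    h = 1e-5
    for _ in range(iters):
        grad = []
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            grad.append((f(xp) - f(xm)) / (2 * h))
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x, f(x)

def monotonic_basin_hopping(f, x0, hops=40, sigma=1.0, rng=random.Random(1)):
    """MBH loop: perturb the incumbent, re-run the local solver, accept only improvements."""
    best_x, best_f = local_descent(f, x0)
    for _ in range(hops):
        trial = [xi + rng.gauss(0.0, sigma) for xi in best_x]
        cand_x, cand_f = local_descent(f, trial)
        if cand_f < best_f:  # monotonic acceptance rule
            best_x, best_f = cand_x, cand_f
    return best_x, best_f

def rastrigin(x):
    """Multi-modal test objective; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

x_best, f_best = monotonic_basin_hopping(rastrigin, x0=[3.3, -2.7])
```

A plain gradient method from this starting point is trapped in the basin near (3, -3); the monotonic acceptance rule lets the hops escape toward lower basins without ever accepting a worse incumbent.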
Directory of Open Access Journals (Sweden)
Scott Ferrenberg
2016-10-01
Full Text Available Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the
Martinez, Alexander S.; Faist, Akasha M.
2016-01-01
Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod
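The null-deviation logic described in this abstract can be sketched generically: compare the observed among-sample beta diversity with its expectation under a null model that shuffles individuals among samples. The toy community data, the Jaccard metric, and the particular shuffling null model below are illustrative assumptions, not the authors' exact models.

```python
import random
import statistics

def jaccard_dissim(a, b):
    """1 - Jaccard similarity on species presence/absence (dicts of counts)."""
    sa = {s for s, n in a.items() if n}
    sb = {s for s, n in b.items() if n}
    union = sa | sb
    return 1 - len(sa & sb) / len(union) if union else 0.0

def mean_pairwise(samples):
    """Mean pairwise dissimilarity, i.e. a simple beta-diversity summary."""
    vals = [jaccard_dissim(samples[i], samples[j])
            for i in range(len(samples)) for j in range(i + 1, len(samples))]
    return statistics.mean(vals)

def null_deviation(samples, n_null=200, rng=random.Random(7)):
    """Observed beta diversity minus the mean under a null model that shuffles
    individuals among samples, preserving sample sizes and species totals."""
    observed = mean_pairwise(samples)
    species = sorted({s for smp in samples for s in smp})
    pool = [s for smp in samples for s, n in smp.items() for _ in range(n)]
    sizes = [sum(smp.values()) for smp in samples]
    null_vals = []
    for _ in range(n_null):
        rng.shuffle(pool)
        shuffled, k = [], 0
        for size in sizes:
            draw = pool[k:k + size]
            k += size
            shuffled.append({s: draw.count(s) for s in species})
        null_vals.append(mean_pairwise(shuffled))
    return observed - statistics.mean(null_vals)

# Three samples dominated by different species: strong, non-random turnover
samples = [{"a": 10, "b": 1}, {"b": 10, "c": 1}, {"c": 10, "a": 1}]
deviation = null_deviation(samples)
```

A deviation well above zero indicates more turnover than the stochastic null expects, which is the pattern these studies read as a deterministic (e.g. environmental-filtering) signal.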
Filtering and control of stochastic jump hybrid systems
Yao, Xiuming; Zheng, Wei Xing
2016-01-01
This book presents recent research work on stochastic jump hybrid systems. Specifically, the considered stochastic jump hybrid systems include Markovian jump Itô stochastic systems, Markovian jump linear-parameter-varying (LPV) systems, Markovian jump singular systems, Markovian jump two-dimensional (2-D) systems, and Markovian jump repeated scalar nonlinear systems. Some sufficient conditions are first established, respectively, for the stability and performances of those kinds of stochastic jump hybrid systems in terms of solutions of linear matrix inequalities (LMIs). Based on the derived analysis conditions, the filtering and control problems are addressed. The book presents up-to-date research developments and novel methodologies on stochastic jump hybrid systems. The contents can be divided into two parts: the first part is focused on the robust filter design problem, while the second part puts the emphasis on the robust control problem. These methodologies provide a framework for stability and performance analy...
Deterministic flows of order-parameters in stochastic processes of quantum Monte Carlo method
International Nuclear Information System (INIS)
Inoue, Jun-ichi
2010-01-01
In terms of the stochastic process of the quantum-mechanical version of the Markov chain Monte Carlo method (MCMC), we analytically derive macroscopically deterministic flow equations of order parameters such as spontaneous magnetization in infinite-range (d = ∞ dimensional) quantum spin systems. By means of the Trotter decomposition, we consider the transition probability of Glauber-type dynamics of microscopic states for the corresponding (d + 1)-dimensional classical system. Under the static approximation, differential equations with respect to macroscopic order parameters are explicitly obtained from the master equation that describes the microscopic law. In the steady state, we show that the equations are identical to the saddle point equations for the equilibrium state of the same system. The equation for the dynamical Ising model is recovered in the classical limit. We also check the validity of the static approximation by making use of computer simulations for finite-size systems and discuss several possible extensions of our approach to disordered spin systems for statistical-mechanical informatics. In particular, we use our procedure to evaluate the decoding process of Bayesian image restoration. With the assistance of the concept of dynamical replica theory (DRT), we derive the zero-temperature flow equation of the image restoration measure, which shows 'non-monotonic' behaviour in its time evolution.
Modelling the protocol stack in NCS with deterministic and stochastic petri net
Hui, Chen; Chunjie, Zhou; Weifeng, Zhu
2011-06-01
Protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication service and system performance. Nowadays, field testing is unrealistic for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capabilities. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling analysis framework of the protocol stack leads to a lack of global optimisation for protocol reconfiguration. In this article, we propose a general modelling analysis framework for NCS based on the cross-layer concept, which is to establish an efficient system scheduling model by abstracting the time constraint, the task interrelation, the processor and the bus sub-models from upper and lower layers (application, data link and physical layer). Cross-layer design can help to overcome the inadequacy of global optimisation based on information sharing between protocol layers. To illustrate the framework, we take controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.
Petersen, Øyvind Wiig
2014-01-01
Force identification in structural dynamics is an inverse problem concerned with finding loads from measured structural response. The main objective of this thesis is to perform and study state (displacement and velocity) and force estimation by Kalman filtering. Theory on optimal control and state-space models is presented, adapted to linear structural dynamics. Accommodation of measurement noise and model inaccuracies is attained by stochastic-deterministic coupling. Explicit requirem...
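The predict/update cycle at the core of such Kalman-filter estimators fits in a few lines. The scalar sketch below tracks a constant signal in noise; all numbers are illustrative assumptions, and a structural-dynamics implementation would use the state-space matrices of the mechanical model (with an augmented state for joint force estimation) in place of these scalars.

```python
import random

def kalman_filter(zs, a=1.0, q=1e-4, h=1.0, r=1.0, x0=0.0, p0=10.0):
    """Scalar Kalman recursion: state x_{k+1} = a*x_k + w (var q),
    measurement z_k = h*x_k + v (var r). Returns final estimate and variance."""
    x, p = x0, p0
    for z in zs:
        # predict step
        x = a * x
        p = a * a * p + q
        # update step
        gain = p * h / (h * h * p + r)
        x = x + gain * (z - h * x)
        p = (1 - gain * h) * p
    return x, p

rng = random.Random(42)
true_state = 5.0
measurements = [true_state + rng.gauss(0.0, 1.0) for _ in range(200)]
estimate, variance = kalman_filter(measurements)
```

The same two-step cycle generalises directly to vector states, which is the form used for displacement/velocity estimation in structural models.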
International Nuclear Information System (INIS)
Gutierrez, Rafael M.; Useche, Gina M.; Buitrago, Elias
2007-01-01
We present a procedure developed to detect stochastic and deterministic information contained in empirical time series, useful to characterize and model different aspects of complex phenomena represented by such data. This procedure is applied to a seismological time series to obtain new information for studying and understanding geological phenomena. We use concepts and methods from nonlinear dynamics and maximum entropy. This method allows an optimal analysis of the available information.
DEFF Research Database (Denmark)
Ghoreishi, Maryam
2018-01-01
Many models within the field of optimal dynamic pricing and lot-sizing models for deteriorating items assume everything is deterministic and develop a differential equation as the core of analysis. Two prominent examples are the papers by Rajan et al. (Manag Sci 38:240–262, 1992) and Abad (Manag......, we will try to expose the model by Abad (1996) and Rajan et al. (1992) to stochastic inputs; however, designing these stochastic inputs such that they as closely as possible are aligned with the assumptions of those papers. We do our investigation through a numerical test where we test the robustness...... of the numerical results reported in Rajan et al. (1992) and Abad (1996) in a simulation model. Our numerical results seem to confirm that the results stated in these papers are indeed robust when being imposed to stochastic inputs....
Szymanowski, Mariusz; Kryza, Maciej
2017-02-01
Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. The multivariable algorithms are the most frequently applied for spatialization of air temperature, and their results in many studies are proved to be better in comparison to those obtained by various one-dimensional techniques. In most of the previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better was the quality of spatial interpolation. The main goal of the paper was to examine both above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated on different levels: from daily means to 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models: multiple linear (MLR) and geographically weighted (GWR) method, as well as their extensions to the regression-kriging form, MLRK and GWRK, respectively, were examined. Stepwise regression was used to select variables for the individual models and the cross-validation method was used to validate the results with a special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly
Directory of Open Access Journals (Sweden)
Guoxi Shi
Full Text Available Both deterministic and stochastic processes are expected to drive the assemblages of arbuscular mycorrhizal (AM) fungi, but little is known about the relative importance of these processes during the spreading of toxic plants. Here, the species composition and phylogenetic structure of AM fungal communities colonizing the roots of a toxic plant, Ligularia virgaurea, and its neighboring plants were analyzed in patches with different individual densities of L. virgaurea (representing the degree of spreading). Community compositions of AM fungi in both root systems were changed significantly by the L. virgaurea spreading, and these communities also fitted the neutral model very well. AM fungal communities in patches with absence and presence of L. virgaurea were phylogenetically random and clustered, respectively, suggesting that the principal ecological process determining AM fungal assemblage shifted from stochastic processes to environmental filtering when this toxic plant was present. Our results indicate that deterministic and stochastic processes together determine the assemblage of AM fungi, but that the dominant process can be changed by the spreading of toxic plants, and suggest that the spreading of toxic plants in alpine meadow ecosystems might involve the mycorrhizal symbionts.
Directory of Open Access Journals (Sweden)
Hosseinali Salemi
2016-04-01
Full Text Available Facility location models arise in many diverse areas such as communication networks, transportation, and distribution systems planning. They play a significant role in supply chain and operations management and are one of the main well-known topics in the strategic agenda of contemporary manufacturing and service companies, accompanied by long-lasting effects. We define a new approach for solving the stochastic single source capacitated facility location problem (SSSCFLP). Customers with stochastic demand are assigned to a set of capacitated facilities that are selected to serve them. It is demonstrated that the problem can be transformed to a deterministic Single Source Capacitated Facility Location Problem (SSCFLP) for the Poisson demand distribution. A hybrid algorithm which combines a Lagrangian heuristic with an adjusted mixture of ant colony and genetic optimization is proposed to find lower and upper bounds for this problem. Computational results on various instances with distinct properties indicate that the proposed solution approach is efficient.
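One simple way such a Poisson-to-deterministic transformation can work is a quantile rule: replace each customer's stochastic demand by the smallest level that covers it at a chosen service probability, then feed the result to any deterministic SSCFLP solver. The 95% service level and the demand means below are illustrative assumptions; the paper's exact transformation may differ.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), summed term by term."""
    term, total = math.exp(-lam), math.exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def poisson_quantile(lam, p):
    """Smallest integer k with P(X <= k) >= p."""
    k = 0
    while poisson_cdf(k, lam) < p:
        k += 1
    return k

# Hypothetical customers with Poisson demand means; each is replaced by the
# deterministic demand meeting a 95% service level.
means = [3.0, 7.5, 12.0]
det_demands = [poisson_quantile(lam, 0.95) for lam in means]
```

The deterministic demands are by construction at least as conservative as the means, so the resulting SSCFLP solution hedges against demand variability at the chosen service level.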
International Nuclear Information System (INIS)
Loulou, Richard; Labriet, Maryse; Kanudia, Amit
2009-01-01
in the reference scenario. This is particularly observable in the power generation sector and in some end-use sectors. Finally, the article discusses the pros and cons of the stochastic programming treatment of forcing targets, and compares it with the separate simulations of the various deterministic cases.
Hahl, Sayuri K; Kremling, Andreas
2016-01-01
In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
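The correspondence (and potential mismatch) between ODE fixed points and CME modes is easiest to see in the linear birth-death case, where the stationary CME follows in closed form from detailed balance. The rates below are illustrative assumptions; the paper's autoregulatory scheme is nonlinear, which is exactly where this agreement can break down.

```python
# Birth-death gene expression: 0 -> X at rate k, X -> 0 at rate g*x
k, g = 21.0, 2.0

# Deterministic ODE dx/dt = k - g*x has a single stable fixed point
ode_fixed_point = k / g

# Stationary CME from detailed balance: p(n+1)/p(n) = k / (g*(n+1)),
# truncated at a copy number far beyond the bulk of the distribution
probs = [1.0]
for n in range(200):
    probs.append(probs[-1] * k / (g * (n + 1)))
total = sum(probs)
probs = [p / total for p in probs]

cme_mode = max(range(len(probs)), key=probs.__getitem__)
cme_mean = sum(n * p for n, p in enumerate(probs))
```

Here the stationary distribution is Poisson with mean k/g, so mode and mean sit right at the ODE fixed point; nonlinear propensities and large stoichiometries break this tidy picture, as the abstract describes.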
International Nuclear Information System (INIS)
Liu, Shichang; Wang, Guanbo; Liang, Jingang; Wu, Gaochen; Wang, Kan
2015-01-01
Highlights: • DRAGON & DONJON were applied in burnup calculations of plate-type research reactors. • Continuous-energy Monte Carlo burnup calculations by RMC were chosen as references. • Comparisons of keff, isotopic densities and power distribution were performed. • Reasons leading to discrepancies between two different approaches were analyzed. • DRAGON & DONJON is capable of burnup calculations with appropriate treatments. - Abstract: The burnup-dependent core neutronics analysis of plate-type research reactors such as JRR-3M poses a challenge for traditional neutronics calculation tools and schemes for power reactors, due to the characteristics of complex geometry, high heterogeneity, large leakage and the particular neutron spectrum of research reactors. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the burnup-dependent core neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON & DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic one. In the first stage, the homogenizations of few-group cross sections by DRAGON and the full-core diffusion calculations by DONJON were verified by comparing with detailed Monte Carlo simulations. In the second stage, burnup-dependent calculations at both the assembly level and the full-core level were carried out, to examine the capability of the deterministic code system DRAGON & DONJON to reliably simulate the burnup-dependent behavior of research reactors. The results indicate that both RMC and the DRAGON & DONJON code system are capable of burnup-dependent neutronics analysis of research reactors, provided that appropriate treatments are applied at both assembly and core levels for the deterministic codes.
International Nuclear Information System (INIS)
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-01-01
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and the memory requirements of their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
International Nuclear Information System (INIS)
Liang, Thomas K.S.; Chou, Ling-Yao; Zhang, Zhongwei; Hsueh, Hsiang-Yu; Lee, Min
2011-01-01
Highlights: → A new LOCA licensing methodology (DRHM, deterministic-realistic hybrid methodology) was developed. → DRHM involves conservative Appendix K physical models and statistical treatment of plant status uncertainties. → DRHM can generate 50-100 K of PCT margin as compared to a traditional Appendix K methodology. - Abstract: It is well recognized that a realistic LOCA analysis with uncertainty quantification can generate greater safety margin as compared with classical conservative LOCA analysis using Appendix K evaluation models. The associated margin can be more than 200 K. To quantify uncertainty in BELOCA analysis, generally two kinds of uncertainties are required to be identified and quantified, involving model uncertainties and plant status uncertainties. In particular, it takes a huge effort to systematically quantify the individual model uncertainties of a best-estimate LOCA code, such as RELAP5 and TRAC. Instead of applying a full-range BELOCA methodology to cover both model and plant status uncertainties, a deterministic-realistic hybrid methodology (DRHM) was developed to support LOCA licensing analysis. In the DRHM methodology, Appendix K deterministic evaluation models are adopted to ensure model conservatism, while the CSAU methodology is applied to quantify the effect of plant status uncertainty on the PCT calculation. Overall, the DRHM methodology can generate about 80-100 K of margin on PCT as compared to Appendix K bounding-state LOCA analysis.
Automated Controller Synthesis for non-Deterministic Piecewise-Affine Hybrid Systems
DEFF Research Database (Denmark)
Grunnet, Jacob Deleuran
formations. This thesis uses a hybrid systems model of a satellite formation with possible actuator faults as a motivating example for developing an automated control synthesis method for non-deterministic piecewise-affine hybrid systems (PAHS). The method does not only open an avenue for further research...... in fault tolerant satellite formation control, but can be used to synthesise controllers for a wide range of systems where external events can alter the system dynamics. The synthesis method relies on abstracting the hybrid system into a discrete game, finding a winning strategy for the game meeting...... game and linear optimisation solvers for controller refinement. To illustrate the efficacy of the method a reoccurring satellite formation example including actuator faults has been used. The end result is the application of PAHSCTRL on the example showing synthesis and simulation of a fault tolerant...
Directory of Open Access Journals (Sweden)
MANFREDI, P.
2014-11-01
Full Text Available This paper extends recent literature results concerning the statistical simulation of circuits affected by random electrical parameters by means of the polynomial chaos framework. With respect to previous implementations, based on the generation and simulation of augmented and deterministic circuit equivalents, the modeling is extended to generic and “black-box” multi-terminal nonlinear subcircuits describing complex devices, like those found in integrated circuits. Moreover, based on recently-published works in this field, a more effective approach to generate the deterministic circuit equivalents is implemented, thus yielding more compact and efficient models for nonlinear components. The approach is fully compatible with commercial (e.g., SPICE-type) circuit simulators and is thoroughly validated through the statistical analysis of a realistic interconnect structure with a 16-bit memory chip. The accuracy and the comparison against previous approaches are also carefully established.
On ray stochasticity during lower hybrid current drive in tokamaks
International Nuclear Information System (INIS)
Bizarro, J.P.; Moreau, D.
1992-08-01
A comprehensive and detailed analysis is presented on the importance of toroidally induced ray stochasticity for the modelling of lower hybrid current drive and for the dynamics of the launched power spectrum. A combined ray tracing and Fokker-Planck code is used and the injected lower hybrid power distribution in poloidal angle and in parallel wave index is accurately represented by taking into account the poloidal extent of the antenna and by efficiently covering the full range of its radiated spectrum. The importance of the balance between the wave damping and the exponential divergence of nearby ray trajectories in determining the shape of the predicted lower hybrid power deposition profiles is emphasized. When a sufficiently large number of rays is used to densely cover the region of the launched power spectrum which is affected by stochastic effects, code predictions are shown to be stable with respect to small changes in initial conditions and plasma parameters and to be consistent with experimental data.
Investment timing under hybrid stochastic and local volatility
International Nuclear Information System (INIS)
Kim, Jeong-Hoon; Lee, Min-Ku; Sohn, So Young
2014-01-01
Highlights: • The effects of hybrid stochastic volatility on real option prices are studied. • The stochastic volatility consists of a fast mean-reverting component and a CEV type one. • A fast mean-reverting factor lowers real option prices and investment thresholds. • The increase of elasticity raises real option prices and investment thresholds. • The effects of the addition of a slowly varying factor depend upon the project value. - Abstract: We consider an investment timing problem under a real option model where the instantaneous volatility of the project value is given by a combination of a hidden stochastic process and the project value itself. The stochastic volatility part is given by a function of a fast mean-reverting process as well as a slowly varying process and the local volatility part is a power (the elasticity parameter) of the project value itself. The elasticity parameter controls directly the correlation between the project value and the volatility. Knowing that the project value represents the market price of a real asset in many applications and the value of the elasticity parameter depends on the asset, the elasticity parameter should be treated with caution for investment decision problems. Based on the hybrid structure of volatility, we investigate the simultaneous impact of the elasticity and the stochastic volatility on the real option value as well as the investment threshold
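A minimal Euler-Maruyama sketch of such a hybrid volatility structure, under simplified dynamics that stand in for the paper's model (the functional forms and all parameter values below are assumptions, not calibrated quantities):

```python
import numpy as np

# Toy hybrid stochastic/local volatility for a project value P:
#   dP = mu*P dt + exp(Y) * P**gamma dW     (CEV-type local part, elasticity gamma)
#   dY = kappa*(m - Y) dt + nu dB           (fast mean-reverting hidden factor)
rng = np.random.default_rng(42)
mu, gamma = 0.05, 0.9            # drift; gamma is the elasticity parameter
kappa, m, nu = 50.0, -2.0, 1.0   # fast mean reversion of the hidden factor Y
T, n = 1.0, 5000
dt = T / n
P, Y = 1.0, m
for _ in range(n):
    dW, dB = rng.normal(0.0, np.sqrt(dt), 2)  # independent Brownian increments
    P += mu * P * dt + np.exp(Y) * P ** gamma * dW
    Y += kappa * (m - Y) * dt + nu * dB
    P = max(P, 1e-8)             # keep the CEV term well defined
print(P)
```

Averaging many such paths of the discounted payoff, over a grid of candidate thresholds, is the brute-force way to see how the elasticity and the fast factor move the option value and the investment threshold.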
Gutiérrez, M.A.; Borst, R. de
1999-01-01
This study presents some recent results on damage evolution in quasi-brittle materials including stochastic imperfections. The material strength is described as a random field and coupled to the response. The most probable configurations of imperfections leading to failure are sought by means of an
Stochastic background of gravitational waves from hybrid preheating.
García-Bellido, Juan; Figueroa, Daniel G
2007-02-09
The process of reheating the Universe after hybrid inflation is extremely violent. It proceeds through the nucleation and subsequent collision of large concentrations of energy density in bubblelike structures, which generate a significant fraction of energy in the form of gravitational waves. We study the power spectrum of the stochastic background of gravitational waves produced at reheating after hybrid inflation. We find that the amplitude could be significant for high-scale models, although the typical frequencies are well beyond what could be reached by planned gravitational wave observatories. On the other hand, low-scale models could still produce a detectable stochastic background at frequencies accessible to those detectors. The discovery of such a background would open a new window into the very early Universe.
Energy Technology Data Exchange (ETDEWEB)
Alves, A.S.M., E-mail: asergi@eletronuclear.gov.br [Eletrobrás Termonuclear – Eletronuclear S.A., Rua da Candelária 65, 7° andar, GSN.T, 20091-906 Rio de Janeiro, RJ (Brazil); Melo, P.F. Frutuoso e, E-mail: frutuoso@nuclear.ufrj.br [Graduate Program of Nuclear Engineering, COPPE, Federal University of Rio de Janeiro, Av. Horácio Macedo 2030, Bloco G, sala 206, 21941-914 Rio de Janeiro, RJ (Brazil); Passos, E.M., E-mail: epassos@eletronuclear.gov.br [Eletrobrás Termonuclear – Eletronuclear S.A., Rua da Candelária 65, 7° andar, GSN.T, 20091-906 Rio de Janeiro, RJ (Brazil); Fontes, G.S., E-mail: gsfontes@hotmail.com [Instituto Militar de Engenharia – IME, Praça General Tibúrcio 80, 22290-270 Rio de Janeiro, RJ (Brazil)
2015-06-15
Highlights: • The water infiltration scenario is evaluated for a near surface repository. • The main objective is the determination of the critical distance of the repository. • The column liquid height in the repository is governed by an Ito stochastic equation. • Practical results are obtained for the Abadia de Goiás repository in Brazil. - Abstract: The aim of this paper is to present the stochastic and deterministic models developed for the evaluation of the critical distance of a near surface repository for the disposal of intermediate (ILW) and low level (LLW) radioactive wastes. The critical distance of a repository is defined as the distance between the repository and a well in which the water activity concentration is able to cause a radiological dose to a member of the public equal to the dose limit set by the regulatory body. The mathematical models are developed based on the Richards equation for the liquid flow in the porous media and on the solute transport equation in this medium. The release of radioactive material from the repository to the environment is considered through its base and its flow is determined by Darcy's Law. The deterministic model is obtained from the stochastic approach by neglecting the influence of the Gaussian white noise on the rainfall and the equations are solved analytically with the help of conventional calculus (non-stochastic calculus). The equations of the stochastic model are solved analytically based on the Ito stochastic calculus and numerically by using the Euler–Maruyama method. The impact on the value of the critical distance of the Abadia de Goiás repository is analyzed, taken as a study case, when the deterministic methodology is replaced by the stochastic one, considered more appropriate for modeling rainfall as a stochastic process.
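The analytic-versus-numerical comparison described above can be illustrated on a toy linear Ito SDE, where the Euler-Maruyama path can be checked against the exact solution driven by the same Brownian increments (the Ornstein-Uhlenbeck equation and parameters below are stand-ins, not the paper's Richards-equation model):

```python
import numpy as np

# Toy linear Ito SDE standing in for the liquid-height equation:
#   dH = -k*H dt + sigma dW
rng = np.random.default_rng(1)
k, sigma, H0, T, n = 0.5, 0.2, 1.0, 10.0, 10_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)

# Euler-Maruyama path
H = H0
for dw in dW:
    H += -k * H * dt + sigma * dw

# Exact (Ito-calculus) solution driven by the same increments:
#   H(T) = H0*exp(-k*T) + sigma * int_0^T exp(-k*(T-s)) dW(s)
t = np.arange(n) * dt
H_exact = H0 * np.exp(-k * T) + sigma * np.sum(np.exp(-k * (T - t)) * dW)

print(H, H_exact)   # the two should agree to O(dt)
```

The same pattern scales to the paper's setting: turn the noise term off to recover the deterministic (conventional-calculus) solution, and keep it on to propagate the rainfall fluctuations through to quantities like the critical distance.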
Comparison of deterministic and stochastic methods for time-dependent Wigner simulations
Energy Technology Data Exchange (ETDEWEB)
Shao, Sihong, E-mail: sihong@math.pku.edu.cn [LMAM and School of Mathematical Sciences, Peking University, Beijing 100871 (China); Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria)
2015-11-01
Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study about its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, which is a highly accurate deterministic method and is utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in detail. In particular, this allows us to derive a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve a satisfactory accuracy.
Directory of Open Access Journals (Sweden)
Hongyuan Qiu
2016-01-01
Full Text Available Using a finite element model, this paper investigates the torsional vibration of a drill string under combined deterministic excitation and random excitation. The random excitation is caused by the random friction coefficients between the drill bit and the bottom of the hole and assumed as white noise. Simulation shows that the responses under random excitation become random too, and the probabilistic distribution of the responses at each discretized time instant is obtained. The two points, entering and leaving the stick stage, are examined with special attention. The results indicate that the two points become random under random excitation, and the distributions are not normal even when the excitation is assumed as Gaussian white noise.
Directory of Open Access Journals (Sweden)
A. Campanile
2018-01-01
Full Text Available The incidence of collision damage models on oil tanker and bulk carrier reliability is investigated considering the IACS deterministic model against GOALDS/IMO database statistics for collision events, substantiating the probabilistic model. Statistical properties of hull girder residual strength are determined by Monte Carlo simulation, based on random generation of damage dimensions and a modified form of incremental-iterative method, to account for neutral axis rotation and equilibrium of horizontal bending moment, due to cross-section asymmetry after collision events. Reliability analysis is performed, to investigate the incidence of collision penetration depth and height statistical properties on hull girder sagging/hogging failure probabilities. Besides, the incidence of corrosion on hull girder residual strength and reliability is also discussed, focussing on gross, hull girder net and local net scantlings, respectively. The ISSC double hull oil tanker and single side bulk carrier, assumed as test cases in the ISSC 2012 report, are taken as reference ships.
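The Monte Carlo reliability procedure above reduces, in caricature, to sampling random damage dimensions, mapping them to a residual strength, and counting failures against a random demand. The closed-form capacity and the load distribution below are invented for illustration; the paper instead uses an incremental-iterative hull girder model and database-derived damage statistics:

```python
import numpy as np

# Toy structural-reliability Monte Carlo: failure when demand S exceeds
# residual capacity R, with R driven by random collision damage dimensions.
rng = np.random.default_rng(7)
n = 100_000
depth = rng.uniform(0.0, 1.0, n)       # normalized collision penetration depth
height = rng.uniform(0.0, 1.0, n)      # normalized damage height
R = 1.0 - 0.4 * depth * height         # residual-strength ratio (toy relation)
S = rng.lognormal(-0.7, 0.2, n)        # normalized bending-moment demand (toy)
pf = np.mean(S > R)                    # estimated failure probability
print(pf)
```

Swapping the uniform damage dimensions for IACS-deterministic versus GOALDS/IMO-statistical distributions, with everything else fixed, is exactly the kind of comparison the study performs on the failure probabilities.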
Developments based on stochastic and deterministic methods for studying complex nuclear systems
International Nuclear Information System (INIS)
Giffard, F.X.
2000-01-01
In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
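A toy analogue of this deterministic-importance biasing is ordinary importance sampling of a rare event: shift the sampling density toward the region that scores and reweight each sample by the density ratio. This stand-in illustrates the variance-reduction principle only, not the TRIPOLI-4/ERANOS scheme itself:

```python
import numpy as np

# Estimate the small "penetration" probability P(X > 4) for X ~ N(0,1).
rng = np.random.default_rng(3)
n = 100_000

# Analog Monte Carlo: almost no samples score, so the estimate is noisy or zero.
x = rng.standard_normal(n)
p_analog = np.mean(x > 4.0)

# Biased sampling from N(4,1): most samples score, each carries the
# likelihood-ratio weight  N(0,1)-pdf / N(4,1)-pdf = exp(8 - 4*y).
y = rng.normal(4.0, 1.0, n)
w = np.exp(8.0 - 4.0 * y)
p_biased = np.mean((y > 4.0) * w)

print(p_analog, p_biased)   # true value is about 3.17e-5
```

The deterministic adjoint (importance) map in the real codes plays the role of the shifted density here: it tells the Monte Carlo sampler where particles are worth following, and the weights keep the estimate unbiased.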
Sensitivity analysis of the titan hybrid deterministic transport code for SPECT simulation
International Nuclear Information System (INIS)
Royston, Katherine K.; Haghighat, Alireza
2011-01-01
Single photon emission computed tomography (SPECT) has been traditionally simulated using Monte Carlo methods. The TITAN code is a hybrid deterministic transport code that has recently been applied to the simulation of a SPECT myocardial perfusion study. For modeling SPECT, the TITAN code uses a discrete ordinates method in the phantom region and a combined simplified ray-tracing algorithm with a fictitious angular quadrature technique to simulate the collimator and generate projection images. In this paper, we compare the results of an experiment with a physical phantom with predictions from the MCNP5 and TITAN codes. While the results of the two codes are in good agreement, they differ from the experimental data by ∼ 21%. In order to understand these large differences, we conduct a sensitivity study by examining the effect of different parameters including heart size, collimator position, collimator simulation parameter, and number of energy groups. (author)
Ou, Bao-Quan; Liu, Chang; Sun, Yuan; Chen, Ping-Xing
2018-02-01
Inspired by the recent developments of the research on the atom-photon quantum interface and energy-time entanglement between single-photon pulses, we are motivated to study the deterministic protocol for the frequency-bin entanglement of the atom-photon hybrid system, which is analogous to the frequency-bin entanglement between single-photon pulses. We show that such entanglement arises naturally in considering the interaction between a frequency-bin entangled single-photon pulse pair and a single atom coupled to an optical cavity, via straightforward atom-photon phase gate operations. Its anticipated properties and preliminary examples of its potential application in quantum networking are also demonstrated. Moreover, we construct a specific quantum entanglement witness tool to detect such extended frequency-bin entanglement from a reasonably general set of separable states, and prove its capability theoretically. We focus on the energy-time considerations throughout the analysis.
International Nuclear Information System (INIS)
Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.
2014-01-01
Highlights: • Develop the novel Multi-Step CADIS (MS-CADIS) hybrid Monte Carlo/deterministic method for multi-step shielding analyses. • Accurately calculate shutdown dose rates using full-scale Monte Carlo models of fusion energy systems. • Demonstrate the dramatic efficiency improvement of the MS-CADIS method for the rigorous two-step calculations of the shutdown dose rate in fusion reactors. -- Abstract: The rigorous 2-step (R2S) computational system uses three-dimensional Monte Carlo transport simulations to calculate the shutdown dose rate (SDDR) in fusion reactors. Accurate full-scale R2S calculations are impractical in fusion reactors because they require calculating space- and energy-dependent neutron fluxes everywhere inside the reactor. The use of global Monte Carlo variance reduction techniques was suggested for accelerating the R2S neutron transport calculation. However, the prohibitive computational costs of these approaches, which increase with the problem size and amount of shielding materials, inhibit their ability to accurately predict the SDDR in fusion energy systems using full-scale modeling of an entire fusion plant. This paper describes a novel hybrid Monte Carlo/deterministic methodology that uses the Consistent Adjoint Driven Importance Sampling (CADIS) method but focuses on multi-step shielding calculations. The Multi-Step CADIS (MS-CADIS) methodology speeds up the R2S neutron Monte Carlo calculation using an importance function that represents the neutron importance to the final SDDR. Using a simplified example, preliminary results showed that the use of MS-CADIS enhanced the efficiency of the neutron Monte Carlo simulation of an SDDR calculation by a factor of 550 compared to standard global variance reduction techniques, and that the efficiency enhancement compared to analog Monte Carlo is higher than a factor of 10,000.
Control of deterministic and stochastic systems with several small parameters - A survey
Directory of Open Access Journals (Sweden)
Vasile Dragan
2009-07-01
Full Text Available The past three decades of research on multiparametric singularly perturbed systems are reviewed, including recent results. Particular attention is paid to stability analysis, control, filtering problems and dynamic games. First, a parameter-independent design methodology is summarized, which employs a two-time-scale and descriptor system approach without information on the small parameters. Further, variational computational algorithms are included to avoid ill-conditioned systems: the exact slow-fast decomposition method, the recursive algorithm and Newton's method are considered in particular. Convergence results are presented and the existence and uniqueness of the solutions are discussed. Second, the new results obtained via the stochastic approach are presented. Finally, the results of a simulation of a practical power system are presented to validate the efficiency of the considered design methods.
Faster PET reconstruction with a stochastic primal-dual hybrid gradient method
Ehrhardt, Matthias J.; Markiewicz, Pawel; Chambolle, Antonin; Richtárik, Peter; Schott, Jonathan; Schönlieb, Carola-Bibiane
2017-08-01
Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that can cope well with the first three of the aforementioned challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners, where the number of dual variables easily exceeds 100 million. In this work, we numerically study SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem with similar rates as PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, similar results as the deterministic algorithm can be achieved using only around 10% of the operator evaluations, making significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
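The deterministic PDHG template that SPDHG randomizes can be sketched on a small synthetic problem; SPDHG would update only a randomly chosen block of the dual variable per iteration, with probability-scaled steps, rather than all of it as done here. The problem (nonnegative least squares) and all sizes are illustrative assumptions:

```python
import numpy as np

# Deterministic PDHG for  min_{x >= 0} 0.5*||A x - b||^2,
# i.e. the saddle-point form min_x max_y <Ax, y> - F*(y) + G(x) with
# F(z) = 0.5*||z - b||^2 and G the indicator of the nonnegative orthant.
rng = np.random.default_rng(0)
m, n = 60, 40
A = rng.normal(size=(m, n))
x_true = np.maximum(rng.normal(size=n), 0.0)
b = A @ x_true                     # consistent synthetic data

L = np.linalg.norm(A, 2)           # operator norm of A
sigma = tau = 0.9 / L              # step sizes satisfying sigma*tau*L**2 < 1
x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)

for _ in range(2000):
    # dual ascent: prox of sigma*F*, where F*(y) = 0.5*||y||^2 + <b, y>
    y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
    # primal descent: prox of tau*G is projection onto x >= 0
    x_new = np.maximum(x - tau * (A.T @ y), 0.0)
    x_bar = 2.0 * x_new - x        # over-relaxation (extrapolation) step
    x = x_new

print(np.linalg.norm(A @ x - b))   # residual should be near zero
```

In the PET setting, the rows of `A` correspond to projection data; SPDHG's payoff is that each iteration touches only one subset of those rows, which is where the roughly tenfold saving in operator evaluations comes from.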
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Energy Technology Data Exchange (ETDEWEB)
Heo, W.; Kim, W.; Kim, Y. [Korea Advanced Institute of Science and Technology - KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 305-701 (Korea, Republic of); Yun, S. [Korea Atomic Energy Research Institute - KAERI, 989-111 Daedeok-daero, Yuseong-gu, Daejeon, 305-353 (Korea, Republic of)
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross section data generation module was added to the MCNP5 code. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied to the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic cross section data were used. A hybrid MCNP5/DIF3D calculation was used to analyze the core model: the cross section data were generated using MCNP5, and the k_eff and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as a reference. In terms of k_eff, the 9-group MCNP5/DIF3D analysis has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Bayesian inference for hybrid discrete-continuous stochastic kinetic models
International Nuclear Information System (INIS)
Sherlock, Chris; Golightly, Andrew; Gillespie, Colin S
2014-01-01
We consider the problem of efficiently performing simulation and inference for stochastic kinetic models. Whilst it is possible to work directly with the resulting Markov jump process (MJP), computational cost can be prohibitive for networks of realistic size and complexity. In this paper, we consider an inference scheme based on a novel hybrid simulator that classifies reactions as either ‘fast’ or ‘slow’, with fast reactions evolving as a continuous Markov process whilst the remaining slow reaction occurrences are modelled through an MJP with time-dependent hazards. A linear noise approximation (LNA) of fast reaction dynamics is employed and slow reaction events are captured by exploiting the ability to solve the stochastic differential equation driving the LNA. This simulation procedure is used as a proposal mechanism inside a particle MCMC scheme, thus allowing Bayesian inference for the model parameters. We apply the scheme to a simple application and compare the output with an existing hybrid approach and also a scheme for performing inference for the underlying discrete stochastic model. (paper)
Alijani, Azadeh Khajeh; Richardson, Magnus J E
2011-07-01
The response of a neuronal population to afferent drive can be expected to be sensitive to both the distribution and dynamics of membrane voltages within the population. Voltage fluctuations can be driven by synaptic noise, neuromodulators, or cellular inhomogeneities: processes ranging from millisecond autocorrelation times to effectively static or "frozen" noise. Here we extend previous studies of filtered fluctuations to the experimentally verified exponential integrate-and-fire model. How fast or frozen fluctuations affect the steady-state rate and firing-rate response are both examined using perturbative solutions and limits of a 1 + 2 dimensional Fokker-Planck equation. The central finding is that, under conditions of a more-or-less constant population voltage variance, the firing-rate response is only weakly dependent on the fluctuation filter constant: The voltage distribution is the principal determinant of the population response. This result is unexpected given the nature of the systems underlying the extreme limits of fast and frozen fluctuations; the first limit represents a homogeneous population of neurons firing stochastically, whereas the second limit is equivalent to a heterogeneous population of neurons firing deterministically.
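A minimal Euler-stepped simulation of the exponential integrate-and-fire model driven by white noise, with illustrative parameter values and noise scaling (not those of the study above), shows how a steady-state firing rate is obtained from the single-neuron dynamics:

```python
import numpy as np

# Exponential integrate-and-fire (EIF) neuron with Gaussian white-noise drive:
#   tau_m dV = (E_L - V + d_T*exp((V - v_T)/d_T) + mu) dt + noise
rng = np.random.default_rng(11)
tau_m, E_L, d_T, v_T = 20.0, -65.0, 3.0, -50.0   # ms, mV; spike-onset at v_T
v_re, v_th = -60.0, 0.0                          # reset and numerical threshold
mu, sigma = 25.0, 5.0                            # input mean (mV) and noise scale
dt, t_end = 0.05, 10_000.0                       # ms
v, spikes = E_L, 0
for _ in range(int(t_end / dt)):
    drift = (E_L - v + d_T * np.exp((v - v_T) / d_T) + mu) / tau_m
    v += drift * dt + sigma * np.sqrt(dt / tau_m) * rng.standard_normal()
    if v >= v_th:                # exponential term has diverged: a spike
        spikes += 1
        v = v_re                 # reset after the spike
rate = spikes / (t_end / 1000.0) # firing rate in Hz
print(rate)
```

Replacing the per-step Gaussian increments with a filtered (colored) or frozen noise source, and averaging over a population, is the numerical counterpart of the fast-versus-frozen fluctuation limits analyzed via the Fokker-Planck equation.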
International Nuclear Information System (INIS)
Maheri, Alireza
2014-01-01
Reliability of a hybrid renewable energy system (HRES) strongly depends on various uncertainties affecting the amount of power produced by the system. In the design of systems subject to uncertainties, both deterministic and nondeterministic design approaches can be adopted. In a deterministic design approach, the designer considers the presence of uncertainties and incorporates them indirectly into the design by applying safety factors. It is assumed that, by employing suitable safety factors and considering worst-case-scenarios, reliable systems can be designed. In fact, the multi-objective optimisation problem with two objectives of reliability and cost is reduced to a single-objective optimisation problem with the objective of cost only. In this paper the competence of deterministic design methods in size optimisation of reliable standalone wind–PV–battery, wind–PV–diesel and wind–PV–battery–diesel configurations is examined. For each configuration, first, using different values of safety factors, the optimal size of the system components which minimises the system cost is found deterministically. Then, for each case, using a Monte Carlo simulation, the effect of safety factors on the reliability and the cost are investigated. In performing reliability analysis, several reliability measures, namely, unmet load, blackout durations (total, maximum and average) and mean time between failures are considered. It is shown that the traditional methods of considering the effect of uncertainties in deterministic designs such as design for an autonomy period and employing safety factors have either little or unpredictable impact on the actual reliability of the designed wind–PV–battery configuration. In the case of wind–PV–diesel and wind–PV–battery–diesel configurations it is shown that, while using a high-enough margin of safety in sizing diesel generator leads to reliable systems, the optimum value for this margin of safety leading to a
Energy Technology Data Exchange (ETDEWEB)
Serghiuta, D.; Tholammakkil, J.; Shen, W., E-mail: Dumitru.Serghiuta@cnsc-ccsn.gc.ca [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)
2014-07-01
A stochastic-deterministic approach based on representation of uncertainties by subjective probabilities is proposed for evaluation of bounding values of functional failure probability and assessment of probabilistic safety margins. The approach is designed for screening and limited independent review verification. Its application is illustrated for a postulated generic CANDU LBLOCA and evaluation of the possibility distribution function of maximum bundle enthalpy considering the reactor physics part of LBLOCA power pulse simulation only. The computer codes HELIOS and NESTLE-CANDU were used in a stochastic procedure driven by the computer code DAKOTA to simulate the LBLOCA power pulse using combinations of core neutronic characteristics randomly generated from postulated subjective probability distributions with deterministic constraints and fixed transient bundle-wise thermal hydraulic conditions. With this information, a bounding estimate of functional failure probability using the limit for the maximum fuel bundle enthalpy can be derived for use in evaluation of core damage frequency. (author)
An Innovative Real-time Environment for Unified Deterministic and Stochastic Groundwater Modeling
Li, S.; Liu, Q.
2003-12-01
Despite an exponential growth of computational capability over the last two decades-one that has allowed computational science and engineering to become a unique, powerful tool for scientific discovery-the extreme cost of groundwater modeling continues to limit its use. This occurs primarily because the modeling paradigm that has been employed for decades limits our ability to take full advantage of recent developments in computer, communication, graphic, and visualization technologies. In this presentation we introduce an innovative and sophisticated computational environment for groundwater modeling that promises to eliminate the current bottleneck and greatly expand the utility of computational tools for scientific discovery related to groundwater. Based on a set of efficient and robust computational algorithms, the new software system, called Interactive Groundwater (IGW), allows simulating complex flow and transport in aquifers subject to both systematic and "randomly" varying stresses and geological and chemical heterogeneity. Adopting a new paradigm, IGW eliminates a major bottleneck inherent in the traditional fragmented modeling technologies and enables real-time modeling, real-time visualization, real-time analysis, and real-time presentation. IGW functions as a "numerical laboratory" in which a researcher can freely explore in real-time: creating visually an aquifer of desired configurations, interactively imposing desired stresses, and then immediately investigating and visualizing the geology and the processes of flow and contaminant transport and transformation. A modeler can pause to edit at any time and interact on-line with any aspects (e.g., conceptual and numerical representation, boundary conditions, model solvers, and ways of visualization and analysis) of the integrated modeling process; he/she can initiate or stop, whenever needed, particle tracking, plume modeling, subscale modeling, cross-sectional modeling, stochastic modeling, monitoring
Hu, Weigang; Zhang, Qi; Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C; An, Lizhe; Feng, Huyuan
2015-01-01
Understanding the processes that influence the structure of biotic communities is one of the major ecological topics, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To get a better understanding of the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover) from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core, and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured pattern of the bacterial communities. Bacterial communities in five soil horizons (two originated from the active layer and three from permafrost) of the permafrost core were phylogenetically random, an indicator of stochastic processes. However, we found a stronger effect of deterministic processes related to soil pH, conductivity, and organic carbon content that were structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work. Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent
Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks
Moraes, Alvaro
2015-01-01
even more, we want to achieve this objective with near optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a Multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL^-2), the same computational complexity as an exact method but with a smaller constant. We provide numerical examples to show our results.
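The SSA referenced above [3] can be illustrated with a minimal sketch. The birth-death network, rate values, and function name below are illustrative assumptions, not the authors' implementation:

```python
import random

def ssa_birth_death(x0, birth, death, t_end, seed=1):
    """Minimal Gillespie SSA for a birth-death process (hypothetical example):
    0 -> X at constant rate `birth`, X -> 0 at rate death * x."""
    rng = random.Random(seed)
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        a1, a2 = birth, death * x          # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)           # exponential time to next firing
        if t >= t_end:
            break
        # pick the reaction proportionally to its propensity
        x += 1 if rng.random() * a0 < a1 else -1
        path.append((t, x))
    return path
```

The tau-leap method [2] replaces the one-reaction-at-a-time loop with Poisson-distributed firing counts over fixed steps, which is what makes the hybrid scheme cheaper on fast reactions.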
Szabó, J. A.; Kuti, L.; Bakacsi, Zs.; Pásztor, L.; Tahy, Á.
2009-04-01
data for the multi-year simulation of SVAT processes. In order to test the elaborated methods, a sub-area of the full domain has been designated as a pilot area for this study. Considering our aims, the major achievements accomplished for the pilot area within the scope of this work include: - A harmonized 3D grid model describing the hydraulic properties of the unsaturated zone has been created (Pásztor, L. et al 2002 and 2005, Kuti, L. 2007); - The spatially distributed, physically based, distributed-parameter SVAT model DIWA (DIstributed WAtershed) (Szabó, J.A., 2007) has been adapted; - The stochastic characteristics and parameters of the weather generator have been derived from measured data series; - The stochastic weather generator has been coupled with the deterministic DIWA SVAT-type model. In this paper, the results of the coupled (deterministic-stochastic) model-based analysis of regional drought frequency and duration for a sub-area of the full domain of the Great Hungarian Plain will be reported. First the harmonized 3D grid model of the hydraulic properties of the unsaturated zone will be presented. Then a brief characterisation of the DIWA model will be given. The Markov chain based stochastic weather generator will also be presented. Finally, the results of multi-year drought frequency and duration analysis at the pilot area and conclusions will be discussed. Keywords: Drought frequency and duration analysis; multivariate analyses; recurrence analyses; extreme events; stochastic weather generator; spatially distributed SVAT model; 3D grid model of hydraulic properties of the unsaturated zone. References: Kuti, L. (2007): Agrogeological investigation of soil fertility limiting factors in the soil-parent rock-groundwater system in Hungary. In: Environment & Progress, Cluj-Napoca, nr. 10. pp. 131-145. Pásztor, L. - Szabó, J. - Bakacsi, Zs. (2002): GIS processing of large scale soil maps in Hungary
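A first-order Markov-chain precipitation generator of the kind coupled to the DIWA model can be sketched as follows; the transition probabilities and gamma parameters are illustrative placeholders, not values fitted to Hungarian data:

```python
import random

def weather_series(n_days, p_wd=0.3, p_ww=0.6, shape=0.8, scale=6.0, seed=42):
    """Two-state (wet/dry) Markov-chain weather generator sketch.
    p_wd: P(wet | previous day dry); p_ww: P(wet | previous day wet).
    Wet-day rainfall amounts are drawn from a gamma distribution."""
    rng = random.Random(seed)
    wet, rain = False, []
    for _ in range(n_days):
        wet = rng.random() < (p_ww if wet else p_wd)
        rain.append(rng.gammavariate(shape, scale) if wet else 0.0)
    return rain
```

A real generator would also condition temperature and radiation on the wet/dry state and vary the parameters seasonally.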
Stochastic linear hybrid systems: Modeling, estimation, and application
Seah, Chze Eng
Hybrid systems are dynamical systems which have interacting continuous state and discrete state (or mode). Accurate modeling and state estimation of hybrid systems are important in many applications. We propose a hybrid system model, known as the Stochastic Linear Hybrid System (SLHS), to describe hybrid systems with stochastic linear system dynamics in each mode and stochastic continuous-state-dependent mode transitions. We then develop a hybrid estimation algorithm, called the State-Dependent-Transition Hybrid Estimation (SDTHE) algorithm, to estimate the continuous state and discrete state of the SLHS from noisy measurements. It is shown that the SDTHE algorithm is more accurate or more computationally efficient than existing hybrid estimation algorithms. Next, we develop a performance analysis algorithm to evaluate the performance of the SDTHE algorithm in a given operating scenario. We also investigate sufficient conditions for the stability of the SDTHE algorithm. The proposed SLHS model and SDTHE algorithm are shown to be useful in several applications. In Air Traffic Control (ATC), to facilitate implementations of new efficient operational concepts, accurate modeling and estimation of aircraft trajectories are needed. In ATC, an aircraft's trajectory can be divided into a number of flight modes. Furthermore, as the aircraft is required to follow a given flight plan or clearance, its flight mode transitions are dependent on its continuous state. However, the flight mode transitions are also stochastic due to navigation uncertainties or unknown pilot intents. Thus, we develop an aircraft dynamics model in ATC based on the SLHS. The SDTHE algorithm is then used in aircraft tracking applications to estimate the positions/velocities of aircraft and their flight modes accurately. Next, we develop an aircraft conformance monitoring algorithm to detect any deviations of aircraft trajectories in ATC that might compromise safety. In this application, the SLHS
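The defining feature of the SLHS, mode transitions whose probability depends on the continuous state, can be shown in a toy scalar simulation. All dynamics, thresholds, and probabilities below are invented for illustration and are not the SLHS aircraft model:

```python
import random

def simulate_slhs(steps, seed=0):
    """Toy stochastic linear hybrid system: scalar state, two modes.
    Mode 0 ("climb"): x' = x + 1 + noise; mode 1 ("level"): x' = x + noise.
    The switch probability jumps once the state crosses a threshold,
    i.e. a state-dependent stochastic mode transition."""
    rng = random.Random(seed)
    x, mode, traj = 0.0, 0, []
    for _ in range(steps):
        # transition probability depends on the continuous state
        p_switch = 0.9 if (mode == 0 and x > 10.0) else 0.05
        if rng.random() < p_switch:
            mode = 1 - mode
        drift = 1.0 if mode == 0 else 0.0
        x = x + drift + rng.gauss(0.0, 0.1)
        traj.append((mode, x))
    return traj
```

An estimator such as the SDTHE algorithm would exploit exactly this state dependence when weighting its mode hypotheses.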
Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.
2017-09-01
Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis, by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated by comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - arise naturally. Once one or more elements break, the competition between the geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines a stress redistribution among the surviving wires that is no longer homogeneous; their fate is hence governed by a "Hierarchical Load Sharing" criterion.
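The baseline Weibull/ELS picture that the hybrid approach extends can be sketched with a textbook strain-controlled fiber bundle; this is the generic ELS model, not the authors' hierarchical helical model, and all parameter values are illustrative:

```python
import random

def els_bundle_curve(n_fibers=2000, m=3.0, eps_max=2.0, steps=100, seed=7):
    """Equal-Load-Sharing fiber bundle under strain control.
    Each fiber is linear-elastic with unit stiffness up to a
    Weibull-distributed breaking strain; broken fibers carry no load,
    so the remaining load is shared equally by the survivors."""
    rng = random.Random(seed)
    thresholds = [rng.weibullvariate(1.0, m) for _ in range(n_fibers)]
    curve = []
    for i in range(steps + 1):
        eps = eps_max * i / steps
        surviving = sum(1 for th in thresholds if th > eps)
        curve.append((eps, eps * surviving / n_fibers))  # bundle stress
    return curve
```

The resulting curve rises, peaks, and softens post-elastically as fibers fail, which is the nonlinear behaviour the probabilistic strategy captures analytically.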
Comparison of TITAN hybrid deterministic transport code and MCNP5 for simulation of SPECT
International Nuclear Information System (INIS)
Royston, K.; Haghighat, A.; Yi, C.
2010-01-01
Traditionally, Single Photon Emission Computed Tomography (SPECT) simulations use Monte Carlo methods. The hybrid deterministic transport code TITAN has recently been applied to the simulation of a SPECT myocardial perfusion study. The TITAN SPECT simulation uses the discrete ordinates formulation in the phantom region and a simplified ray-tracing formulation outside of the phantom. A SPECT model has been created in the Monte Carlo N-Particle (MCNP5) code for comparison. In MCNP5 the collimator is directly modeled, but TITAN instead simulates the effect of collimator blur using a circular ordinate splitting technique. Projection images created using the TITAN code are compared to results using MCNP5 for three collimator acceptance angles. Normalized projection images for 2.97 deg, 1.42 deg and 0.98 deg collimator acceptance angles had maximum relative differences of 21.3%, 11.9% and 8.3%, respectively. Visually the images are in good agreement. Profiles through the projection images were plotted, showing that the TITAN results followed the shape of the MCNP5 results with some differences in magnitude. A timing comparison on 16 processors found that the TITAN code completed the calculation 382 to 2787 times faster than MCNP5. Both codes exhibit good parallel performance. (author)
International Nuclear Information System (INIS)
Kim, Jong Woo; Woo, Myeong Hyeon; Kim, Jae Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung
2017-01-01
In this study, a hybrid Monte Carlo/deterministic method for radiation transport analysis of global systems is presented. The FW-CADIS methodology constructs the weight-window parameters and is useful for most global MC calculations. However, due to the assumption that a particle is scored at a tally, fewer particles are transported to the periphery of mesh tallies. To compensate for this space dependency, we modified the relevant module in the ADVANTG code to add the proposed method. We solved a simple test problem for comparison with results from the FW-CADIS methodology, and it was confirmed that a uniform statistical error was secured as intended. In the future, more practical problems will be added. The hybrid Monte Carlo/deterministic method should prove useful for radiation transport analysis in global transport problems.
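The weight-window mechanism that FW-CADIS parameterizes can be sketched with the generic splitting/roulette logic used in Monte Carlo variance reduction; this is textbook logic, not the ADVANTG module, and the bound values are arbitrary:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Generic weight-window check. Returns the list of particle weights
    that continue transport: heavy particles are split, light particles
    play Russian roulette, in-window particles pass through unchanged.
    Expected total weight is preserved in every branch."""
    if weight > w_high:                   # split heavy particles
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:                    # roulette light particles
        # survive with probability weight / w_low at weight w_low
        return [w_low] if rng.random() < weight / w_low else []
    return [weight]
```

FW-CADIS chooses `w_low`/`w_high` per mesh cell and energy bin from the deterministic adjoint solution so that the surviving population is spread uniformly over the tally region.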
Directory of Open Access Journals (Sweden)
YouHua Chen
2014-06-01
Full Text Available In the present report, the coexistence of Prisoners' Dilemma game players (cooperators and defectors) was explored in an individual-based framework considering the impacts of deterministic and stochastic waiting time (WT) for triggering mortality and/or colonization events. For the deterministic type of waiting time, the time step for triggering a mortality and/or colonization event is fixed. For the stochastic type, whether a mortality and/or colonization event is triggered at each time step of a simulation is randomly determined by a given acceptance probability (the event takes place when a variate drawn from a uniform distribution on [0,1] is smaller than the acceptance probability). The two strategies of modeling waiting time are considered simultaneously and applied to both quantities (mortality: WTm, colonization: WTc). As such, when WT (WTm and/or WTc) is an integer >=1, it indicates a deterministic triggering strategy. In contrast, when 1>WT>0, it indicates a stochastic triggering strategy and the WT value itself is used as the acceptance probability. The parameter space between the waiting time for mortality (WTm in [0.1,40]) and colonization (WTc in [0.1,40]) was traversed to explore the coexistence and non-coexistence regions. The role of the defense award was evaluated. My results showed that one non-coexistence region is identified consistently, located in the area where 1>=WTm>=0.3 and 40>=WTc>=0.1. As a consequence, it was found that the coexistence of cooperators and defectors in the community is largely dependent on the waiting time of mortality events, regardless of the defense or cooperation rewards. When the mortality events happen in terms of stochastic waiting time (1>=WTm>=0.3), extinction of either cooperators or defectors or both could be very likely, leading to the emergence of non-coexistence scenarios. However, when the mortality events occur in forms of relatively long deterministic
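The waiting-time rule described above reduces to a small triggering predicate, sketched here as stated in the abstract (the surrounding game dynamics are omitted):

```python
import random

def event_triggered(wt, step, rng):
    """Waiting-time rule from the model above. wt >= 1 (an integer) means a
    deterministic trigger every wt time steps; 0 < wt < 1 means a stochastic
    trigger with acceptance probability wt at every time step."""
    if wt >= 1:
        return step % int(wt) == 0     # deterministic: fixed period
    return rng.random() < wt           # stochastic: accept with prob. wt
```

For small stochastic `wt` the events are rare and irregularly spaced, whereas a deterministic `wt` of the same mean rate produces a strict period, which is the distinction the coexistence analysis turns on.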
Naseri Kouzehgarani, Asal
2009-12-01
as partially observable and uncertain data. We introduce the Hybrid Hidden Markov Modeling (HHMM) formalism to enable the prediction of the stochastic aircraft states (and thus, potential conflicts), by combining elements of the probabilistic timed input output automaton and the partially observable Markov decision process frameworks, along with the novel addition of a Markovian scheduler to remove the non-deterministic elements arising from the enabling of several actions simultaneously. Comparisons of aircraft in level, climbing/descending and turning flight are performed, and unknown flight track data is evaluated probabilistically against the tuned model in order to assess the effectiveness of the model in detecting the switch between multiple flight modes for a given aircraft. This also allows for the generation of probabilistic distribution over the execution traces of the hybrid hidden Markov model, which then enables the prediction of the states of aircraft based on partially observable and uncertain data. Based on the composition properties of the HHMM, we study a decentralized air traffic system where aircraft are moving along streams and can perform cruise, accelerate, climb and turn maneuvers. We develop a common decentralized policy for conflict avoidance with spatially distributed agents (aircraft in the sky) and assure its safety properties via correctness proofs.
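The basic inference step underlying flight-mode detection in such a framework is the discrete HMM forward filter, sketched below; the full HHMM adds continuous dynamics and a Markovian scheduler, which are omitted, and the example matrices are invented:

```python
def hmm_forward(obs, T, E, pi):
    """Normalized forward filter for a discrete HMM.
    T[i][j]: mode transition probability, E[i][o]: observation probability,
    pi: prior over modes. Returns the posterior over the current mode."""
    alpha = [pi[i] * E[i][obs[0]] for i in range(len(pi))]
    for o in obs[1:]:
        alpha = [E[j][o] * sum(alpha[i] * T[i][j] for i in range(len(alpha)))
                 for j in range(len(alpha))]
        norm = sum(alpha)                 # renormalize to avoid underflow
        alpha = [a / norm for a in alpha]
    return alpha
```

With a "level" and a "climb" mode, a run of climb-like observations quickly concentrates the posterior on the climb mode, which is the mode-switch detection described in the abstract.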
Energy Technology Data Exchange (ETDEWEB)
Giffard, F.X
2000-05-19
In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
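The kind of speed-up that importance-map biasing delivers can be illustrated with a one-dimensional toy: estimating a deep-penetration probability P(X > L) for X ~ Exp(1) by analog sampling versus sampling from a stretched distribution with statistical weights. This is a generic importance-sampling cartoon, not the ERANOS/TRIPOLI-4 scheme, and all numbers are illustrative:

```python
import math, random

def analog_and_biased(L=10.0, n=20000, lam=0.1, seed=3):
    """Estimate P(X > L), X ~ Exp(1) (exact value exp(-L)), two ways:
    analog sampling, and importance sampling from Exp(lam) with lam < 1,
    where each scoring sample carries the weight f(x)/g(x)."""
    rng = random.Random(seed)
    analog = sum(rng.expovariate(1.0) > L for _ in range(n)) / n
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam)           # biased, deeper-penetrating draw
        if x > L:
            total += math.exp(-x) / (lam * math.exp(-lam * x))  # weight f/g
    return analog, total / n
```

The analog estimate is nearly always zero at this sample size, while the biased estimator lands close to exp(-10) with a few percent relative error, mirroring the factor-100 speed-ups reported above in spirit.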
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.
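For the acyclic case, solving such a game reduces to backward induction over the game graph. The sketch below handles only DAGs (Washburn's games also allow cycles, which need more care), and the example game is invented:

```python
def solve_dag_game(graph, payoff, player):
    """Backward-induction values for an acyclic two-player zero-sum game
    on a graph with no chance moves. graph: node -> successor list;
    payoff: terminal node -> real payoff; player: node -> 'max' or 'min'."""
    memo = {}
    def value(v):
        if v in memo:
            return memo[v]
        if not graph.get(v):               # terminal node: fixed payoff
            memo[v] = payoff[v]
        else:                              # mover optimizes over successors
            vals = [value(w) for w in graph[v]]
            memo[v] = max(vals) if player[v] == 'max' else min(vals)
        return memo[v]
    return value
```

With memoization each edge is examined once, so the DAG case is linear time; the almost-linear bound in the paper concerns the harder general setting.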
International Nuclear Information System (INIS)
Ibrahim, Ahmad M.; Wilson, Paul P.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Grove, Robert E.
2014-01-01
Highlights: •Calculate the prompt dose rate everywhere throughout the entire fusion energy facility. •Utilize FW-CADIS to accurately perform difficult neutronics calculations for fusion energy systems. •Develop three mesh adaptivity algorithms to enhance FW-CADIS efficiency in fusion-neutronics calculations. -- Abstract: Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class supercomputer.
Robustness Analysis of Hybrid Stochastic Neural Networks with Neutral Terms and Time-Varying Delays
Directory of Open Access Journals (Sweden)
Chunmei Wu
2015-01-01
Full Text Available We analyze the robustness of global exponential stability of hybrid stochastic neural networks subject to neutral terms and time-varying delays simultaneously. Given globally exponentially stable hybrid stochastic neural networks, we characterize the upper bounds of contraction coefficients of neutral terms and time-varying delays by using the transcendental equation. Moreover, we prove theoretically that, for any globally exponentially stable hybrid stochastic neural networks, if additive neutral terms and time-varying delays are smaller than the upper bounds derived, then the perturbed neural networks are guaranteed to also be globally exponentially stable. Finally, a numerical simulation example is given to illustrate the presented criteria.
Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks
Moraes, Alvaro
2015-01-07
Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from the kinetic point of view, the time-evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given a SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, that with high probability can produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, find an estimator, M, based on approximate sampled paths of X, such that P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η; even more, we want to achieve this objective with near optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a Multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL^-2), the same computational complexity as an exact method but with a smaller constant. We provide numerical examples to show our results.
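The tau-leap method [2], which supplies the cheap coarse levels of the multilevel scheme, can be sketched for a pure decay network; the reaction set, step size, and function name are illustrative assumptions:

```python
import math, random

def tau_leap_decay(x0=1000, c=1.0, t_end=2.0, tau=0.01, seed=5):
    """Tau-leap approximation for X -> 0 with propensity c * x: instead of
    simulating every firing, fire a Poisson(a * tau) number of reactions
    per fixed step tau (Poisson sample drawn by inverse-CDF search)."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end and x > 0:
        a = c * x                          # current propensity
        lam = a * tau
        u, k, p = rng.random(), 0, math.exp(-lam)
        cdf = p
        while u > cdf:                     # inverse-CDF Poisson sampling
            k += 1
            p *= lam / k
            cdf += p
        x = max(x - k, 0)                  # never leap below zero
        t += tau
    return x
```

The multilevel strategy couples such paths at successively smaller `tau`, correcting the bias with a telescoping sum so that the overall work scales like O(TOL^-2).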
Stochastic Optimal Control of Parallel Hybrid Electric Vehicles
Directory of Open Access Journals (Sweden)
Feiyan Qin
2017-02-01
Full Text Available Energy management strategies (EMSs) in hybrid electric vehicles (HEVs) are highly related to the fuel economy and emission performances. However, EMS design constitutes a challenging problem due to the complex structure of a HEV and the unknown or partially known driving cycles. To address this problem, this paper adopts a stochastic dynamic programming (SDP) method for the EMS of a specially designed vehicle, a pre-transmission single-shaft torque-coupling parallel HEV. In this parallel HEV, the auto clutch output is connected to the transmission input through an electric motor, which enables efficient motor-assist operation. In this EMS, the driver's demanded torque is modeled as a one-state Markov process to represent the uncertainty of future driving situations. The obtained EMS has been evaluated with ADVISOR2002 over two standard government drive cycles and a self-defined one, and compared with a dynamic programming (DP) strategy and a rule-based one. Simulation results have shown the real-time performance of the proposed approach, and a potential vehicle performance improvement relative to the rule-based strategy.
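The core of an SDP-based EMS is value iteration over a Markov model of the demanded torque, with power-split actions scored by stage cost. The sketch below is a deliberately tiny stand-in (two torque states, two actions, made-up costs), not the paper's ADVISOR model:

```python
def sdp_policy(P, cost, n_iter=200, gamma=0.95):
    """Infinite-horizon value iteration for a toy EMS.
    States: quantized demanded-torque levels evolving with transition
    matrix P (exogenous, so action-independent). Actions: power-split
    choices with stage costs cost[s][a] (e.g. fuel use)."""
    n = len(P)
    V = [0.0] * n
    for _ in range(n_iter):
        V = [min(cost[s][a] + gamma * sum(P[s][t] * V[t] for t in range(n))
                 for a in range(len(cost[s])))
             for s in range(n)]
    policy = [min(range(len(cost[s])),
                  key=lambda a, s=s: cost[s][a]
                  + gamma * sum(P[s][t] * V[t] for t in range(n)))
              for s in range(n)]
    return V, policy
```

Because the optimization runs offline, the resulting policy is a lookup table over states, which is what gives SDP its real-time advantage over full-horizon DP.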
Stochastic resonance in small-world neuronal networks with hybrid electrical–chemical synapses
International Nuclear Information System (INIS)
Wang, Jiang; Guo, Xinmeng; Yu, Haitao; Liu, Chen; Deng, Bin; Wei, Xile; Chen, Yingyuan
2014-01-01
Highlights: •We study stochastic resonance in small-world neural networks with hybrid synapses. •The resonance effect depends largely on the probability of chemical synapse. •An optimal chemical synapse probability exists to evoke network resonance. •Network topology affects the stochastic resonance in hybrid neuronal networks. - Abstract: The dependence of stochastic resonance in small-world neuronal networks with hybrid electrical–chemical synapses on the probability of chemical synapse and the rewiring probability is investigated. A subthreshold periodic signal is imposed on one single neuron within the neuronal network as a pacemaker. It is shown that, irrespective of the probability of chemical synapse, there exists a moderate intensity of external noise optimizing the response of neuronal networks to the pacemaker. Moreover, the effect of pacemaker driven stochastic resonance of the system depends largely on the probability of chemical synapse. A high probability of chemical synapse will need lower noise intensity to evoke the phenomenon of stochastic resonance in the networked neuronal systems. In addition, for fixed noise intensity, there is an optimal chemical synapse probability, which can promote the propagation of the localized subthreshold pacemaker across neural networks. The optimal chemical synapse probability becomes even larger as the coupling strength decreases. Furthermore, the small-world topology has a significant impact on the stochastic resonance in hybrid neuronal networks. It is found that increasing the rewiring probability can always enhance the stochastic resonance until it approaches the random network limit.
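The basic stochastic-resonance effect, noise helping a subthreshold periodic signal cross a threshold, can be shown with a single threshold detector; this minimal cartoon is not the neuronal-network model above, and all parameters are illustrative:

```python
import math, random

def resonance_response(sigma, n=10000, amp=0.5, period=100, thr=1.0, seed=9):
    """Threshold unit driven by a subthreshold sinusoid plus Gaussian noise
    of strength sigma. Returns the correlation between firing events and
    the signal phase: ~0 without noise, positive at moderate noise, and
    degrading again when noise swamps the signal."""
    rng = random.Random(seed)
    total = 0.0
    for t in range(n):
        s = math.sin(2 * math.pi * t / period)
        fired = amp * s + rng.gauss(0.0, sigma) > thr
        total += s if fired else 0.0
    return total / n
```

Sweeping `sigma` traces the characteristic resonance curve: the response peaks at a moderate noise intensity, just as reported for the pacemaker-driven networks above.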
DEFF Research Database (Denmark)
Nielsen, Steen
2000-01-01
This paper expands the traditional product costing technique by including a stochastic form in a complex production process for product costing. The stochastic phenomenon in flexible manufacturing technologies is seen as an important phenomenon that companies try to decrease or eliminate. DFM has been used for evaluating the appropriateness of the firm's production capability. In this paper a simulation model is developed to analyze the relevant cost behaviour with respect to DFM and to develop a more streamlined process in the layout of the manufacturing process.
Directory of Open Access Journals (Sweden)
Morillon B.
2013-03-01
Full Text Available JEFF-3.1.1 is the reference nuclear data library in CEA for the design calculations of the next nuclear power plants. The validation of the new neutronics code systems is based on this library, and changes in nuclear data should be looked at closely. Some new actinide evaluation files at high energies were proposed by CEA/Bruyères-le-Chatel in 2009 and have been integrated in the JEFF-3.2T1 test release. For the new release JEFF-3.2, CEA will build new evaluation files for the actinides, which should be a combination of the new evaluated data coming from BRC-2009 in the high energy range and improvements or new evaluations in the resolved and unresolved resonance range from CEA-Cadarache. To prepare the building of these new files, benchmarking the BRC-2009 library against the JEFF-3.1.1 library was very important. The crucial points to evaluate were the improvements in the continuum range and the discrepancies in the resonance range. The present work presents, for a selected set of benchmarks, the discrepancies in the effective multiplication factor obtained while using the JEFF-3.1.1 or JEFF-3.2T1 library with the deterministic code package ERANOS/PARIS and the stochastic code TRIPOLI-4. They have both been used to calculate cross section perturbations or other nuclear data perturbations when possible. This has permitted the identification of the origin of the discrepancies in reactivity calculations. In addition, this work also shows the importance of cross section processing validation. Actually, some fast neutron spectrum calculations have led to opposite tendencies between the deterministic code package and the stochastic code. Some particular nuclear data (MT=5 in ENDF terminology) seem to be incompatible with the current MERGE or GECCO processing codes.
Directory of Open Access Journals (Sweden)
Youness El Ansari
2017-01-01
Full Text Available We investigate the various conditions that control the extinction and stability of a nonlinear mathematical spread model with stochastic perturbations. This model describes the spread of viruses into an infected computer network which is powered by a system of antivirus software. The system is analyzed by using the stability theory of stochastic differential equations and computer simulations. First, we study the global stability of the virus-free equilibrium state and the virus-epidemic equilibrium state. Furthermore, we use the Itô formula and other theorems of stochastic differential equations to discuss the extinction and the stationary distribution of our system. The analysis gives a sufficient condition for the infection to be extinct (i.e., the number of viruses tends exponentially to zero). The ergodicity of the solution and the stationary distribution can be obtained if the basic reproduction number Rp is bigger than 1 and the intensities of stochastic fluctuations are small enough. Numerical simulations are carried out to illustrate the theoretical results.
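The persistence-versus-extinction dichotomy can be reproduced numerically with an Euler-Maruyama integration of a noisy SIS-type fraction-infected equation; this generic stand-in, di = (beta·i·(1−i) − gamma·i) dt + sigma·i dW, is not the paper's exact model, and all parameter values are illustrative:

```python
import math, random

def stochastic_sis(beta=0.4, gamma_=0.2, sigma=0.1, i0=0.01, t_end=50.0,
                   dt=0.01, seed=11):
    """Euler-Maruyama path of a stochastically perturbed SIS-type model for
    the infected fraction i. Supercritical parameters settle near the
    endemic level; subcritical ones decay toward extinction."""
    rng = random.Random(seed)
    i, steps = i0, int(t_end / dt)
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))          # Brownian increment
        i += (beta * i * (1 - i) - gamma_ * i) * dt + sigma * i * dw
        i = min(max(i, 0.0), 1.0)                   # clamp to [0, 1]
    return i
```

With beta > gamma_ (reproduction number above 1) and small noise the path hovers near the endemic equilibrium; shrinking beta below gamma_ drives the infection exponentially to zero, matching the extinction condition in the abstract qualitatively.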
Cai, Kaiming; Yang, Meiyin; Ju, Hailang; Wang, Sumei; Ji, Yang; Li, Baohe; Edmonds, Kevin William; Sheng, Yu; Zhang, Bao; Zhang, Nan; Liu, Shuai; Zheng, Houzhi; Wang, Kaiyou
2017-07-01
All-electrical and programmable manipulations of ferromagnetic bits are highly pursued for the aim of high integration and low energy consumption in modern information technology. Methods based on spin-orbit torque switching in heavy metal/ferromagnet structures have been proposed with an assisting magnetic field, and are heading toward deterministic switching without an external magnetic field. Here we demonstrate that an in-plane effective magnetic field can be induced by an electric field without breaking the symmetry of the structure of the thin film, and realize deterministic magnetization switching in a hybrid ferromagnetic/ferroelectric structure with Pt/Co/Ni/Co/Pt layers on a PMN-PT substrate. The effective magnetic field can be reversed by changing the direction of the applied electric field on the PMN-PT substrate, which fully replaces the controllability function of the external magnetic field. The electric field is found to generate an additional spin-orbit torque on the CoNiCo magnets, which is confirmed by macrospin calculations and micromagnetic simulations.
González-Carrasco, J. F.; Gonzalez, G.; Aránguiz, R.; Yanez, G. A.; Melgar, D.; Salazar, P.; Shrivastava, M. N.; Das, R.; Catalan, P. A.; Cienfuegos, R.
2017-12-01
The definition of plausible worst-case tsunamigenic scenarios plays a relevant role in tsunami hazard assessment focused on emergency preparedness and evacuation planning for coastal communities. During the last decade, the occurrence of major and moderate tsunamigenic earthquakes along worldwide subduction zones has given clues about critical parameters involved in near-field tsunami inundation processes, i.e. slip spatial distribution, shelf resonance of edge waves and local geomorphology effects. To analyze the effects of these seismic and hydrodynamic variables on the epistemic uncertainty of coastal inundation, we implement a combined methodology using deterministic and probabilistic approaches to construct 420 tsunamigenic scenarios in a mature seismic gap of southern Peru and northern Chile, extending from 17ºS to 24ºS. The deterministic scenarios are calculated using a regional distribution of trench-parallel gravity anomaly (TPGA) and trench-parallel topography anomaly (TPTA), the three-dimensional Slab 1.0 worldwide subduction zone geometry model and published interseismic coupling (ISC) distributions. As a result, we find four higher slip deficit zones interpreted as major seismic asperities of the gap, used in a hierarchical tree scheme to generate ten tsunamigenic scenarios with seismic magnitudes ranging from Mw 8.4 to Mw 8.9. Additionally, we construct ten homogeneous slip scenarios as an inundation baseline. For the probabilistic approach, we implement a Karhunen - Loève expansion to generate 400 stochastic tsunamigenic scenarios over the maximum extension of the gap, with the same magnitude range as the deterministic sources. All the scenarios are simulated through the non-hydrostatic tsunami model Neowave 2D, using a classical nesting scheme, for five major coastal cities in northern Chile (Arica, Iquique, Tocopilla, Mejillones and Antofagasta), obtaining high resolution data of inundation depth, runup, coastal currents and sea level elevation. The
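A truncated Karhunen-Loève expansion of the kind used to generate the stochastic slip scenarios can be sketched in one dimension. This cartoon uses simple Fourier modes with a decaying amplitude spectrum in place of the true eigenpairs of the slip covariance, so it illustrates the mechanism only:

```python
import math, random

def kl_slip_samples(n_scenarios=3, n_pts=50, corr_len=0.2, n_modes=8, seed=13):
    """Truncated KL-style expansion of a Gaussian random field on [0,1]:
    each realization is a sum of fixed spatial modes weighted by decaying
    amplitudes and independent standard normal coefficients xi_k."""
    rng = random.Random(seed)
    xs = [j / (n_pts - 1) for j in range(n_pts)]
    fields = []
    for _ in range(n_scenarios):
        xi = [rng.gauss(0.0, 1.0) for _ in range(n_modes)]
        field = [sum(xi[k] * math.sqrt(2.0) * math.sin((k + 1) * math.pi * x)
                     / (1.0 + (k + 1) * math.pi * corr_len)
                     for k in range(n_modes))
                 for x in xs]
        fields.append(field)
    return fields
```

In the hazard study, each such random field (on the 2-D fault plane, scaled and shifted to honor the target magnitude) becomes one stochastic slip scenario fed to the tsunami model.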
International Nuclear Information System (INIS)
Marchetti, Luca; Priami, Corrado; Thanh, Vo Hong
2016-01-01
This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating performance and accuracy of HRSSA against other state of the art algorithms.
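The rejection-based selection that RSSA performs with propensity bounds, and that HRSSA inherits, can be sketched schematically; this is a generic illustration of the idea, not the HSimulator/HRSSA code, and the numbers in the usage are invented:

```python
import random

def rssa_select(props_low, props_high, exact_prop, rng):
    """One selection step in the spirit of RSSA: pick a candidate reaction
    proportionally to its upper propensity bound, then accept it by
    comparing against the lower bound first (cheap) and only computing the
    exact propensity via the callable exact_prop(j) when that test fails."""
    total_high = sum(props_high)
    while True:
        r = rng.random() * total_high      # candidate from upper bounds
        j, acc = 0, props_high[0]
        while r > acc:
            j += 1
            acc += props_high[j]
        u = rng.random() * props_high[j]
        if u <= props_low[j] or u <= exact_prop(j):
            return j                       # accepted; else retry (rejected)
```

Because the bounds stay valid over many steps, most firings avoid recomputing exact propensities at all, which is the saving HRSSA exploits to manage time-varying propensities and dynamic partitioning cheaply.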
Energy Technology Data Exchange (ETDEWEB)
Marchetti, Luca, E-mail: marchetti@cosbi.eu [The Microsoft Research – University of Trento Centre for Computational and Systems Biology (COSBI), Piazza Manifattura, 1, 38068 Rovereto (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research – University of Trento Centre for Computational and Systems Biology (COSBI), Piazza Manifattura, 1, 38068 Rovereto (Italy); University of Trento, Department of Mathematics (Italy); Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research – University of Trento Centre for Computational and Systems Biology (COSBI), Piazza Manifattura, 1, 38068 Rovereto (Italy)
2016-07-15
This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating performance and accuracy of HRSSA against other state-of-the-art algorithms.
Directory of Open Access Journals (Sweden)
S. Aberkane
2007-01-01
Full Text Available This paper deals with dynamic output feedback control of continuous-time active fault tolerant control systems with Markovian parameters (AFTCSMP) and state-dependent noise. The main contribution is to formulate conditions for multiperformance design, related to this class of stochastic hybrid systems, that take into account the problems resulting from the fact that the controller only depends on the fault detection and isolation (FDI) process. The specifications and objectives under consideration include stochastic stability, ℋ2 and ℋ∞ (or, more generally, stochastic integral quadratic constraint) performances. Results are formulated as matrix inequalities. The theoretical results are illustrated using a classical example from the literature.
Hybrid Deterministic Views about Genes in Biology Textbooks: A Key Problem in Genetics Teaching
dos Santos, Vanessa Carvalho; Joaquim, Leyla Mariane; El-Hani, Charbel Nino
2012-01-01
A major source of difficulties in promoting students' understanding of genetics lies in the presentation of gene concepts and models in an inconsistent and largely ahistorical manner, merely amalgamated in hybrid views, as if they constituted linear developments, instead of being built for different purposes and employed in specific contexts. In…
Directory of Open Access Journals (Sweden)
Nicholas Mansel Wilkinson
2014-02-01
Full Text Available Visual scan paths exhibit complex, stochastic dynamics. Even during visual fixation, the eye is in constant motion. Fixational drift and tremor are thought to reflect fluctuations in the persistent neural activity of neural integrators in the oculomotor brainstem, which integrate sequences of transient saccadic velocity signals into a short term memory of eye position. Despite intensive research and much progress, the precise mechanisms by which oculomotor posture is maintained remain elusive. Drift exhibits a stochastic statistical profile which has been modelled using random walk formalisms. Tremor is widely dismissed as noise. Here we focus on the dynamical profile of fixational tremor, and argue that tremor may be a signal which usefully reflects the workings of the oculomotor postural control. We identify signatures reminiscent of a certain flavour of transient neurodynamics; toric travelling waves which rotate around a central phase singularity. Spiral waves play an organisational role in dynamical systems at many scales throughout nature, though their potential functional role in brain activity remains a matter of educated speculation. Spiral waves have a repertoire of functionally interesting dynamical properties, including persistence, which suggest that they could in theory contribute to persistent neural activity in the oculomotor postural control system. Whilst speculative, the singularity hypothesis of oculomotor postural control implies testable predictions, and could provide the beginnings of an integrated dynamical framework for eye movements across scales.
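The random-walk formalism mentioned for fixational drift can be sketched with an illustrative Python toy (not a model from the article): an uncorrelated 2D Gaussian walk whose mean squared displacement grows roughly linearly with time lag, the signature of diffusive drift. Step size and step count are arbitrary.

```python
import random

def drift_walk(n_steps=20000, step_std=0.01, seed=7):
    """Simulate fixational drift as an uncorrelated 2D Gaussian random walk."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(0.0, 0.0)]
    for _ in range(n_steps):
        x += rng.gauss(0.0, step_std)
        y += rng.gauss(0.0, step_std)
        path.append((x, y))
    return path

def msd(path, lag):
    """Mean squared displacement at a given lag: for pure diffusion it
    grows linearly with lag, the usual diagnostic for random-walk drift."""
    disp = [(path[i + lag][0] - path[i][0]) ** 2 +
            (path[i + lag][1] - path[i][1]) ** 2
            for i in range(len(path) - lag)]
    return sum(disp) / len(disp)
```

Tremor, by contrast, would show up as structured deviations from this purely diffusive scaling.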
International Nuclear Information System (INIS)
Biondo, Elliott D.; Wilson, Paul P. H.
2017-01-01
In fusion energy systems (FES) neutrons born from burning plasma activate system components. The photon dose rate after shutdown from resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and (9 ± 5) × 10⁴ relative to analog. As a result, this work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.
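Speedups of this kind are conventionally quoted as ratios of the Monte Carlo figure of merit, FOM = 1/(R²T), where R is the relative error of the tally and T the computer time. A minimal Python helper makes the convention concrete (the numbers below are hypothetical, not the paper's data):

```python
def fom(rel_err, cpu_time):
    """Monte Carlo figure of merit: FOM = 1 / (R^2 * T)."""
    return 1.0 / (rel_err ** 2 * cpu_time)

def speedup(method, reference):
    """Speedup of one method over another as a ratio of figures of merit.

    Each argument is a (relative_error, cpu_time) pair.
    """
    return fom(*method) / fom(*reference)

# hypothetical example: same CPU time, 10x lower relative error -> 100x speedup
s = speedup((0.01, 3600.0), (0.10, 3600.0))
```

Because FOM is roughly constant for a given method and problem, the ratio is independent of how long either calculation actually ran.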
Quintero-Chavarria, E.; Ochoa Gutierrez, L. H.
2016-12-01
Applications of the self-potential method in the fields of hydrogeology and environmental sciences have had significant developments during the last two decades, with strong use in the identification of groundwater flows. Although only few authors deal with the forward problem's solution (especially in the geophysics literature), different inversion procedures are currently being developed, but in most cases they are compared with unconventional groundwater velocity fields and restricted to structured meshes. This research solves the forward problem based on the finite element method, using St. Venant's principle to transform a point dipole, which is the field generated by a single vector, into a distribution of electrical monopoles. Then, two simple aquifer models were generated with specific boundary conditions, and head potentials, velocity fields and electric potentials in the medium were computed. With the model's surface electric potential, the inverse problem is solved to retrieve the source of electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The first approach was carried out by implementing a Tikhonov regularization with a stabilized operator adapted to the finite element mesh, while for the second a hierarchical Bayesian model based on Markov chain Monte Carlo (McMC) and Markov random fields (MRF) was constructed. For all implemented methods, the direct and inverse models were contrasted in two ways: 1) shape and distribution of the vector field, and 2) histogram of magnitudes. Finally, it was concluded that inversion procedures are improved when the velocity field's behavior is considered; thus, the deterministic method is more suitable for unconfined aquifers than confined ones. McMC has restricted applications and requires a lot of information (particularly in potential fields), while MRF has a remarkable response, especially when dealing with confined aquifers.
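The deterministic branch of such an inversion can be illustrated with a bare-bones Tikhonov regularization in Python (standard library only; this is a generic sketch, not the mesh-adapted stabilized operator of the study): the regularized least-squares problem min ||Ax - b||² + λ||x||² is solved through its normal equations (AᵀA + λI)x = Aᵀb.

```python
def gauss_solve(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting
    (adequate for the small dense systems of this sketch)."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def tikhonov(A, b, lam):
    """Minimise ||A x - b||^2 + lam ||x||^2 via the normal equations
    (A^T A + lam I) x = A^T b; larger lam shrinks the solution."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    return gauss_solve(AtA, Atb)
```

With λ = 0 this reduces to ordinary least squares; increasing λ trades data fit for stability, which is the essential regularization mechanism regardless of the mesh.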
International Nuclear Information System (INIS)
Rouchon, Amelie
2016-01-01
Neutron noise analysis addresses the description of small time-dependent flux fluctuations induced by small global or local perturbations of the macroscopic cross-sections. These fluctuations may occur in nuclear reactors due to density fluctuations of the coolant, or to vibrations of fuel elements, control rods, or any other structures in the core. In power reactors, ex-core and in-core detectors can be used to monitor neutron noise with the aim of detecting possible anomalies and taking the necessary measures for continuous safe power production. The objective of this thesis is to develop techniques for neutron noise analysis and especially to implement a neutron noise solver in the deterministic transport code APOLLO3 developed at CEA. A new Monte Carlo algorithm that solves the transport equations for the neutron noise has also been developed. In addition, a new vibration model has been developed. Moreover, a method based on the determination of a new steady state has been proposed for the linear and the nonlinear full theory so as to improve the traditional neutron noise theory. In order to test these new developments, we have performed neutron noise simulations in one-dimensional systems and in a large pressurized water reactor with a heavy baffle in two and three dimensions with APOLLO3 in diffusion and transport theories. (author)
Hybrid approaches for multiple-species stochastic reaction-diffusion models
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen
2015-10-01
Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
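The coupling idea can be sketched for pure diffusion on a 1D lattice (illustrative Python only, not the authors' scheme): integer particles hop stochastically in one region, a discretized mean-field equation evolves in the other, and at the single interface site mass crossing back into the stochastic region is converted into whole particles by probabilistic rounding, so the total particle number is conserved exactly at every step.

```python
import random

def hybrid_step(discrete, continuum, p=0.2, rng=None):
    """One step of a coupled stochastic/mean-field diffusion sketch.

    'discrete' holds integer particle counts (stochastic region),
    'continuum' holds float concentrations (mean-field region); the last
    discrete site and the first continuum site form the interface.
    Assumes the continuum density at the interface is not too small.
    """
    rng = rng or random.Random()
    nd, nc = len(discrete), len(continuum)
    new_d, new_c = discrete[:], continuum[:]
    # stochastic region: each particle hops left/right with probability p
    for i in range(nd):
        for _ in range(discrete[i]):
            r = rng.random()
            if r < p and i > 0:                      # hop left
                new_d[i] -= 1; new_d[i - 1] += 1
            elif p <= r < 2 * p:                     # hop right
                new_d[i] -= 1
                if i + 1 < nd:
                    new_d[i + 1] += 1
                else:
                    new_c[0] += 1.0                  # cross into the continuum
    # mean-field region: explicit finite-difference diffusion, reflecting wall
    for i in range(nc):
        out = (2 if i + 1 < nc else 1) * p * continuum[i]
        new_c[i] -= out
        if i + 1 < nc:
            new_c[i + 1] += p * continuum[i]
        if i > 0:
            new_c[i - 1] += p * continuum[i]
    # leftward flux at the interface becomes whole particles (mass-conserving)
    amount = p * continuum[0]
    k = int(amount) + (1 if rng.random() < amount - int(amount) else 0)
    new_c[0] += amount - k
    new_d[-1] += k
    return new_d, new_c
```

The probabilistic rounding at the interface is what keeps the total count exact rather than merely conserved in expectation.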
Hybrid approaches for multiple-species stochastic reaction-diffusion models.
Spill, Fabian
2015-10-01
Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
Hybrid approaches for multiple-species stochastic reaction-diffusion models.
Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K; Byrne, Helen
2015-01-01
Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
A Class of Stochastic Hybrid Systems with State-Dependent Switching Noise
DEFF Research Database (Denmark)
Leth, John-Josef; Rasmussen, Jakob Gulddahl; Schiøler, Henrik
2012-01-01
In this paper, we develop theoretical results based on a proposed method for modeling switching noise for a class of hybrid systems with a piecewise-linear partitioned state space and state-dependent switching. We devise a stochastic model of such systems, whose global dynamics is governed...
International Nuclear Information System (INIS)
Wan Li; Zhou Qinghua
2007-01-01
The stability property of stochastic hybrid bidirectional associative memory (BAM) neural networks with discrete delays is considered. Without assuming the symmetry of synaptic connection weights or the monotonicity and differentiability of activation functions, delay-independent sufficient conditions to guarantee the exponential stability of the equilibrium solution for such networks are given by using the nonnegative semimartingale convergence theorem.
Wan, Li; Zhou, Qinghua
2007-10-01
The stability property of stochastic hybrid bidirectional associative memory (BAM) neural networks with discrete delays is considered. Without assuming the symmetry of synaptic connection weights or the monotonicity and differentiability of activation functions, delay-independent sufficient conditions to guarantee the exponential stability of the equilibrium solution for such networks are given by using the nonnegative semimartingale convergence theorem.
Trip-oriented stochastic optimal energy management strategy for plug-in hybrid electric bus
International Nuclear Information System (INIS)
Du, Yongchang; Zhao, Yue; Wang, Qinpu; Zhang, Yuanbo; Xia, Huaicheng
2016-01-01
A trip-oriented stochastic optimal energy management strategy for a plug-in hybrid electric bus is presented in this paper, comprising an offline stochastic dynamic programming part and an online implementation part performed by an equivalent consumption minimization strategy. In the offline part, historical driving cycles of the fixed route are divided into segments according to the positions of bus stops, and a segment-based stochastic driving condition model based on a Markov chain is built. With the segment-based stochastic model obtained, the control set for the real-time equivalent consumption minimization strategy can be achieved by solving the offline stochastic dynamic programming problem. Results of the stochastic dynamic programming are converted into a 3-dimensional lookup table of parameters for the online equivalent consumption minimization strategy. The proposed strategy is verified by both simulation and a hardware-in-the-loop test of a real-world driving cycle on an urban bus route. Simulation results show that the proposed method outperforms both a well-tuned equivalent consumption minimization strategy and a rule-based strategy in terms of fuel economy, and even proves to be close to the optimal result obtained by dynamic programming. Furthermore, the practical application potential of the proposed control method is demonstrated by the hardware-in-the-loop test. - Highlights: • A stochastic problem is formed based on a stochastic segment-based driving condition model. • Offline stochastic dynamic programming is employed to solve the stochastic problem. • The instant power split decision is made by the online equivalent consumption minimization strategy. • Good fuel economy of the proposed method is verified by simulation results. • Practical application potential of the proposed method is verified by hardware-in-the-loop test results.
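A segment-based Markov driving-condition model of the kind described can be illustrated with a small Python sketch (standard library only; the speed discretization and trace are made up, not the paper's data): transition probabilities between discretized speed states are estimated from recorded cycles, and synthetic segments are then sampled from the chain.

```python
import random

def fit_markov(speeds, n_states=5, v_max=20.0):
    """Estimate a Markov transition matrix from a recorded speed trace.

    Speeds are binned into n_states equal intervals on [0, v_max];
    unobserved rows fall back to a uniform distribution.
    """
    def state(v):
        return min(int(v / v_max * n_states), n_states - 1)
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(speeds, speeds[1:]):
        counts[state(a)][state(b)] += 1
    return [[c / sum(row) if sum(row) else 1.0 / n_states for c in row]
            for row in counts]

def sample_chain(P, start=0, n=50, rng=None):
    """Sample a synthetic state sequence from the fitted chain."""
    rng = rng or random.Random(0)
    s, out = start, [start]
    for _ in range(n):
        r, acc = rng.random(), 0.0
        for j, pj in enumerate(P[s]):
            acc += pj
            if r <= acc:
                s = j
                break
        out.append(s)
    return out

# made-up speed trace for a single route segment (m/s)
trace = [0, 2, 5, 9, 12, 14, 13, 11, 8, 4, 1, 0, 3, 7, 10, 13, 15, 12, 9, 5, 2, 0]
P = fit_markov([float(v) for v in trace])
```

In the strategy above, one such chain would be fitted per route segment, and the offline dynamic program would optimize against the sampled disturbances.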
International Nuclear Information System (INIS)
Wagner, John C.; Peplow, Douglas E.; Mosher, Scott W.; Evans, Thomas M.
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10²-10⁴), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
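The consistency at the heart of CADIS is compact enough to state in code: with R = Σ qφ⁺ the deterministic response estimate, the biased source is q̂ = qφ⁺/R and source particles are born with weight w = R/φ⁺, so that q̂·w = q and the tally stays unbiased. A schematic Python illustration over a 1-D cell array (hypothetical numbers, not output of MAVRIC or ADVANTG):

```python
def cadis_parameters(source, adjoint):
    """CADIS-consistent biased source and birth weights (sketch).

    source:  forward source strength q per cell
    adjoint: deterministic adjoint-flux estimate phi+ per cell
    Returns (biased_source, weights) with q^ = q*phi+/R and w = R/phi+,
    where R = sum(q*phi+) approximates the detector response. Source
    particles are then born at their weight-window center.
    """
    R = sum(q * a for q, a in zip(source, adjoint))
    biased = [q * a / R for q, a in zip(source, adjoint)]
    weights = [R / a for a in adjoint]
    return biased, weights

# hypothetical 4-cell problem: detector importance grows to the right
q_hat, w = cadis_parameters([1.0, 1.0, 0.5, 0.0], [0.01, 0.1, 1.0, 10.0])
```

The invariant q̂ᵢ·wᵢ = qᵢ is the "consistency" in the method's name: biased sampling and statistical weights cancel exactly, cell by cell.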
International Nuclear Information System (INIS)
Wagner, John C.; Peplow, Douglas E.; Mosher, Scott W.; Evans, Thomas M.
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10²-10⁴), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
International Nuclear Information System (INIS)
Wagner, J.C.; Peplow, D.E.; Mosher, S.W.; Evans, T.M.
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10²-10⁴), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications. (author)
International Nuclear Information System (INIS)
Safwat, Akmal; Bentzen, Soeren M.; Turesson, Ingela; Hendry, Jolyon H.
2002-01-01
Background: The large patient-to-patient variability in the grade of normal tissue injury after a standard course of radiotherapy is well established clinically. A better understanding of this individual variation may provide valuable insights into the pathogenesis of radiation damage and the prospects of predicting the outcome. Purpose: To estimate the relative importance of the stochastic vs. patient-related components of variability in the expression of radiation-induced normal tissue damage. Methods and Materials: The study data were selected from the dose fractionation studies of Turesson in Gothenburg. Patients treated with bilateral internal mammary fields, who completed at least 10 years of follow-up, were included. The material included 22 different fractionation schedules (11 on each side). Telangiectasia was graded on an arbitrary 6-point scale using clinical photographs of the irradiated fields. For each field, in each patient, a curve showing the grade of telangiectasia as a function of time was constructed. A measure of radioresponsiveness was obtained from the difference between the area under the curve (AUC) for a specific field in an individual patient minus the mean AUC of fields receiving the same dose fractionation schedule. As a confirmatory procedure, the same analysis was repeated with a weighted area under the curve (WAUC) approach, in which the time spent at or above each of the 5 nonzero grades was calculated for each field in each patient. These times were used as explanatory variables in a linear regression analysis of biological equivalent dose to establish statistically the weight of each grade providing the optimal relationship between dose and effect. Using these regression coefficients, the weighted area under the grade-time curve (WAUC) was estimated. Results: The AUC was significantly correlated with the isoeffective dose in 2-Gy fractions (ID2). An analysis of variance components, using the maximum likelihood method, showed that
ℋ∞ constant gain state feedback stabilization of stochastic hybrid systems with Wiener process
Directory of Open Access Journals (Sweden)
E. K. Boukas
2004-01-01
Full Text Available This paper considers the stabilization problem for the class of continuous-time linear stochastic hybrid systems with Wiener process. The ℋ∞ state feedback stabilization problem is treated. A state feedback controller with constant gain that does not require access to the system mode is designed. LMI-based conditions are developed to design the state feedback controller with constant gain that stochastically stabilizes the studied class of systems and, at the same time, achieves the disturbance rejection of a desired level. The minimum disturbance rejection is also determined. Numerical examples are given to show the usefulness of the proposed results.
Driving-behavior-aware stochastic model predictive control for plug-in hybrid electric buses
International Nuclear Information System (INIS)
Li, Liang; You, Sixiong; Yang, Chao; Yan, Bingjie; Song, Jian; Chen, Zheng
2016-01-01
Highlights: • A novel approximated global optimal energy management strategy is proposed for hybrid powertrains. • Eight typical driving behaviors are classified with K-means to deal with varied traffic conditions. • Stochastic driver models for different driving behaviors are established based on Markov chains. • ECMS is used to modify the SMPC-based energy management strategy to improve its fuel economy. • The approximated global optimal energy management strategy for plug-in hybrid electric buses is verified and analyzed. - Abstract: Driving cycles of a city bus are statistically characterized by repetitive features, which makes a predictive energy management strategy very desirable for obtaining approximately optimal fuel economy of a plug-in hybrid electric bus. However, dealing with complicated traffic conditions and finding an approximated global optimal strategy applicable to the plug-in hybrid electric bus remains challenging. To solve this problem, a novel driving-behavior-aware modified stochastic model predictive control method is proposed for the plug-in hybrid electric bus. Firstly, K-means is employed to classify driving behaviors, and driver models based on Markov chains are obtained for the different kinds of driving behaviors. With the identified driver behaviors regarded as stochastic disturbance inputs, a local minimum of fuel consumption can be obtained with traditional stochastic model predictive control at each step, taking tracking of the reference battery state-of-charge trajectory into consideration over the finite prediction horizons. However, this technique is still accompanied by some working points with reduced/worsened fuel economy. Thus, the stochastic model predictive control is modified with the equivalent consumption minimization strategy to eliminate these undesirable working points. The results on real-world city bus routes show that the
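The driving-behavior classification step can be sketched with a tiny Lloyd-iteration k-means in Python (standard library only; the two-feature representation, mean speed and mean acceleration per window, and all data are illustrative rather than the paper's):

```python
import random

def kmeans(points, k=2, iters=20, seed=5):
    """Plain Lloyd k-means on feature tuples."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # move each center to its cluster mean
        for j, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties
                centers[j] = [sum(vals) / len(cl) for vals in zip(*cl)]
    return centers

# illustrative windows of (mean speed, mean acceleration): two driving styles
gentle = [(8.0 + 0.1 * i, 0.3) for i in range(10)]
aggressive = [(14.0 + 0.1 * i, 1.2) for i in range(10)]
centers = kmeans(gentle + aggressive, k=2)
```

The paper uses eight behavior clusters; the mechanism is the same, with each cluster then receiving its own Markov-chain driver model.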
International Nuclear Information System (INIS)
Malescio, G.
1981-04-01
The two-dimensional Fokker-Planck equation describing the ion motion in a coherent lower hybrid wave above the stochasticity threshold is solved analytically. An expression is given for the steady-state power dissipation.
Vesapogu, Joshi Manohar; Peddakotla, Sujatha; Kuppa, Seetha Rama Anjaneyulu
2013-01-01
With the advancements in semiconductor technology, high-power medium-voltage (MV) drives are extensively used in numerous industrial applications. A challenging technical requirement of MV drives is to control a multilevel inverter (MLI) with low total harmonic distortion (%THD), satisfying the IEEE Standard 519-1992 harmonic guidelines, and with low switching losses. Among all modulation control strategies for MLIs, the selective harmonic elimination (SHE) technique is one of the traditionally preferred techniques at fundamental switching frequency, offering a better harmonic profile. On the other hand, the equations formed by the SHE technique are highly nonlinear and may have multiple, a single, or even no solution at a particular modulation index (MI). However, some MV drive applications require operation over a range of MI. Providing analytical solutions for the SHE equations over the whole range of MI from 0 to 1 has been a challenging task for researchers. In this paper, an attempt is made to solve the SHE equations using deterministic and stochastic optimization methods, and a comparative harmonic analysis has been carried out. An effective algorithm which minimizes %THD with the least computational effort among all considered optimization algorithms is presented. To validate the effectiveness of the proposed MPSO technique, an experiment is carried out on a low-power prototype of a three-phase CHB 11-level inverter using an FPGA-based Xilinx Spartan-3A DSP controller. The experimental results show that the MPSO technique successfully solves the SHE equations over the whole range of MI from 0 to 1, and the %THD obtained over the major part of the MI range also satisfies the IEEE 519-1992 harmonic guidelines.
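The flavour of the problem can be reproduced with a toy two-angle SHE system solved by a minimal particle swarm in Python (standard library only; this is a generic PSO sketch, not the paper's MPSO, and the two-cell equations are a textbook simplification): set the fundamental to a target modulation index and drive the 5th harmonic to zero.

```python
import math
import random

def she_residual(thetas, M):
    """Residual of a toy 2-angle SHE system for a cascaded H-bridge:
    fundamental: cos t1 + cos t2 = 2M, 5th harmonic: cos 5t1 + cos 5t2 = 0."""
    t1, t2 = thetas
    f = math.cos(t1) + math.cos(t2) - 2.0 * M
    h5 = math.cos(5.0 * t1) + math.cos(5.0 * t2)
    return f * f + h5 * h5

def pso(f, n=30, iters=300, seed=9):
    """Minimal particle swarm over the box [0, pi/2]^2."""
    rng = random.Random(seed)
    lo, hi = 0.0, math.pi / 2
    xs = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(n)]
    vs = [[0.0, 0.0] for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                # inertia + cognitive + social velocity update
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * rng.random() * (pbest[i][d] - xs[i][d])
                            + 1.5 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i][:]
    return gbest

best = pso(lambda t: she_residual(t, 0.8))
```

Sweeping M and recording the solved angles would produce the lookup table an SHE modulator actually uses; more cells add one equation per additional eliminated harmonic.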
Larkin, Steven P.; Levander, Alan; Okaya, David; Goff, John A.
1996-12-01
As a high resolution addition to the 1992 Pacific to Arizona Crustal Experiment (PACE), a 45-km-long deep crustal seismic reflection profile was acquired across the Chocolate Mountains in southeastern California to illuminate crustal structure in the transition between the Salton Trough and the Basin and Range province. The complex seismic data are analyzed for both large-scale (deterministic) and fine-scale (stochastic) crustal features. A low-fold near-offset common-midpoint (CMP) stacked section shows the northeastward lateral extent of a high-velocity lower crustal body which is centered beneath the Salton Trough. Off-end shots record a high-amplitude diffraction from the point where the high velocity lower crust pinches out at the Moho. Above the high-velocity lower crust, moderate-amplitude reflections occur at midcrustal levels. These reflections display the coherency and frequency characteristics of reflections backscattered from a heterogeneous velocity field, which we model as horizontal intrusions with a von Kármán (fractal) distribution. The effects of upper crustal scattering are included by combining the mapped surface geology and laboratory measurements of exposed rocks within the Chocolate Mountains to reproduce the upper crustal velocity heterogeneity in our crustal velocity model. Viscoelastic finite difference simulations indicate that the volume of mafic material within the reflective zone necessary to produce the observed backscatter is about 5%. The presence of wavelength-scale heterogeneity within the near-surface, upper, and middle crust also produces a 0.5-s-thick zone of discontinuous reflections from a crust-mantle interface which is actually a first-order discontinuity.
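The von Kármán (fractal) velocity heterogeneity described above can be synthesized by spectral filtering of white noise. A hedged 1D sketch (the correlation length, Hurst exponent, and rms perturbation are illustrative placeholders, not the values inferred for the Chocolate Mountains crust):

```python
import numpy as np

def von_karman_1d(n=1024, dx=10.0, corr_len=250.0, hurst=0.3, sigma=0.05, seed=0):
    """Synthesize a 1D velocity-perturbation profile with a von Karman
    (fractal) autocorrelation by spectral filtering of white noise.
    All parameter values are illustrative, not those inferred for the
    Chocolate Mountains crust."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)           # wavenumber axis
    # 1D von Karman power spectrum ~ a / (1 + k^2 a^2)^(H + 1/2)
    power = corr_len / (1.0 + (k * corr_len) ** 2) ** (hurst + 0.5)
    spec = np.fft.rfft(rng.standard_normal(n)) * np.sqrt(power)
    field = np.fft.irfft(spec, n)
    return field * (sigma / field.std())                 # scale to target rms

profile = von_karman_1d()   # 10.24 km profile sampled every 10 m
```

Embedding such a perturbation field in a background velocity model is the standard route to the kind of finite-difference backscatter simulation the abstract describes.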
Risk-sensitive control of stochastic hybrid systems on infinite time horizon
Directory of Open Access Journals (Sweden)
Runolfsson Thordur
1999-01-01
Full Text Available A risk-sensitive optimal control problem is considered for a hybrid system that consists of a continuous-time diffusion process depending on a discrete-valued mode variable modeled as a Markov chain. Optimality conditions are presented and conditions for the existence of optimal controls are derived. It is shown that the optimal risk-sensitive control problem is equivalent to the upper value of an associated stochastic differential game, and insight is given into the contributions of the noise input and the mode variable to the risk sensitivity of the cost functional. Furthermore, it is shown that, due to the mode-variable risk sensitivity, the equivalence relationship that has been observed between risk-sensitive and H∞ control in the non-hybrid case does not hold for stochastic hybrid systems.
Measurability and Safety Verification for Stochastic Hybrid Systems
DEFF Research Database (Denmark)
Fränzle, Martin; Hahn, Ernst Moritz; Hermanns, Holger
2011-01-01
...a method that establishes safe upper bounds on reachability probabilities. To arrive there requires us to solve semantic intricacies as well as practical problems. In particular, we show that measurability of a complete system follows from the measurability of its constituent parts. On the practical side... The continuous-time behaviour is given by differential equations, as for usual hybrid systems, but the targets of discrete jumps are chosen by probability distributions. These distributions may be general measures on state sets. Non-determinism is also supported, and the latter is exploited in an abstraction and evaluation...
An efficient algorithm for the stochastic simulation of the hybridization of DNA to microarrays
Directory of Open Access Journals (Sweden)
Laurenzi Ian J
2009-12-01
Full Text Available Abstract Background Although oligonucleotide microarray technology is ubiquitous in genomic research, reproducibility and standardization of expression measurements still concern many researchers. Cross-hybridization between microarray probes and non-target ssDNA has been implicated as a primary factor in sensitivity and selectivity loss. Since hybridization is a chemical process, it may be modeled at a population level using a combination of material balance equations and thermodynamics. However, the hybridization reaction network may be exceptionally large for commercial arrays, which often possess at least one reporter per transcript. Quantification of the kinetics and equilibrium of exceptionally large chemical systems of this type is numerically infeasible with customary approaches. Results In this paper, we present a robust and computationally efficient algorithm for the simulation of hybridization processes underlying microarray assays. Our method may be utilized to identify the extent to which nucleic acid targets (e.g. cDNA) will cross-hybridize with probes, and by extension, characterize probe robustness using the information specified by MAGE-TAB. Using this algorithm, we characterize cross-hybridization in a modified commercial microarray assay. Conclusions By integrating stochastic simulation with thermodynamic prediction tools for DNA hybridization, one may robustly and rapidly characterize the selectivity of a proposed microarray design at the probe and "system" levels. Our code is available at http://www.laurenzi.net.
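The population-level hybridization kinetics described above can be simulated exactly with Gillespie's stochastic simulation algorithm. A toy sketch for one target competing between its specific probe and a cross-hybridizing probe (rate constants are arbitrary placeholders, not thermodynamically derived as in the paper):

```python
import random

def gillespie(counts, reactions, t_end, seed=0):
    """Gillespie SSA for a toy hybridization network. Rate constants are
    arbitrary placeholders, not thermodynamically derived as in the paper."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        props = []
        for reactants, _, k in reactions:
            a = k
            for s in reactants:
                a *= counts[s]
            props.append(a)
        a0 = sum(props)
        if a0 == 0.0:
            break                              # no reaction can fire
        t += rng.expovariate(a0)               # exponential waiting time
        if t >= t_end:
            break
        r = rng.random() * a0                  # choose reaction ~ propensity
        for (reactants, products, _), a in zip(reactions, props):
            if r < a:
                for s in reactants:
                    counts[s] -= 1
                for s in products:
                    counts[s] += 1
                break
            r -= a
    return counts

# target T binding its probe P (duplex D) and a cross-hybridizing probe P2 (D2)
network = [
    (("T", "P"), ("D",), 0.01), (("D",), ("T", "P"), 0.1),
    (("T", "P2"), ("D2",), 0.001), (("D2",), ("T", "P2"), 0.5),
]
state = gillespie({"T": 200, "P": 100, "P2": 100, "D": 0, "D2": 0},
                  network, t_end=5.0)
```

The ratio of D2 to D at equilibrium is a crude analogue of the cross-hybridization extent the algorithm is designed to quantify at scale.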
Hadjilouka, Agni; Mantzourani, Kyriaki-Sofia; Katsarou, Anastasia; Cavaiuolo, Marina; Ferrante, Antonio; Paramithiotis, Spiros; Mataragas, Marios; Drosinos, Eleftherios H
2015-02-01
The aims of the present study were to determine the prevalence and levels of Listeria monocytogenes and Escherichia coli O157:H7 in rocket and cucumber samples by deterministic (estimation of a single value) and stochastic (estimation of a range of values) approaches. In parallel, the chromogenic media commonly used for the recovery of these microorganisms were evaluated and compared, and the efficiency of an enzyme-linked immunosorbent assay (ELISA)-based protocol was validated. L. monocytogenes and E. coli O157:H7 were detected and enumerated using agar Listeria according to Ottaviani and Agosti plus RAPID'L.mono medium and Fluorocult plus sorbitol MacConkey medium with cefixime and tellurite in parallel, respectively. Identity was confirmed with biochemical and molecular tests and the ELISA. Performance indices of the media and the prevalence of both pathogens were estimated using Bayesian inference. In rocket, prevalence of both L. monocytogenes and E. coli O157:H7 was estimated at 7% (7 of 100 samples). In cucumber, prevalence was 6% (6 of 100 samples) and 3% (3 of 100 samples) for L. monocytogenes and E. coli O157:H7, respectively. The levels derived from the presence-absence data using Bayesian modeling were estimated at 0.12 CFU/25 g (0.06 to 0.20) and 0.09 CFU/25 g (0.04 to 0.17) for L. monocytogenes in rocket and cucumber samples, respectively. The corresponding values for E. coli O157:H7 were 0.59 CFU/25 g (0.43 to 0.78) and 1.78 CFU/25 g (1.38 to 2.24), respectively. The sensitivity and specificity of the culture media differed for rocket and cucumber samples. The ELISA technique had a high level of cross-reactivity. Parallel testing with at least two culture media was required to achieve a reliable result for L. monocytogenes or E. coli O157:H7 prevalence in rocket and cucumber samples.
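The step from presence/absence counts to a contamination level can be illustrated with a conjugate Beta posterior for prevalence and a Poisson link for cells per 25 g portion. This is a deliberate simplification of the paper's full Bayesian model, with a flat prior assumed:

```python
import math
import random

def prevalence_posterior(positives, n, draws=20000, seed=0):
    """Beta(1,1)-prior posterior for prevalence from presence/absence data,
    plus the implied contamination level assuming Poisson-distributed cells
    per 25 g test portion. This is a deliberate simplification of the
    paper's Bayesian model."""
    rng = random.Random(seed)
    a, b = 1 + positives, 1 + n - positives
    prev = sorted(rng.betavariate(a, b) for _ in range(draws))
    # Poisson link: P(positive portion) = 1 - exp(-lambda), lambda in CFU/25 g
    conc = [-math.log(1.0 - p) for p in prev]
    lo, mid, hi = int(0.025 * draws), draws // 2, int(0.975 * draws)
    return {"prev_median": prev[mid], "prev_95ci": (prev[lo], prev[hi]),
            "cfu_median": conc[mid], "cfu_95ci": (conc[lo], conc[hi])}

est = prevalence_posterior(positives=7, n=100)   # rocket: 7 of 100 positive
```

With 7 positives in 100 samples the posterior prevalence centres near 8%, of the same order as the 7% point estimate reported above; the full model additionally accounts for imperfect media sensitivity and specificity.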
Safieddine, Doha; Kachenoura, Amar; Albera, Laurent; Birot, Gwénaël; Karfoul, Ahmad; Pasnicu, Anca; Biraben, Arnaud; Wendling, Fabrice; Senhadji, Lotfi; Merlet, Isabelle
2012-12-01
Electroencephalographic (EEG) recordings are often contaminated with muscle artifacts. This disturbing myogenic activity not only strongly affects the visual analysis of EEG, but also most surely impairs the results of EEG signal processing tools such as source localization. This article focuses on the particular context of the contamination of epileptic signals (interictal spikes) by muscle artifacts, as EEG is a key diagnostic tool for this pathology. In this context, our aim was to compare the ability of two stochastic approaches of blind source separation, namely independent component analysis (ICA) and canonical correlation analysis (CCA), and of two deterministic approaches, namely empirical mode decomposition (EMD) and the wavelet transform (WT), to remove muscle artifacts from EEG signals. To quantitatively compare the performance of these four algorithms, epileptic spike-like EEG signals were simulated from two different source configurations and artificially contaminated with different levels of real EEG-recorded myogenic activity. The efficiency of CCA, ICA, EMD, and WT in correcting the muscular artifact was evaluated both by calculating the normalized mean-squared error between denoised and original signals and by comparing the results of source localization obtained from artifact-free as well as noisy signals, before and after artifact correction. Tests on real data recorded in an epileptic patient are also presented. The results obtained in the context of simulations and real data show that EMD outperformed the three other algorithms for the denoising of data highly contaminated by muscular activity. For less noisy data, and when spikes arose from a single cortical source, the myogenic artifact was best corrected with CCA and ICA. Otherwise, when spikes originated from two distinct sources, either EMD or ICA offered the most reliable denoising result for highly noisy data, while WT offered the better denoising result for less noisy data. These results suggest that
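The normalized mean-squared error criterion used to score the four methods is straightforward to state in code (the signals below are synthetic placeholders, not simulated spikes):

```python
import math

def nmse(original, denoised):
    """Normalized mean-squared error between a reference signal and its
    denoised estimate, the quantitative criterion used in the comparison."""
    num = sum((o - d) ** 2 for o, d in zip(original, denoised))
    den = sum(o ** 2 for o in original)
    return num / den

# toy check: a sinusoidal reference with a constant offset error
clean = [math.sin(0.1 * i) for i in range(200)]
denoised = [c + 0.1 for c in clean]
score = nmse(clean, denoised)
```

An NMSE of zero means perfect reconstruction; comparing scores across CCA, ICA, EMD, and WT outputs on the same contaminated signal ranks the methods directly.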
A Stochastic Hybrid Systems framework for analysis of Markov reward models
International Nuclear Information System (INIS)
Dhople, S.V.; DeVille, L.; Domínguez-García, A.D.
2014-01-01
In this paper, we propose a framework to analyze Markov reward models, which are commonly used in system performability analysis. The framework builds on a set of analytical tools developed for a class of stochastic processes referred to as Stochastic Hybrid Systems (SHS). The state space of an SHS comprises: (i) a discrete state that describes the possible configurations/modes that a system can adopt, including the nominal (non-faulty) operational mode as well as the operational modes that arise due to component faults, and (ii) a continuous state that describes the reward. Discrete state transitions are stochastic and governed by transition rates that are (in general) a function of time and of the value of the continuous state. The evolution of the continuous state is described by a stochastic differential equation, and reward measures are defined as functions of the continuous state. Additionally, each transition is associated with a reset map that defines the mapping between the pre- and post-transition values of the discrete and continuous states; these mappings enable the definition of impulses and losses in the reward. The proposed SHS-based framework unifies the analysis of a variety of previously studied reward models. We illustrate the application of the framework to performability analysis via analytical and numerical examples.
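The SHS ingredients listed above (discrete modes, reward flow, impulsive reset maps) can be illustrated by Monte Carlo simulation of a two-mode availability model; all rates, reward rates, and the impulse size below are illustrative, not taken from the paper:

```python
import random

def simulate_reward(t_end, lam=0.1, mu=1.0, rates=(1.0, 0.2), impulse=-0.5, seed=0):
    """Monte Carlo sample path of a two-mode Markov reward model cast as an
    SHS: discrete mode q in {0: nominal, 1: faulty}, continuous reward x with
    dx/dt = rates[q]; the failure transition's reset map applies an impulsive
    reward loss. All numbers are illustrative."""
    rng = random.Random(seed)
    t, q, x = 0.0, 0, 0.0
    while t < t_end:
        sojourn = rng.expovariate(lam if q == 0 else mu)   # exponential dwell
        step = min(sojourn, t_end - t)
        x += rates[q] * step                               # reward flow
        t += step
        if step == sojourn and t < t_end:                  # transition fires
            if q == 0:
                x += impulse                               # reset map: impulse
            q = 1 - q
    return x

mean_reward = sum(simulate_reward(100.0, seed=s) for s in range(200)) / 200
```

The SHS framework of the paper obtains the moments of x analytically from the generator of the process; the Monte Carlo estimate above is the brute-force counterpart.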
Integrated Deterministic-Probabilistic Safety Assessment Methodologies
Energy Technology Data Exchange (ETDEWEB)
Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.
2014-02-01
IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address their respective sources of uncertainty, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that a safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of equipment, human actions, stochastic physical phenomena) and the deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)
Empirical Analysis of Stochastic Volatility Model by Hybrid Monte Carlo Algorithm
International Nuclear Information System (INIS)
Takaishi, Tetsuya
2013-01-01
The stochastic volatility (SV) model is a volatility model that infers the latent volatility of asset returns. Bayesian inference of the SV model is performed by the hybrid Monte Carlo (HMC) algorithm, which is superior to other Markov chain Monte Carlo methods in sampling the volatility variables. We perform HMC simulations of the SV model for two liquid stock returns traded on the Tokyo Stock Exchange and measure the volatilities of those stock returns. We then calculate the accuracy of the volatility measurement using the realized volatility as a proxy for the true volatility, and compare the SV model with the GARCH model, another widely used volatility model. Using the accuracy calculated with the realized volatility, we find that the SV model empirically performs better than the GARCH model.
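The HMC algorithm itself is generic; a minimal sketch on a toy 1D target (a standard normal, not the SV model's high-dimensional volatility posterior) shows the leapfrog/Metropolis structure the paper relies on:

```python
import math
import random

def hmc_sample(n_samples=2000, eps=0.2, n_leap=10, seed=0):
    """Minimal hybrid (Hamiltonian) Monte Carlo sampler for a 1D toy target
    (a standard normal). The paper applies the same algorithmic skeleton to
    the latent volatility variables of the SV model, not to this target."""
    rng = random.Random(seed)
    grad = lambda x: x                       # gradient of 0.5 * x^2
    x, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)              # refresh momentum
        x_new, p_new = x, p
        p_new -= 0.5 * eps * grad(x_new)     # leapfrog: initial half kick
        for i in range(n_leap):
            x_new += eps * p_new
            if i < n_leap - 1:
                p_new -= eps * grad(x_new)
        p_new -= 0.5 * eps * grad(x_new)     # final half kick
        h_old = 0.5 * x * x + 0.5 * p * p    # Hamiltonians before/after
        h_new = 0.5 * x_new * x_new + 0.5 * p_new * p_new
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            x = x_new                        # Metropolis accept/reject
        samples.append(x)
    return samples

draws = hmc_sample()
```

Because the leapfrog trajectory makes distant proposals with near-unit acceptance, the chain's autocorrelation stays short, which is the property the abstract exploits for the volatility variables.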
Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm
International Nuclear Information System (INIS)
Takaishi, Tetsuya
2014-01-01
The hybrid Monte Carlo algorithm (HMCA) is applied for Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd-order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimation of the RSV model.
Mai, Paul Martin; Imperatori, W.; Olsen, K. B.
2010-01-01
We present a new approach for computing broadband (0-10 Hz) synthetic seismograms by combining high-frequency (HF) scattering with low-frequency (LF) deterministic seismograms, considering finite-fault earthquake rupture models embedded in 3D earth structure. Site-specific HF-scattering Green's functions for a heterogeneous medium with uniformly distributed random isotropic scatterers are convolved with a source-time function that characterizes the temporal evolution of the rupture process. These scatterograms are then reconciled with the LF-deterministic waveforms using a frequency-domain optimization to match both amplitude and phase spectra around the target intersection frequency. The scattering parameters of the medium, scattering attenuation ηs, intrinsic attenuation ηi, and site-kappa, as well as frequency-dependent attenuation, determine waveform and spectral character of the HF-synthetics and thus affect the hybrid broadband seismograms. Applying our methodology to the 1994 Northridge earthquake and validating against near-field recordings at 24 sites, we find that our technique provides realistic broadband waveforms and consistently reproduces LF ground-motion intensities for two independent source descriptions. The least biased results, compared to recorded strong-motion data, are obtained after applying a frequency-dependent site-amplification factor to the broadband simulations. This innovative hybrid ground-motion simulation approach, applicable to any arbitrarily complex earthquake source model, is well suited for seismic hazard analysis and ground-motion estimation.
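The frequency-domain merging of LF-deterministic and HF-scattering synthetics can be sketched with a simple complementary crossover around the target intersection frequency. The paper additionally optimizes amplitude and phase matching near that frequency, which is omitted here; the signals and frequencies below are synthetic placeholders:

```python
import numpy as np

def hybrid_broadband(lf, hf, dt, f_match=1.0):
    """Merge LF-deterministic and HF-scattering synthetics in the frequency
    domain with a complementary crossover at the intersection frequency.
    The paper additionally optimizes amplitude and phase matching around
    f_match; this sketch uses plain complementary weights."""
    n = len(lf)
    freqs = np.fft.rfftfreq(n, d=dt)                 # Hz
    w_lf = 1.0 / (1.0 + (freqs / f_match) ** 8)      # smooth low-pass weight
    spec = np.fft.rfft(lf) * w_lf + np.fft.rfft(hf) * (1.0 - w_lf)
    return np.fft.irfft(spec, n)

n, dt = 2048, 0.01
t = np.arange(n) * dt
lf = np.sin(2.0 * np.pi * 8 / (n * dt) * t)          # ~0.39 Hz, on an FFT bin
hf = 0.2 * np.sin(2.0 * np.pi * 128 / (n * dt) * t)  # 6.25 Hz, on an FFT bin
bb = hybrid_broadband(lf, hf, dt)
```

With the two components well separated in frequency, the merged record is essentially their sum; the interesting regime, handled by the paper's optimization, is where the LF and HF spectra overlap near the matching frequency.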
International Nuclear Information System (INIS)
Cruz, Roberto de la; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás
2017-01-01
The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. a population-dynamical model which accounts for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age structure of the population when attempting to couple both descriptions. We exploit our coarse-grained model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, since upon transference of cells from the mean-field to the stochastic region we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge
Sizing for fuel cell/supercapacitor hybrid vehicles based on stochastic driving cycles
International Nuclear Information System (INIS)
Feroldi, Diego; Carignano, Mauro
2016-01-01
Highlights: • A sizing procedure based on the fulfilment of real driving conditions is proposed. • A methodology to generate long-term stochastic driving cycles is proposed. • A parametric optimization of the real-time EMS is conducted. • A trade-off design is adopted from a Pareto front. • A comparison with the optimal consumption obtained via Dynamic Programming is performed. - Abstract: In this article, a methodology for the sizing and analysis of fuel cell/supercapacitor hybrid vehicles is presented. The proposed sizing methodology is based on the fulfilment of power requirements, including sustained-speed tests and stochastic driving cycles. The procedure to generate driving cycles is also presented in this paper. The sizing algorithm explicitly accounts for the Equivalent Consumption Minimization Strategy (ECMS). The performance is compared with the optimal consumption, found using an off-line strategy via Dynamic Programming. The sizing methodology provides guidance for choosing the fuel cell size and the number of supercapacitors. The results also include an analysis of oversizing the fuel cell and of varying the parameters of the energy management strategy. The simulation results highlight the importance of integrating sizing and energy management in fuel cell hybrid vehicles.
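A single time step of an ECMS power split of the kind embedded in the sizing loop might be sketched as follows; the hydrogen consumption map, equivalence factors, and power limits are illustrative assumptions, not the paper's vehicle data:

```python
def ecms_split(p_demand, soc, p_fc_max=50.0, n_grid=101,
               s_charge=1.6, s_discharge=1.3, soc_target=0.6):
    """One time step of an Equivalent Consumption Minimization Strategy for
    a fuel cell/supercapacitor powertrain: choose the fuel cell power (kW)
    minimizing hydrogen use plus equivalence-weighted supercapacitor energy.
    The fuel map and equivalence factors are illustrative assumptions."""
    def fuel_rate(p_fc):  # convex toy hydrogen consumption map (g/s)
        return 0.05 + 0.016 * p_fc + 0.0001 * p_fc ** 2 if p_fc > 0 else 0.0
    # equivalence factor steered by the supercapacitor state of charge
    s = s_discharge + (s_charge - s_discharge) * (soc_target - soc + 0.5)
    best = None
    for i in range(n_grid):
        p_fc = p_fc_max * i / (n_grid - 1)
        p_sc = p_demand - p_fc            # supercapacitor covers the remainder
        cost = fuel_rate(p_fc) + s * 0.016 * p_sc
        if best is None or cost < best[0]:
            best = (cost, p_fc, p_sc)
    return best[1], best[2]

p_fc, p_sc = ecms_split(p_demand=30.0, soc=0.5)
```

A lower state of charge raises the equivalence factor and shifts the split toward the fuel cell, which is the charge-sustaining behaviour the strategy's parameters are tuned for in the sizing study.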
Kozel, Tomas; Stary, Milos
2017-12-01
The main advantage of stochastic forecasting is the fan of possible values, which a deterministic forecasting method cannot provide. The future development of a random process is described better by stochastic than by deterministic forecasting, and discharge in a measurement profile can be treated as a random process. This article presents the construction and application of a forecasting model for a managed large open water reservoir with a supply function. The model is based on neural networks (NN) and zone models; it forecasts values of average monthly flow from input values of average monthly flow, the learned neural network, and random numbers. Part of the data was sorted into one moving zone, created around the last measured average monthly flow, and the correlation matrix was assembled only from data belonging to that zone. The model was compiled to forecast 1 to 12 months ahead, using 2 to 11 backward monthly flows (NN inputs) for model construction. The data were de-skewed using the Box-Cox transformation (Box and Cox, 1964), with the parameter r found by optimization, and then transformed to the standard normal distribution. The data have a monthly step and the forecast is not recurring. A 90-year-long real flow series was used to compile the model: the first 75 years were used for model calibration (the input-output relationship matrix) and the last 15 years only for validation. Model outputs were compared with the real flow series. For the comparison between the real flow series (a 100% successful forecast) and the forecasts, both were applied to the management of an artificial reservoir. The course of water reservoir management using a genetic algorithm (GA) + the real flow series was compared with a fuzzy model + the forecast made by the moving zone model. During the evaluation, the best zone size was sought. Results show that the highest number of inputs did not give the best results, and the ideal zone size lies in the interval from 25 to 35, when the course of management was almost the same for
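The Box-Cox de-skewing step can be sketched directly, with the transform parameter chosen by maximizing the profile log-likelihood over a grid (the synthetic "flow" series below is a placeholder for the 90-year record):

```python
import math

def boxcox(x, lam):
    """Box-Cox transform (the article's optimized parameter r)."""
    return [math.log(v) if lam == 0 else (v ** lam - 1.0) / lam for v in x]

def boxcox_loglik(x, lam):
    """Profile log-likelihood of the Box-Cox parameter (normality target)."""
    y = boxcox(x, lam)
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / n
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in x)

def best_lambda(x):
    grid = [i / 100.0 for i in range(-100, 101)]
    return max(grid, key=lambda lam: boxcox_loglik(x, lam))

# skewed synthetic "monthly flow" series, a placeholder for the 90-year record
flows = [math.exp(0.5 * math.sin(i) + 0.1 * (i % 7)) + 1.0 for i in range(120)]
lam = best_lambda(flows)
transformed = boxcox(flows, lam)
```

Standardizing the transformed series then yields the approximately normal inputs the zone model and neural network are trained on; forecasts are back-transformed through the inverse Box-Cox mapping.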
Simulation-optimization framework for multi-site multi-season hybrid stochastic streamflow modeling
Srivastav, Roshan; Srinivasan, K.; Sudheer, K. P.
2016-11-01
A simulation-optimization (S-O) framework is developed for the hybrid stochastic modeling of multi-site multi-season streamflows. The multi-objective optimization model formulated is the driver and the multi-site, multi-season hybrid matched block bootstrap model (MHMABB) is the simulation engine within this framework. The multi-site multi-season simulation model is the extension of the existing single-site multi-season simulation model. A robust and efficient evolutionary-search-based technique, namely, the non-dominated sorting genetic algorithm (NSGA-II), is employed as the solution technique for the multi-objective optimization within the S-O framework. The objective functions employed are related to the preservation of the multi-site critical deficit run sum, and the constraints introduced are concerned with the hybrid model parameter space and the preservation of certain statistics (such as inter-annual dependence and/or skewness of aggregated annual flows). The efficacy of the proposed S-O framework is brought out through a case example from the Colorado River basin. The proposed multi-site multi-season model AMHMABB (whose parameters are obtained from the proposed S-O framework) preserves the temporal as well as the spatial statistics of the historical flows. Also, the other multi-site deficit run characteristics, namely, the number of runs, the maximum run length, the mean run sum and the mean run length, are well preserved by the AMHMABB model. Overall, the proposed AMHMABB model is able to show better streamflow modeling performance when compared with the simulation-based SMHMABB model, plausibly due to the significant role played by: (i) the objective functions related to the preservation of the multi-site critical deficit run sum; (ii) the huge hybrid model parameter space available for the evolutionary search and (iii) the constraint on the preservation of the inter-annual dependence. Split-sample validation results indicate that the AMHMABB model is
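The non-parametric core of the simulation engine, block bootstrapping of a seasonal series, can be sketched as follows; the matching of block-joining values and the hybrid parametric component of MHMABB are omitted, and the toy series is a placeholder for the Colorado River record:

```python
import random

def block_bootstrap(series, block_len, n_out, seed=0):
    """Moving-block bootstrap of a seasonal streamflow series. The paper's
    matched hybrid bootstrap (MHMABB) additionally matches block-joining
    values and mixes in a parametric component; this is the plain
    non-parametric core only."""
    rng = random.Random(seed)
    n = len(series)
    out = []
    while len(out) < n_out:
        start = rng.randrange(n - block_len + 1)   # random block start
        out.extend(series[start:start + block_len])
    return out[:n_out]

# toy 30-year monthly series with a 3-month high-flow season (placeholder)
monthly = [140.0 if m % 12 in (5, 6, 7) else 100.0 for m in range(360)]
synthetic = block_bootstrap(monthly, block_len=12, n_out=360)
```

Resampling in year-long blocks preserves within-year (seasonal) dependence by construction, which is why the run-sum and run-length statistics discussed above can be reproduced without a parametric dependence model.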
Selroos, J. O.; Appleyard, P.; Bym, T.; Follin, S.; Hartley, L.; Joyce, S.; Munier, R.
2015-12-01
In 2011 the Swedish Nuclear Fuel and Waste Management Company (SKB) applied for a license to start construction of a final repository for spent nuclear fuel at Forsmark in Northern Uppland, Sweden. The repository is to be built at approximately 500 m depth in crystalline rock. A stochastic, discrete fracture network (DFN) concept was chosen for interpreting the surface-based (incl. boreholes) data, and for assessing the safety of the repository in terms of groundwater flow and flow pathways to and from the repository. Once repository construction starts, also underground data such as tunnel pilot borehole and tunnel trace data will become available. It is deemed crucial that DFN models developed at this stage honors the mapped structures both in terms of location and geometry, and in terms of flow characteristics. The originally fully stochastic models will thus increase determinism towards the repository. Applying the adopted probabilistic framework, predictive modeling to support acceptance criteria for layout and disposal can be performed with the goal of minimizing risks associated with the repository. This presentation describes and illustrates various methodologies that have been developed to condition stochastic realizations of fracture networks around underground openings using borehole and tunnel trace data, as well as using hydraulic measurements of inflows or hydraulic interference tests. The methodologies, implemented in the numerical simulators ConnectFlow and FracMan/MAFIC, are described in some detail, and verification tests and realistic example cases are shown. Specifically, geometric and hydraulic data are obtained from numerical synthetic realities approximating Forsmark conditions, and are used to test the constraining power of the developed methodologies by conditioning unconditional DFN simulations following the same underlying fracture network statistics. Various metrics are developed to assess how well the conditional simulations compare to
A hybrid stochastic approach for self-location of wireless sensors in indoor environments.
Lloret, Jaime; Tomas, Jesus; Garcia, Miguel; Canovas, Alejandro
2009-01-01
Indoor location systems, especially those using wireless sensor networks, are used in many application areas. While the need for these systems is widely proven, there is a clear lack of accuracy. Many of the implemented applications have high errors in their location estimation because of the issues arising in the indoor environment. Two different approaches have been proposed for WLAN location systems: on the one hand, the so-called deductive methods take into account the physical properties of signal propagation. These systems require a propagation model, an environment map, and the positions of the radio stations. On the other hand, the so-called inductive methods require a previous training phase where the system learns the received signal strength (RSS) at each location. This phase can be very time consuming. This paper proposes a new stochastic approach based on a combination of deductive and inductive methods whereby wireless sensors can determine their position using WLAN technology inside a floor of a building. Our goal is to reduce the training phase in an indoor environment, but without a loss of precision. Finally, we compare the measurements taken using our proposed method in a real environment with the measurements taken by other developed systems. Comparisons between the proposed system and other hybrid methods are also provided.
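The deductive half of the hybrid approach, matching measured RSS against a log-distance path-loss model, can be sketched as a grid search; the access-point layout and propagation parameters are illustrative assumptions, and the inductive training phase that the paper combines with it is not shown:

```python
import math

def rss_model(d, tx_power=-40.0, path_exp=3.0):
    """Deductive part: log-distance path-loss prediction (dBm at d metres).
    Parameter values are typical indoor assumptions, not the paper's."""
    return tx_power - 10.0 * path_exp * math.log10(max(d, 0.1))

def locate(aps, rss_meas, step=0.25, size=20.0):
    """Grid search for the position whose model-predicted RSS best matches
    the measured RSS from all access points (least squares)."""
    best = None
    steps = int(size / step) + 1
    for i in range(steps):
        for j in range(steps):
            x, y = i * step, j * step
            err = sum((rss_model(math.hypot(x - ax, y - ay)) - r) ** 2
                      for (ax, ay), r in zip(aps, rss_meas))
            if best is None or err < best[0]:
                best = (err, x, y)
    return best[1], best[2]

aps = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0), (20.0, 20.0)]  # corner APs
true_pos = (6.0, 9.0)
meas = [rss_model(math.hypot(true_pos[0] - ax, true_pos[1] - ay))
        for ax, ay in aps]
est = locate(aps, meas)
```

In practice the measured RSS is noisy and the path-loss exponent is unknown; the inductive step uses a small set of fingerprints to calibrate the model, which is how the hybrid method shortens the training phase.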
International Nuclear Information System (INIS)
Babykina, Génia; Brînzei, Nicolae; Aubry, Jean-François; Deleuze, Gilles
2016-01-01
The paper proposes a modeling framework to support Monte Carlo simulations of the behavior of a complex industrial system. The aim is to analyze the system's dependability in the presence of random events, described by any type of probability distribution. Continuous dynamic evolutions of physical parameters are taken into account by a system of differential equations. Dynamic reliability is chosen as the theoretical framework. Based on finite state automata theory, the formal model is built by parallel composition of elementary sub-models using a bottom-up approach. Considerations of a stochastic nature lead to a model called the Stochastic Hybrid Automaton. The Scilab/Scicos open source environment is used for implementation. The case study is carried out on an example of a steam generator of a nuclear power plant. The behavior of the system is studied by exploring its trajectories. Possible system trajectories are analyzed both empirically, using the results of Monte Carlo simulations, and analytically, using the formal system model. The obtained results are shown to be relevant. The Stochastic Hybrid Automaton appears to be a suitable tool to address the dynamic reliability problem and to model real systems of high complexity; the bottom-up design provides precision and coherency of the system model. - Highlights: • A part of a nuclear power plant is modeled in the context of dynamic reliability. • A Stochastic Hybrid Automaton is used as an input model for Monte Carlo simulations. • The model is formally built using a bottom-up approach. • The behavior of the system is analyzed empirically and analytically. • A formally built SHA is shown to be a suitable tool for approaching dynamic reliability.
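A minimal sketch in the spirit of a stochastic hybrid automaton: a scalar physical variable follows a mode-dependent ODE between discrete switches whose sojourn times are random. The exponential sojourns, rates and mode dynamics are illustrative assumptions, not the paper's steam-generator model.

```python
import math
import random

def run_sha(t_end=10.0, rate=0.5, seed=0):
    rng = random.Random(seed)
    t, x, mode = 0.0, 1.0, "on"
    while t < t_end:
        dwell = min(rng.expovariate(rate), t_end - t)  # random sojourn time
        k = 0.3 if mode == "on" else -0.2              # mode-dependent drift
        x *= math.exp(k * dwell)                       # exact solution of dx/dt = k*x
        t += dwell
        mode = "off" if mode == "on" else "on"         # discrete transition
    return x
```

A Monte Carlo dependability study in the paper's style would repeat such trajectories many times and estimate the probability that `x` leaves a safe region.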
Andritsos, Nikolaos D; Mataragas, Marios; Paramithiotis, Spiros; Drosinos, Eleftherios H
2013-12-01
Listeria monocytogenes poses a serious threat to public health, and the majority of cases of human listeriosis are associated with contaminated food. Reliable microbiological testing is needed for effective pathogen control by the food industry and competent authorities. The aims of this work were to estimate the prevalence and concentration of L. monocytogenes in minced pork meat by the application of a Bayesian modeling approach, and also to determine the performance of three culture media commonly used for detecting L. monocytogenes in foods from a deterministic and stochastic perspective. Samples (n = 100) collected from local markets were tested for L. monocytogenes using in parallel the PALCAM, ALOA and RAPID'L.mono selective media according to ISO 11290-1:1996 and 11290-2:1998 methods. Presence of the pathogen was confirmed by conducting biochemical and molecular tests. Independent experiments (n = 10) for model validation purposes were performed. Performance attributes were calculated from the presence-absence microbiological test results by combining the results obtained from the culture media and confirmative tests. Dirichlet distribution, the multivariate expression of a Beta distribution, was used to analyze the performance data from a stochastic perspective. No L. monocytogenes was enumerated by direct-plating. The media were better at ruling in L. monocytogenes presence than ruling it out. Sensitivity and specificity varied depending on the culture-dependent method. None of the culture media alone was perfect in detecting L. monocytogenes in minced pork meat. The use of at least two culture media in parallel enhanced the efficiency of L. monocytogenes detection. Bayesian modeling may reduce the time needed to draw conclusions regarding L. monocytogenes presence and the uncertainty of the results obtained. Furthermore, the problem of observing zero counts may be overcome by applying Bayesian analysis, making the determination of a test performance feasible.
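A hedged sketch of the Bayesian idea applied above: with a uniform Beta(1, 1) prior, the posterior for a medium's sensitivity after observing s detections out of n confirmed-positive samples is Beta(1 + s, 1 + n − s). The counts below are hypothetical, not the study's data.

```python
def beta_posterior_mean(successes, trials, a=1.0, b=1.0):
    # Posterior mean of a Beta(a, b) prior updated with binomial data:
    # Beta(a + successes, b + trials - successes)
    return (a + successes) / (a + b + trials)

sens = beta_posterior_mean(18, 20)   # hypothetical: 18/20 detections
```

The Dirichlet distribution used in the study generalizes this to several jointly estimated performance categories at once.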
Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P
2008-01-01
Introduction and Examples: Introduction; Examples of data sets. Basic Model Fitting: Introduction; Maximum-likelihood estimation for a geometric model; Maximum-likelihood for the beta-geometric model; Modelling polyspermy; Which model?; What is a model for?; Mechanistic models. Function Optimisation: Introduction; MATLAB: graphs and finite differences; Deterministic search methods; Stochastic search methods; Accuracy and a hybrid approach. Basic Likelihood Tools: Introduction; Estimating standard errors and correlations; Looking at surfaces: profile log-likelihoods; Confidence regions from profiles; Hypothesis testing in model selection; Score and Wald tests; Classical goodness of fit; Model selection bias. General Principles: Introduction; Parameterisation; Parameter redundancy; Boundary estimates; Regression and influence; The EM algorithm; Alternative methods of model fitting; Non-regular problems. Simulation Techniques: Introduction; Simulating random variables; Integral estimation; Verification; Monte Carlo inference; Estimating sampling distributi...
International Nuclear Information System (INIS)
Jiang, Yibo; Xu, Jian; Sun, Yuanzhang; Wei, Congying; Wang, Jing; Ke, Deping; Li, Xiong; Yang, Jun; Peng, Xiaotao; Tang, Bowen
2017-01-01
Highlights: • Improving the utilization of wind power by the demand response of a residential hybrid energy system. • An optimal scheduling of a home energy management system integrating micro-CHP. • The scattered response capability of consumers is aggregated by a demand bidding curve. • A stochastic day-ahead economic dispatch model considering demand response and wind power. - Abstract: As the installed capacity of wind power grows, the stochastic variability of wind power leads to a mismatch between demand and generated power. Employing the regulating capability of demand to improve the utilization of wind power has become a new research direction. Meanwhile, micro combined heat and power (micro-CHP) allows residential consumers to choose between generating electricity themselves or purchasing it from the utility company, which forms a residential hybrid energy system. However, the impact of demand response from a hybrid energy system containing micro-CHP on large-scale wind power utilization has not been analyzed quantitatively. This paper proposes an operation optimization model of the residential hybrid energy system based on price response, integrating micro-CHP and smart appliances intelligently. Moreover, a novel load aggregation method is adopted to centralize the scattered response capability of residential load. At the power grid level, a day-ahead stochastic economic dispatch model considering demand response and wind power is constructed. Furthermore, simulations are conducted on the modified 6-bus system and the IEEE 118-bus system. The results show that with the proposed method, the wind power curtailment of the system decreases by 78% in the 6-bus system. In the meantime, the energy costs of residential consumers and the operating costs of the power system are reduced by 10.7% and 11.7%, respectively, in the 118-bus system.
International Nuclear Information System (INIS)
French, Roy; Stenger, Drake C.
2005-01-01
Structure of Wheat streak mosaic virus (WSMV) populations derived from a common founding event and subjected to serial passage at high multiplicity of infection (MOI) was evaluated. The founding population was generated by limiting dilution inoculation. Lineages of known pedigree were sampled at passage 9 (two populations) and at passage 15, with (three populations) or without mixing (four populations) of lineages at passage 10. Polymorphism within each population was assessed by sequencing 17-21 clones containing a 1371 nt region (WSMV-Sidney 81 nts 8001-9371) encompassing the entire coat protein cistron and flanking regions. Mutation frequency averaged ∼5.0 × 10⁻⁴/nt across all populations and ranged from 2.4 to 11.6 × 10⁻⁴/nt within populations, but did not consistently increase or decrease with the number of passages removed from the founding population. Shared substitutions (19 nonsynonymous, 10 synonymous, and 3 noncoding) occurred at 32 sites among 44 haplotypes. Only four substitutions became fixed (frequency = 100%) within a population and nearly one third (10/32) never achieved a frequency of 10% or greater in any sampled population. Shared substitutions were randomly distributed with respect to genome position, with transitions outnumbering transversions 5.4:1 and a clear bias for A to G and U to C substitutions. Haplotype composition of each population was unique, with the complexity of each population varying unpredictably, in that the number and frequency of haplotypes within a lineage were not correlated with the number of passages removed from the founding population or whether the population was derived from a single or mixed lineage. The simplest explanation is that plant virus lineages, even those propagated at high MOI, are subject to frequent, narrow genetic bottlenecks during systemic movement that result in low effective population size and stochastic changes in population structure upon serial passage.
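A back-of-envelope check of the reported order of magnitude: a per-nucleotide mutation frequency is the number of mutations observed divided by the total nucleotides sequenced. The mutation count below is hypothetical; the clone count and region length are taken from the abstract.

```python
clones = 20          # clones sequenced per population (17-21 in the study)
region_len = 1371    # length of the sequenced region in nt
mutations = 14       # hypothetical count of observed mutations
freq = mutations / (clones * region_len)   # mutations per nucleotide sequenced
```

With these numbers, `freq` lands near the reported ∼5 × 10⁻⁴/nt average.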
Gottwald, G.A.; Crommelin, D.T.; Franzke, C.L.E.; Franzke, C.L.E.; O'Kane, T.J.
2017-01-01
In this chapter we review stochastic modelling methods in climate science. First we provide a conceptual framework for stochastic modelling of deterministic dynamical systems based on the Mori-Zwanzig formalism. The Mori-Zwanzig equations contain a Markov term, a memory term and a term suggestive of
Stability in distribution of a stochastic hybrid competitive Lotka–Volterra model with Lévy jumps
International Nuclear Information System (INIS)
Zhao, Yu; Yuan, Sanling
2016-01-01
Stability in distribution, implying the existence of an invariant probability measure, is an important property of stochastic hybrid systems. However, the effect of Lévy jumps on stability in distribution is still unclear. In this paper, we consider an n-species competitive Lotka–Volterra model with Lévy jumps under regime-switching. First, we prove the existence of a global positive solution and obtain upper and lower bounds. Then, asymptotic stability in distribution, the main result of our paper, is derived under some sufficient conditions. Finally, numerical simulations are carried out to support our theoretical results and a brief discussion is given.
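An illustrative Euler-Maruyama simulation of a single-species stochastic logistic equation with Markov regime switching and random jumps: a toy one-dimensional instance of the n-species hybrid model class discussed above. All parameter values are assumptions for illustration, and the jump term is a crude stand-in for a Lévy jump component.

```python
import math
import random

def simulate(x0=0.5, t_end=10.0, dt=1e-3, seed=1):
    rng = random.Random(seed)
    params = {0: (1.0, 0.2), 1: (0.5, 0.4)}   # (growth rate, noise) per regime
    regime, x = 0, x0
    for _ in range(int(t_end / dt)):
        r, sigma = params[regime]
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += x * (r - x) * dt + sigma * x * dw  # logistic drift + diffusion
        if rng.random() < 0.1 * dt:             # Markov regime switch
            regime = 1 - regime
        if rng.random() < 0.05 * dt:            # jump event (Levy-type stand-in)
            x *= 1.0 + rng.uniform(-0.2, 0.2)
        x = max(x, 1e-9)                        # keep the state positive
    return x
```

Stability in distribution would be probed numerically by running many such trajectories and checking that the empirical law of `x(t_end)` stops depending on `x0`.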
International Nuclear Information System (INIS)
Gormezano, C.; Hess, W.; Ichtchenko, G.
1980-07-01
Data already obtained on the Wega Tokamak with lower hybrid heating (f = 500 MHz, P_HF = 130 kW) are revisited in the light of recent theories on ion stochastic heating and quasi-linear electron Landau damping. With these theories it is possible to correctly estimate the fast ion mean energy, the H.F. power density coupled to the ions and that coupled to the electrons. The values of the parallel index of refraction, N_∥, necessary to obtain good quantitative agreement between experiment and theoretical estimates are the same for the ions and for the electrons, albeit at higher values than expected.
Stochastic optimization methods
Marti, Kurt
2005-01-01
Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.
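A sketch of one deterministic-substitute construction in the spirit of the text: the sample-average approximation, in which an expectation is replaced by an empirical mean over sampled scenarios. The stochastic program min_x E[(x − Z)²] is used as a toy example; its SAA minimiser is simply the sample mean, and Z ~ N(2, 1) is an illustrative choice, not from the book.

```python
import random

def saa_minimiser(samples):
    # For the quadratic loss E[(x - Z)^2], the empirical substitute
    # problem min_x (1/N) * sum((x - z_i)^2) is solved by the sample mean.
    return sum(samples) / len(samples)

rng = random.Random(0)
zs = [rng.gauss(2.0, 1.0) for _ in range(10000)]
x_star = saa_minimiser(zs)   # close to the true optimum x = 2
```

The approximation methods listed in the abstract (Taylor expansion, response surfaces, stochastic approximation) are different routes to the same end: a deterministic problem a solver can handle.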
A hybrid POMDP-BDI agent architecture with online stochastic planning and plan caching
CSIR Research Space (South Africa)
Moodley, D
2016-12-01
Full Text Available This article presents an agent architecture for controlling an autonomous agent in stochastic, noisy environments. The architecture combines the partially observable Markov decision process (POMDP) model with the belief-desire-intention (BDI...
Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges
2012-05-01
A vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. The main objectives of the vehicle routing problem are to minimize the traveled distance, total traveling time, number of vehicles and the cost function of transportation. Reducing these variables decreases the total cost and increases the driver's satisfaction level. On the other hand, this satisfaction, which decreases as the service time increases, is considered an important logistic problem for a company. Service time governed by a probability variable varies stochastically, an effect ignored in classical routing problems. This paper investigates the problem of increasing service time by using a stochastic time for each tour, such that the total traveling time of the vehicles is limited to a specific bound with a defined probability. Since exact solution of the vehicle routing problem, which belongs to the category of NP-hard problems, is not practical at a large scale, a hybrid algorithm based on simulated annealing with genetic operators is proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the results of the proposed algorithm were compared with results obtained by the Lingo 8 software. The obtained results indicate the efficiency of the proposed hybrid simulated annealing algorithm.
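A hedged sketch of the hybrid simulated-annealing idea on a toy routing instance: anneal a tour over a small set of points, using a swap move as a simple stand-in for the genetic operators of the paper. This is not the authors' implementation, and the cooling schedule and move set are illustrative assumptions.

```python
import math
import random

def tour_length(tour, pts):
    # Total length of the closed tour visiting pts in the given order
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(pts, temp=1.0, cooling=0.995, iters=5000, seed=42):
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    best = cost = tour_length(tour, pts)
    for _ in range(iters):
        i, j = rng.sample(range(len(pts)), 2)
        tour[i], tour[j] = tour[j], tour[i]          # candidate swap move
        new = tour_length(tour, pts)
        if new < cost or rng.random() < math.exp((cost - new) / max(temp, 1e-9)):
            cost = new
            best = min(best, cost)
        else:
            tour[i], tour[j] = tour[j], tour[i]      # undo rejected move
        temp *= cooling                              # geometric cooling
    return best
```

A stochastic-time variant would replace `tour_length` with an estimate of the probability that a tour's travel time stays under the limit.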
A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition
International Nuclear Information System (INIS)
Zheng Zheming; Stephens, Ryan M.; Braatz, Richard D.; Alkire, Richard C.; Petzold, Linda R.
2008-01-01
A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation which adaptively selects exact discrete stochastic simulation for the appropriate reaction whenever that is necessary. The HMKMC method is seen to be accurate and highly efficient
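A toy sketch of the tau-leaping idea used within the hybrid method: for a single decay reaction A → ∅ with rate constant c, fire a Poisson-distributed number of reactions per fixed leap τ instead of simulating every event. The adaptive switch back to exact SSA steps, which the HMKMC method performs when propensities get small, is omitted here.

```python
import math
import random

def poisson_sample(rng, lam):
    # Knuth's multiplicative method; adequate for the modest lam used here
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap_decay(n0=1000, c=0.1, tau=0.1, t_end=10.0, seed=3):
    rng = random.Random(seed)
    n, t = n0, 0.0
    while t < t_end and n > 0:
        firings = poisson_sample(rng, c * n * tau)  # reactions fired in this leap
        n = max(n - firings, 0)                     # clamp to avoid negative counts
        t += tau
    return n
```

With these parameters the exact process has mean n0·e⁻¹ ≈ 368 molecules at t_end, so a single tau-leap run should land in that neighbourhood.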
Probabilistic Analysis Methods for Hybrid Ventilation
DEFF Research Database (Denmark)
Brohus, Henrik; Frier, Christian; Heiselberg, Per
This paper discusses a general approach for the application of probabilistic analysis methods in the design of ventilation systems. The aims and scope of probabilistic versus deterministic methods are addressed with special emphasis on hybrid ventilation systems. A preliminary application of stochastic differential equations is presented, comprising a general heat balance for an arbitrary number of loads and zones in a building to determine the thermal behaviour under random conditions.
Kiesmüller, G.P.
2003-01-01
This paper addresses the control problem of a stochastic recovery system with two stocking points and different leadtimes for production and remanufacturing. For such systems the optimal control policy for a linear cost model is not known. Therefore, in the literature several heuristic policies are
Modeling and Analysis of Networked Control Systems Using Stochastic Hybrid Systems
2014-09-03
The stability notions considered can be classified into two broad categories: bounds on the probability that the state of the system “misbehaves”, or alternative types of conditions. One is focused on making sure that the probability that the stochastic process “misbehaves” is very small.
International Nuclear Information System (INIS)
Zare Oskouei, Morteza; Sadeghi Yazdankhah, Ahmad
2015-01-01
Highlights: • A two-stage objective function is proposed for the optimization problem. • The hourly-based optimal contractual agreement is calculated. • A scenario-based stochastic optimization problem is solved. • System frequency is improved by utilizing the PSH unit. - Abstract: This paper proposes an operating strategy for a micro-grid-connected wind farm, photovoltaic and pump-storage hybrid system. The strategy consists of two stages. In the first stage, the optimal hourly contractual agreement is determined. The second stage maximizes profit by adapting the energy management strategy of wind and photovoltaic generation in coordination with the optimum operating schedule of the storage device under frequency-based pricing for a day-ahead electricity market. The pump-storage hydro plant is utilized to minimize unscheduled interchange flow and maximize the system benefit by participating in frequency control based on energy price. Because of uncertainties in the power generation of renewable sources and in market prices, generation scheduling is modeled as a stochastic optimization problem. Uncertainties of parameters are modeled by scenario generation and scenario reduction methods. A powerful optimization algorithm is implemented using the General Algebraic Modeling System (GAMS) with CPLEX. In order to verify the efficiency of the method, the algorithm is applied to various scenarios with different wind and photovoltaic power productions in a day-ahead electricity market. The numerical results demonstrate the effectiveness of the proposed approach.
Piecewise deterministic processes in biological models
Rudnicki, Ryszard
2017-01-01
This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...
Sequential neural models with stochastic layers
DEFF Research Database (Denmark)
Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich
2016-01-01
How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over
A stochastic model for hybrid off-grid wind power systems
Energy Technology Data Exchange (ETDEWEB)
Fouladgar, Javad [Inst. de Recherche en Electronique et en Electrotechnique de Nantes Atlantique (IREENA), Saint-Nazaire (France)
2008-07-01
Long-term wind speed and wind power forecasting of a hybrid installation are studied. A statistical approach based on Weibull distribution is used to predict the auxiliary power required or the exceeding power produced for an isolated site. The presence of a suitable storage system has been taken into account. (orig.)
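A back-of-envelope sketch of the Weibull-based approach described above: the mean wind power available under a Weibull wind-speed distribution, assuming an idealised P = ½·ρ·Cp·A·v³ power curve with no cut-in or rated-speed limits. The shape k, scale c and turbine parameters are illustrative assumptions, not values from the paper.

```python
import math

def mean_wind_power(k=2.0, c=8.0, rho=1.225, cp=0.4, area=100.0):
    """Mean power (W) under Weibull(k, c) wind with an idealised cubic curve.

    Uses the closed form E[v^3] = c^3 * Gamma(1 + 3/k) for a Weibull
    distribution with shape k and scale c (m/s).
    """
    ev3 = c ** 3 * math.gamma(1.0 + 3.0 / k)
    return 0.5 * rho * cp * area * ev3
```

Comparing such a mean against the load forecast is the basic step for sizing the auxiliary power or storage of an isolated hybrid site.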
Approximating Preemptive Stochastic Scheduling
Megow Nicole; Vredeveld Tjark
2009-01-01
We present constant approximative policies for preemptive stochastic scheduling. We derive policies with a guaranteed performance ratio of 2 for scheduling jobs with release dates on identical parallel machines subject to minimizing the sum of weighted completion times. Our policies as well as their analysis apply also to the recently introduced more general model of stochastic online scheduling. The performance guarantee we give matches the best result known for the corresponding determinist...
Stochastic reactive power dispatch in hybrid power system with intermittent wind power generation
International Nuclear Information System (INIS)
Taghavi, Reza; Seifi, Ali Reza; Samet, Haidar
2015-01-01
Environmental concerns besides fuel costs are the predominant reasons for the unprecedented, escalating integration of wind turbines in power systems. Operation and planning of power systems are affected by this type of energy due to the intermittent nature of wind speed inputs, which introduces high uncertainty into the optimization output variables. Consequently, in order to model this high inherent uncertainty, a PRPO (probabilistic reactive power optimization) framework should be devised. Although MC (Monte-Carlo) techniques can solve the PRPO with high precision, PEMs (point estimate methods) can preserve accuracy and attain reasonable results while diminishing the computational effort. This paper also introduces a methodology for optimally dispatching the reactive power in the transmission system while minimizing the active power losses. The optimization problem is formulated as an LFP (linear fuzzy programming) problem. The core of the problem lies in the generation of 2m + 1 point estimates for solving the PRPO, where m is the number of input stochastic variables. The proposed methodology is investigated using the IEEE 14-bus test system equipped with HVDC (high voltage direct current), UPFC (unified power flow controller) and DFIG (doubly fed induction generator) devices. The accuracy of the method is demonstrated in the case study. - Highlights: • This paper uses stochastic loads in the optimization process. • AC–DC load flow is modified to use some advantages of the DC part in the optimization process. • UPFC and DFIG are simulated in a way that could be effective in the optimization process. • Fuzzy sets are used as an uncertainty analysis tool in the optimization.
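A sketch of the 2m + 1 point-estimate idea for a single standard-normal input (m = 1, hence three evaluation points): estimate E[g(Z)] from three deterministic model evaluations instead of a Monte Carlo run. For a symmetric normal input, Hong's scheme reduces to the classical three-point rule below; the choice of test function is illustrative.

```python
import math

def pem_2m_plus_1(g, mu=0.0, sigma=1.0):
    """Three-point estimate of E[g(X)] for X ~ Normal(mu, sigma).

    For a normal input the standard locations are mu and mu +/- sqrt(3)*sigma
    with weights 2/3, 1/6, 1/6 (exact for polynomials up to degree 5).
    """
    pts = [mu, mu + math.sqrt(3.0) * sigma, mu - math.sqrt(3.0) * sigma]
    wts = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]
    return sum(w * g(p) for w, p in zip(wts, pts))
```

With m stochastic inputs the scheme needs only 2m + 1 deterministic load-flow solutions, which is the computational saving over Monte Carlo that the abstract emphasizes.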
Evolving Stochastic Learning Algorithm based on Tsallis entropic index
Anastasiadis, A. D.; Magoulas, G. D.
2006-03-01
In this paper, inspired by our previous algorithm, which was based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise that is characterized by the nonextensive entropic index q, regulated by a weight decay term. The behavior of the learning algorithm can be made more stochastic or deterministic depending on the trade-off between the temperature T and the q values. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our experimental study verifies that there are indeed improvements in the convergence speed of this new evolving stochastic learning algorithm, which makes learning faster than using the original Hybrid Learning Scheme (HLS). In addition, experiments are conducted to explore the influence of the entropic index q and temperature T on the convergence speed and stability of the proposed method.
International Nuclear Information System (INIS)
Moura, Scott J.; Fathy, Hosam K.; Stein, Jeffrey L.; Callaway, Duncan S.
2010-01-01
Recent results in plug-in hybrid electric vehicle (PHEV) power management research suggest that battery energy capacity requirements may be reduced through proper power management algorithm design. Specifically, algorithms which blend fuel and electricity during the charge depletion phase using smaller batteries may perform equally to algorithms that apply electric-only operation during charge depletion using larger batteries. The implication of this result is that ''blended'' power management algorithms may reduce battery energy capacity requirements, thereby lowering the acquisition costs of PHEVs. This article seeks to quantify the tradeoffs between power management algorithm design and battery energy capacity, in a systematic and rigorous manner. Namely, we (1) construct dynamic PHEV models with scalable battery energy capacities, (2) optimize power management using stochastic control theory, and (3) develop simulation methods to statistically quantify the performance tradeoffs. The degree to which blending enables smaller battery energy capacities is evaluated as a function of both daily driving distance and energy (fuel and electricity) pricing. (author)
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...
Bich, Cao Thi; Dat, Le Thanh; Van Hop, Nguyen; An, Nguyen Ba
2018-04-01
Entanglement plays a vital and in many cases irreplaceable role in quantum network communication. Here, we propose two new protocols to jointly and remotely prepare a special so-called bipartite equatorial state which is hybrid in the sense that it entangles two Hilbert spaces with arbitrary different dimensions D and N (i.e., a type of entanglement between a quDit and a quNit). The quantum channels required to do that are, however, not necessarily hybrid. In fact, we utilize four high-dimensional Einstein-Podolsky-Rosen pairs, two of which are quDit-quDit entanglements, while the other two are quNit-quNit ones. In the first protocol the receiver has to be involved actively in the process of remote state preparation, while in the second protocol the receiver is passive as he/she needs to participate only in the final step for reconstructing the target hybrid state. Each protocol meets a specific circumstance that may be encountered in practice and both can be performed with unit success probability. Moreover, the concerned equatorial hybrid entangled state can also be jointly prepared for two receivers at two separated locations by slightly modifying the initial particles' distribution, thereby establishing between them an entangled channel ready for later use.
Stochastic switching in biology: from genotype to phenotype
International Nuclear Information System (INIS)
Bressloff, Paul C
2017-01-01
There has been a resurgence of interest in non-equilibrium stochastic processes in recent years, driven in part by the observation that the number of molecules (genes, mRNA, proteins) involved in gene expression are often of order 1–1000. This means that deterministic mass-action kinetics tends to break down, and one needs to take into account the discrete, stochastic nature of biochemical reactions. One of the major consequences of molecular noise is the occurrence of stochastic biological switching at both the genotypic and phenotypic levels. For example, individual gene regulatory networks can switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics. Gene expression at the single-cell level can also be regulated by changes in cell density at the population level, a process known as quorum sensing. In contrast to noise-driven phenotypic switching, the switching mechanism in quorum sensing is stimulus-driven and thus noise tends to have a detrimental effect. A common approach to modeling stochastic gene expression is to assume a large but finite system and to approximate the discrete processes by continuous processes using a system-size expansion. However, there is a growing need to have some familiarity with the theory of stochastic processes that goes beyond the standard topics of chemical master equations, the system-size expansion, Langevin equations and the Fokker–Planck equation. Examples include stochastic hybrid systems (piecewise deterministic Markov processes), large deviations and the Wentzel–Kramers–Brillouin (WKB) method, adiabatic reductions, and queuing/renewal theory. The major aim of
Stochasticity and determinism in models of hematopoiesis.
Kimmel, Marek
2014-01-01
This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.
Langevin equation with the deterministic algebraically correlated noise
International Nuclear Information System (INIS)
Ploszajczak, M.; Srokowski, T.
1995-01-01
Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author)
Deterministic Compressed Sensing
2011-11-01
Contents include Digital Communications, Group Testing, and deterministic design matrices (all bounds ignore the O() constants); the list of algorithms includes Iterative Hard Thresholding. Compressed sensing is information theoretically possible using any (2k, )-RIP sensing matrix. The following celebrated results of Candès, Romberg and Tao [54
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig
International Nuclear Information System (INIS)
1990-01-01
In the present report, data on RBE values for effects in tissues of experimental animals and man are analysed to assess whether, for specific tissues, the present dose limits or annual limits of intake based on Q values are adequate to prevent deterministic effects. (author)
Stochastic synchronization of neuronal populations with intrinsic and extrinsic noise.
Bressloff, Paul C
2011-05-03
We extend the theory of noise-induced phase synchronization to the case of a neural master equation describing the stochastic dynamics of an ensemble of uncoupled neuronal population oscillators with intrinsic and extrinsic noise. The master equation formulation of stochastic neurodynamics represents the state of each population by the number of currently active neurons, and the state transitions are chosen so that deterministic Wilson-Cowan rate equations are recovered in the mean-field limit. We apply phase reduction and averaging methods to a corresponding Langevin approximation of the master equation in order to determine how intrinsic noise disrupts synchronization of the population oscillators driven by a common extrinsic noise source. We illustrate our analysis by considering one of the simplest networks known to generate limit cycle oscillations at the population level, namely, a pair of mutually coupled excitatory (E) and inhibitory (I) subpopulations. We show how the combination of intrinsic independent noise and extrinsic common noise can lead to clustering of the population oscillators due to the multiplicative nature of both noise sources under the Langevin approximation. Finally, we show how a similar analysis can be carried out for another simple population model that exhibits limit cycle oscillations in the deterministic limit, namely, a recurrent excitatory network with synaptic depression; inclusion of synaptic depression into the neural master equation now generates a stochastic hybrid system.
STOCHASTIC METHODS IN RISK ANALYSIS
Directory of Open Access Journals (Sweden)
Vladimíra OSADSKÁ
2017-06-01
Full Text Available In this paper, we review basic stochastic methods which can be used to extend state-of-the-art deterministic analytical methods for risk analysis. We conclude that the standard deterministic analytical methods depend heavily on the practical experience and knowledge of the evaluator, and that stochastic methods should therefore be introduced. New risk analysis methods should account for the uncertainties in input values. We show how large the impact on the results of the analysis can be by solving a practical FMECA example with uncertainties modelled using Monte Carlo sampling.
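The proposed extension, replacing fixed point scores with sampled uncertain inputs, can be sketched for a single FMECA failure mode: the risk priority number RPN = S·O·D is computed over Monte Carlo draws. The score ranges below are invented for illustration, not taken from the paper:

```python
import random

def mc_fmeca_rpn(s_rng, o_rng, d_rng, n=20000, seed=1):
    """Monte Carlo RPN for one failure mode: severity, occurrence and
    detection are sampled uniformly from expert-supplied ranges instead
    of being fixed point scores."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        s = rng.uniform(*s_rng)
        o = rng.uniform(*o_rng)
        d = rng.uniform(*d_rng)
        samples.append(s * o * d)
    samples.sort()
    mean = sum(samples) / n
    p95 = samples[int(0.95 * n)]            # 95th-percentile RPN
    return mean, p95

mean, p95 = mc_fmeca_rpn((6, 8), (3, 5), (2, 4))
```

Reporting the 95th percentile alongside the mean is one way the sampled formulation conveys uncertainty that a single deterministic RPN cannot.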
Application of Hybrid Genetic Algorithm Routine in Optimizing Food and Bioengineering Processes
Directory of Open Access Journals (Sweden)
Jaya Shankar Tumuluru
2016-11-01
Full Text Available Optimization is a crucial step in the analysis of experimental results. Deterministic methods converge only on local optima and require exponentially more time as dimensionality increases. Stochastic algorithms can search the domain space efficiently; however, convergence is not guaranteed. This article demonstrates the novelty of the hybrid genetic algorithm (HGA), which combines stochastic and deterministic routines for improved optimization results. The new hybrid genetic algorithm is applied to the Ackley benchmark function as well as to case studies in food, biofuel, and biotechnology processes. For each case study, the hybrid genetic algorithm found a better optimum candidate than reported by the sources. In the case of food processing, the hybrid genetic algorithm improved the anthocyanin yield by 6.44%. Optimization of bio-oil production using HGA resulted in a 5.06% higher yield. In the enzyme production process, HGA predicted a 0.39% higher xylanase yield. Hybridization of the genetic algorithm with a deterministic algorithm resulted in an improved optimum compared to statistical methods.
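The stochastic-plus-deterministic pattern the article describes can be sketched compactly. The toy below is not the authors' HGA: it runs a small genetic algorithm for global search, then hands the best candidate to a deterministic coordinate search for polishing, on a simple convex objective (all settings invented):

```python
import random

def f(x):  # simple test objective standing in for the case studies
    return sum(xi * xi for xi in x)

def local_refine(x, step=0.5, iters=200):
    """Deterministic polish: try +/- step on each coordinate and halve
    the step whenever a full sweep yields no improvement."""
    best = f(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < best:
                    x, best, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, best

def hybrid_ga(dim=3, pop=30, gens=40, seed=2):
    rng = random.Random(seed)
    P = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):                      # stochastic global phase
        P.sort(key=f)
        elite = P[: pop // 2]
        children = []
        for _ in range(pop - len(elite)):
            a, b = rng.sample(elite, 2)        # crossover + mutation
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.3)
                             for ai, bi in zip(a, b)])
        P = elite + children
    return local_refine(min(P, key=f))         # deterministic polish

x, fx = hybrid_ga()
```

On multimodal objectives such as Ackley's function, the stochastic phase supplies a basin and the deterministic phase supplies precision, which is the division of labour the article exploits.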
Deterministic behavioural models for concurrency
DEFF Research Database (Denmark)
Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn
1993-01-01
This paper offers three candidates for a deterministic, noninterleaving behaviour model which generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled event structures, generalized trace languages in which the independence relation is context-dependent, and deterministic languages of pomsets.
National Research Council Canada - National Science Library
Khoo, Wai
1999-01-01
These problems model stochastic portfolio optimization problems (SPOPs), which assume deterministic unit weight and normally distributed unit return with known mean and variance for each item type...
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2012-01-01
Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm...
Directory of Open Access Journals (Sweden)
Edmundo Wallace Monteiro Lucas
2009-09-01
Full Text Available Hydrologic modeling is an important tool for the planning and management of water resources in river basins. In this work, a two-parameter monthly deterministic hydrologic model and the stochastic model ARIMA were applied to simulate the monthly runoff of the sub-basins of the Xingu hydrographic region in the State of Pará. The main objective was to simulate the monthly runoff using the two models and to compare their results. The deterministic hydrologic model applied has a simple structure and presented good results, but proved very sensitive to extreme precipitation events. The stochastic model ARIMA was able to capture the dynamics of the time series, presenting very satisfactory results for the simulation of the monthly runoff at the basin stations. Both models should be applied with caution during the rainy season, when extreme precipitation events, and consequently peak runoff, occur.
Mélykúti, Bence; Burrage, Kevin; Zygalakis, Konstantinos C.
2010-01-01
The Chemical Langevin Equation (CLE), which is a stochastic differential equation driven by a multidimensional Wiener process, acts as a bridge between the discrete stochastic simulation algorithm and the deterministic reaction rate equation when
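The CLE's bridging role can be illustrated on the simplest reaction network. For birth-death kinetics, 0 -> X (rate k1) and X -> 0 (rate k2*x), each reaction channel contributes a drift term a_j and a noise term sqrt(a_j) dW_j; the sketch below integrates this with Euler-Maruyama (parameters illustrative; the stationary mean is k1/k2 = 100):

```python
import math
import random

def cle_birth_death(k1=50.0, k2=0.5, x0=0.0, dt=0.01, n=5000, seed=3):
    """Euler-Maruyama for the CLE of 0 -> X (rate k1), X -> 0 (rate k2*x)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n):
        a1, a2 = k1, k2 * max(x, 0.0)       # propensities (clip negatives)
        x += ((a1 - a2) * dt
              + math.sqrt(a1 * dt) * rng.gauss(0.0, 1.0)
              - math.sqrt(a2 * dt) * rng.gauss(0.0, 1.0))
    return x

xs = [cle_birth_death(seed=s) for s in range(30)]
mean_x = sum(xs) / len(xs)
```

Dropping the noise terms recovers the deterministic reaction rate equation; replacing the Gaussian increments with discrete reaction events recovers the stochastic simulation algorithm, which is exactly the bridge the abstract describes.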
Drift-Implicit Multi-Level Monte Carlo Tau-Leap Methods for Stochastic Reaction Networks
Ben Hammouda, Chiheb
2015-01-01
...-space and deterministic ones. These stochastic models constitute the theory of stochastic reaction networks (SRNs). Furthermore, in some cases, the dynamics of fast and slow time scales can be well separated, and this is characterized by what is called stiffness.
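The tau-leap scheme named in the title can be sketched in a few lines: over each leap of length tau, every reaction channel fires a Poisson-distributed number of times at its frozen propensity. The birth-death network and rates below are illustrative (stationary mean k1/k2 = 100):

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method; adequate for the small leap means used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap(k1=10.0, k2=0.1, x0=0, tau=0.05, steps=2000, seed=4):
    """Explicit tau-leaping for 0 -> X (rate k1), X -> 0 (rate k2*x)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        births = poisson(rng, k1 * tau)           # channel 1 firings
        deaths = poisson(rng, k2 * x * tau)       # channel 2 firings
        x = max(x + births - deaths, 0)           # guard against negatives
    return x

xs = [tau_leap(seed=s) for s in range(20)]
mean_x = sum(xs) / len(xs)
```

The drift-implicit variants studied in the thesis replace this explicit update with an implicit one precisely to cope with the stiffness described above.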
Optimal power flow: a bibliographic survey I. Formulations and deterministic methods
Energy Technology Data Exchange (ETDEWEB)
Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [University of Jyvaskyla, Department of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)
2012-09-15
Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey (this article) provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)
Deterministic computation of functional integrals
International Nuclear Information System (INIS)
Lobanov, Yu.Yu.
1995-09-01
A new method of numerical integration in functional spaces is described. The method is based on a rigorous definition of the functional integral in a complete separable metric space and on approximation formulas which we constructed for this kind of integral. The method is applicable to the solution of some partial differential equations and to the calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required, nor are simplifying assumptions such as semi-classical or mean-field approximations, collective excitations, or the introduction of ''short-time'' propagators necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by the computation of an ''ordinary'' (Riemannian) integral of low dimension, thus allowing the use of preferable deterministic algorithms (normally Gaussian quadratures) rather than the traditional stochastic (Monte Carlo) methods commonly used for this problem. Results of applying the method to the computation of the Green function of the Schroedinger equation in imaginary time, as well as a study of some models of Euclidean quantum mechanics, are presented. Comparison with the results of other authors shows that our method gives a significant (order of magnitude) economy of computer time and memory versus other known methods while providing results with the same or better accuracy. The functional measure of the Gaussian type is considered, and some of its particular cases, namely the conditional Wiener measure in quantum statistical mechanics and the functional measure in a Schwartz distribution space in two-dimensional quantum field theory, are studied in detail. Numerical examples demonstrating the...
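The payoff claimed here, deterministic quadrature replacing Monte Carlo once the problem reduces to an ordinary low-dimensional integral, is easy to demonstrate in one dimension. The comparison below (not from the paper) pits a 5-point Gauss-Legendre rule against plain Monte Carlo on the integral of exp(x) over [0, 1]:

```python
import math
import random

# 5-point Gauss-Legendre nodes and weights on [-1, 1] (standard constants)
NODES = [-0.9061798459, -0.5384693101, 0.0, 0.5384693101, 0.9061798459]
WEIGHTS = [0.2369268851, 0.4786286705, 0.5688888889, 0.4786286705, 0.2369268851]

def gauss(f, a, b):
    """Deterministic quadrature: exact for polynomials up to degree 9."""
    mid, half = (a + b) / 2, (b - a) / 2
    return half * sum(w * f(mid + half * t) for w, t in zip(WEIGHTS, NODES))

def monte_carlo(f, a, b, n=10000, seed=5):
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

exact = math.e - 1                       # integral of exp over [0, 1]
g_err = abs(gauss(math.exp, 0, 1) - exact)
mc_err = abs(monte_carlo(math.exp, 0, 1) - exact)
```

Five deterministic evaluations beat ten thousand random ones on a smooth integrand, which is the economy of computer time the abstract reports, writ small.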
Research on nonlinear stochastic dynamical price model
International Nuclear Information System (INIS)
Li Jiaorui; Xu Wei; Xie Wenxian; Ren Zhengzheng
2008-01-01
In consideration of the many uncertain factors existing in economic systems, a nonlinear stochastic dynamical price model subjected to Gaussian white noise excitation is proposed, based on the deterministic model. A one-dimensional averaged Ito stochastic differential equation for the model is derived using the stochastic averaging method and applied to investigate the stability of the trivial solution and the first-passage failure of the stochastic price model. The stochastic price model and the methods presented in this paper are verified by numerical studies.
Linear stochastic neutron transport theory
International Nuclear Information System (INIS)
Lewins, J.
1978-01-01
A new and direct derivation of the Bell-Pal fundamental equation for (low power) neutron stochastic behaviour in the Boltzmann continuum model is given. The development includes correlation of particle emission direction in induced and spontaneous fission. This leads to generalizations of the backward and forward equations for the mean and variance of neutron behaviour. The stochastic importance for neutron transport theory is introduced and related to the conventional deterministic importance. Defining equations and moment equations are derived and shown to be related to the backward fundamental equation with the detector distribution of the operational definition of stochastic importance playing the role of an adjoint source. (author)
RES: Regularized Stochastic BFGS Algorithm
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the eigenvalues of the sample functions' Hessians are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
Energy Technology Data Exchange (ETDEWEB)
Jammes, Ch
1997-11-28
The aim of this work is to create and validate, theoretically and experimentally, a calculation route for a thermal irradiation reactor: the research reactor of the University of Strasbourg, which presents all the characteristics of this reactor type: compact and heterogeneous core, slab-type fuel with high 235-uranium enrichment. This calculation route is based on the first use of the following two modern transport methods: the TDT method and the Monte Carlo method. The former, programmed within the APOLLO2 code, is a two-dimensional collision probabilities method. The latter, used by the TRIPOLI4 code, is a stochastic method. Both can be applied to complex geometries. After a few theoretical reminders about transport codes, a set of integral experiments performed within the research reactor of the University of Strasbourg is described; one of them was performed for this study. At the beginning of the theoretical part, significant errors are shown to arise from a calculation route based on homogenization, condensation and the diffusion approximation. An extensive comparison between the discrete ordinates method and the TDT method shows that the use of the TDT method is relevant for the studied reactor; the treatment of axial leakage is its only disadvantage. Therefore, the use of the TRIPOLI4 code is recommended for a more accurate study of leakage within a reflector. By means of the experimental data, the ability of our calculation route is confirmed for essential neutronics questions such as critical mass determination, power distribution and fuel management. (author)
Langevin equation with the deterministic algebraically correlated noise
Energy Technology Data Exchange (ETDEWEB)
Ploszajczak, M. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France)]; Srokowski, T. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Institute of Nuclear Physics, Cracow (Poland)]
1995-12-31
Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author). 58 refs.
Deterministic Diffusion in Delayed Coupled Maps
International Nuclear Information System (INIS)
Sozanski, M.
2005-01-01
Coupled Map Lattices (CML) are discrete time and discrete space dynamical systems used for modeling phenomena arising in nonlinear systems with many degrees of freedom. In this work, the dynamical and statistical properties of a modified version of the CML with global coupling are considered. The main modification of the model is the extension of the coupling over a set of local map states corresponding to different time iterations. The model with both stochastic and chaotic one-dimensional local maps is studied. Deterministic diffusion in the CML under variation of a control parameter is analyzed for unimodal maps. As a main result, simple relations between statistical and dynamical measures are found for the model and the cases where substituting nonlinear lattices with simpler processes is possible are presented. (author)
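A minimal demonstration of deterministic diffusion, far simpler than the CML studied here, is an ensemble of walkers whose steps come from a chaotic map rather than a random number generator. The link it exhibits between a dynamical quantity (the map's chaos) and a statistical one (diffusive growth of variance) is the kind of relation the paper investigates; the construction below is illustrative, not the paper's model:

```python
def step_sign(x):
    return 1 if x >= 0.5 else -1

def diffuse(n_steps, n_walkers=2000):
    """Each walker carries a chaotic internal state (logistic map, r=4)
    whose sign drives its walk. No random numbers are used anywhere."""
    states = [(i + 0.5) / n_walkers for i in range(n_walkers)]
    pos = [0] * n_walkers
    for _ in range(n_steps):
        states = [4.0 * x * (1.0 - x) for x in states]
        pos = [p + step_sign(x) for p, x in zip(pos, states)]
    m = sum(pos) / n_walkers
    return sum((p - m) ** 2 for p in pos) / n_walkers  # ensemble variance
```

A roughly linear growth of the ensemble variance with the step count is the statistical signature of diffusion, here produced by purely deterministic micro-dynamics.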
Moix, Jeremy M.; Cao, Jianshu
2013-10-01
The hierarchical equations of motion technique has found widespread success as a tool to generate the numerically exact dynamics of non-Markovian open quantum systems. However, its application to low temperature environments remains a serious challenge due to the need for a deep hierarchy that arises from the Matsubara expansion of the bath correlation function. Here we present a hybrid stochastic hierarchical equation of motion (sHEOM) approach that alleviates this bottleneck and leads to a numerical cost that is nearly independent of temperature. Additionally, the sHEOM method generally converges with fewer hierarchy tiers allowing for the treatment of larger systems. Benchmark calculations are presented on the dynamics of two level systems at both high and low temperatures to demonstrate the efficacy of the approach. Then the hybrid method is used to generate the exact dynamics of systems that are nearly impossible to treat by the standard hierarchy. First, exact energy transfer rates are calculated across a broad range of temperatures revealing the deviations from the Förster rates. This is followed by computations of the entanglement dynamics in a system of two qubits at low temperature spanning the weak to strong system-bath coupling regimes.
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-12-01
This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
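The role derivative data plays in the DUA method can be sketched on a toy model (not the borehole problem of the paper): propagate input standard deviations through analytic sensitivities, then check against a brute-force Monte Carlo estimate:

```python
import math
import random

def flow(r, mu):
    """Toy stand-in for the borehole model: rate = r / mu."""
    return r / mu

def dua_std(r0, mu0, sr, smu):
    """First-order deterministic propagation via analytic sensitivities:
    Var[f] ~ (df/dr)^2 sr^2 + (df/dmu)^2 smu^2 -- derivative data only,
    no sampling of the model."""
    dfdr = 1.0 / mu0
    dfdmu = -r0 / mu0 ** 2
    return math.sqrt((dfdr * sr) ** 2 + (dfdmu * smu) ** 2)

def mc_std(r0, mu0, sr, smu, n=40000, seed=6):
    rng = random.Random(seed)
    vals = [flow(rng.gauss(r0, sr), rng.gauss(mu0, smu)) for _ in range(n)]
    m = sum(vals) / n
    return math.sqrt(sum((v - m) ** 2 for v in vals) / n)

d = dua_std(10.0, 2.0, 0.5, 0.1)    # one evaluation's worth of derivatives
m = mc_std(10.0, 2.0, 0.5, 0.1)     # forty thousand model runs
```

For this mildly nonlinear model the two standard deviations agree closely, which is the run-count economy the paper argues for.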
Streamflow disaggregation: a nonlinear deterministic approach
Directory of Open Access Journals (Sweden)
B. Sivakumar
2004-01-01
Full Text Available This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space for representing the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreement for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. Further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and a small number of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
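The two steps of the approach, phase-space reconstruction and local (nearest-neighbor) approximation, can be sketched in the simpler setting of one-step prediction; disaggregation applies the same machinery with coarse-scale states mapped to fine-scale values. The sine test signal and all parameters below are illustrative:

```python
import math

def embed(series, dim, delay=1):
    """Time-delay embedding: reconstruct the phase space of a scalar series."""
    pts = []
    for i in range((dim - 1) * delay, len(series)):
        pts.append(tuple(series[i - j * delay] for j in range(dim)))
    return pts

def nn_predict(series, dim=3, k=4):
    """Local approximation: average what followed the k nearest
    neighbours of the current reconstructed state."""
    pts = embed(series, dim)
    query = pts[-1]
    cands = list(enumerate(pts[:-1]))   # states with a known successor
    cands.sort(key=lambda ip: sum((a - b) ** 2 for a, b in zip(ip[1], query)))
    offset = dim - 1                    # series index of pts[0]
    succ = [series[offset + i + 1] for i, _ in cands[:k]]
    return sum(succ) / k

sig = [math.sin(0.3 * t) for t in range(200)]
pred = nn_predict(sig[:-1])
actual = sig[-1]
```

Low embedding dimensions and few neighbours suffice here for the same reason the paper reports: the underlying dynamics are deterministic and low-dimensional.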
Exact and Approximate Stochastic Simulation of Intracellular Calcium Dynamics
Directory of Open Access Journals (Sweden)
Nicolas Wieder
2011-01-01
pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships in the spectrum ranging from stochastic to deterministic algorithms.
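At the exact end of the spectrum the overview covers, the canonical method is Gillespie's direct algorithm. A minimal sketch for a birth-death system, 0 -> X (rate k1) and X -> 0 (rate k2*x), with illustrative parameters (stationary mean k1/k2 = 100):

```python
import random

def ssa_birth_death(k1=20.0, k2=0.2, t_end=100.0, seed=7):
    """Gillespie direct method: draw the exponential waiting time from
    the total propensity, then choose which reaction fired."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    while True:
        a1, a2 = k1, k2 * x          # propensities of birth and death
        a0 = a1 + a2                 # a1 > 0, so a0 never vanishes
        t += rng.expovariate(a0)
        if t > t_end:
            return x
        if rng.random() * a0 < a1:
            x += 1
        else:
            x -= 1

xs = [ssa_birth_death(seed=s) for s in range(20)]
mean_x = sum(xs) / len(xs)
```

Every reaction event is simulated individually, which is what makes the method exact and also what makes the approximate and deterministic alternatives in the overview attractive for large systems.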
Optimal Control of Hybrid Systems in Air Traffic Applications
Kamgarpour, Maryam
Growing concerns over the scalability of air traffic operations, air transportation fuel emissions and prices, as well as the advent of communication and sensing technologies motivate improvements to the air traffic management system. To address such improvements, in this thesis a hybrid dynamical model as an abstraction of the air traffic system is considered. Wind and hazardous weather impacts are included using a stochastic model. This thesis focuses on the design of algorithms for verification and control of hybrid and stochastic dynamical systems and the application of these algorithms to air traffic management problems. In the deterministic setting, a numerically efficient algorithm for optimal control of hybrid systems is proposed based on extensions of classical optimal control techniques. This algorithm is applied to optimize the trajectory of an Airbus 320 aircraft in the presence of wind and storms. In the stochastic setting, the verification problem of reaching a target set while avoiding obstacles (reach-avoid) is formulated as a two-player game to account for external agents' influence on system dynamics. The solution approach is applied to air traffic conflict prediction in the presence of stochastic wind. Due to the uncertainty in forecasts of the hazardous weather, and hence the unsafe regions of airspace for aircraft flight, the reach-avoid framework is extended to account for stochastic target and safe sets. This methodology is used to maximize the probability of the safety of aircraft paths through hazardous weather. Finally, the problem of modeling and optimization of arrival air traffic and runway configuration in dense airspace subject to stochastic weather data is addressed. This problem is formulated as a hybrid optimal control problem and is solved with a hierarchical approach that decouples safety and performance. As illustrated with this problem, the large scale of air traffic operations motivates future work on the efficient
Threshold Dynamics of a Stochastic Chemostat Model with Two Nutrients and One Microorganism
Directory of Open Access Journals (Sweden)
Jian Zhang
2017-01-01
Full Text Available A new stochastic chemostat model with two substitutable nutrients and one microorganism is proposed and investigated. Firstly, for the corresponding deterministic model, the threshold for extinction and permanence of the microorganism is obtained by analyzing the stability of the equilibria. Then, for the stochastic model, the threshold of the stochastic chemostat for extinction and permanence of the microorganism is explored. The difference between the thresholds of the deterministic and stochastic models shows that a large stochastic disturbance can affect the persistence of the microorganism and is harmful to its cultivation. To illustrate this phenomenon, we give some computer simulations with different intensities of the stochastic noise disturbance.
Height-Deterministic Pushdown Automata
DEFF Research Database (Denmark)
Nowotka, Dirk; Srba, Jiri
2007-01-01
We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class of regular languages and still closed under boolean language operations, are considered. Several such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata...
Stochastic processes in cell biology
Bressloff, Paul C
2014-01-01
This book develops the theory of continuous and discrete stochastic processes within the context of cell biology. A wide range of biological topics are covered, including normal and anomalous diffusion in complex cellular environments, stochastic ion channels and excitable systems, stochastic calcium signaling, molecular motors, intracellular transport, signal transduction, bacterial chemotaxis, robustness in gene networks, genetic switches and oscillators, cell polarization, polymerization, cellular length control, and branching processes. The book also provides a pedagogical introduction to the theory of stochastic processes – Fokker–Planck equations, stochastic differential equations, master equations and jump Markov processes, diffusion approximations and the system size expansion, first passage time problems, stochastic hybrid systems, reaction-diffusion equations, exclusion processes, WKB methods, martingales and branching processes, stochastic calculus, and numerical methods. This text is primarily...
Stochastic Analysis : A Series of Lectures
Dozzi, Marco; Flandoli, Franco; Russo, Francesco
2015-01-01
This book presents in thirteen refereed survey articles an overview of modern activity in stochastic analysis, written by leading international experts. The topics addressed include stochastic fluid dynamics and regularization by noise of deterministic dynamical systems; stochastic partial differential equations driven by Gaussian or Lévy noise, including the relationship between parabolic equations and particle systems, and wave equations in a geometric framework; Malliavin calculus and applications to stochastic numerics; stochastic integration in Banach spaces; porous media-type equations; stochastic deformations of classical mechanics and Feynman integrals and stochastic differential equations with reflection. The articles are based on short courses given at the Centre Interfacultaire Bernoulli of the Ecole Polytechnique Fédérale de Lausanne, Switzerland, from January to June 2012. They offer a valuable resource not only for specialists, but also for other researchers and Ph.D. students in the fields o...
Deterministic methods in radiation transport
International Nuclear Information System (INIS)
Rice, A.F.; Roussin, R.W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4-5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community
Barnawi, Abdulwasa Bakr
Hybrid power generation systems and distributed generation technology are attracting more investment due to the growing demand for energy and the increasing awareness of emissions and their environmental impacts, such as global warming and pollution. The price fluctuation of crude oil is an additional reason for the leading oil-producing countries to consider renewable resources as an alternative. Saudi Arabia, the top oil-exporting country in the world, announced the "Saudi Arabia Vision 2030", which targets generating 9.5 GW of electricity from renewable resources. Two of the most promising renewable technologies are wind turbines (WT) and photovoltaic cells (PV). The integration or hybridization of photovoltaics and wind turbines with battery storage leads to higher adequacy and redundancy for both autonomous and grid-connected systems. This study presents a method for optimal generation unit planning by installing a proper number of solar cells, wind turbines, and batteries in such a way that the net present value (NPV) is minimized while the overall system redundancy and adequacy are maximized. A new renewable fraction technique (RFT) is used to perform the generation unit planning. RFT was tested and validated with particle swarm optimization and HOMER Pro under the same conditions and environment. Randomness and uncertainties in renewable resources and load are considered. Both autonomous and grid-connected system designs were adopted in the optimal generation unit planning process. An uncertainty factor was designed and incorporated in both designs. In the autonomous hybrid system design model, a strategy including an additional amount of operating reserve as a percent of the hourly load was adopted to deal with resource uncertainty, since the battery storage system is the only backup. In the grid-connected hybrid system design model, demand response was incorporated to overcome the impact of
International Nuclear Information System (INIS)
Boustani, Ehsan; Amirkabir University of Technology, Tehran; Khakshournia, Samad
2016-01-01
In this paper two different computational approaches, a deterministic and a stochastic one, were used for calculation of the control rod worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies some efforts were made, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.
Energy Technology Data Exchange (ETDEWEB)
Boustani, Ehsan [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.; Khakshournia, Samad [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.
2016-12-15
In this paper two different computational approaches, a deterministic and a stochastic one, were used for calculation of the control rod worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies some efforts were made, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.
Stochastic ice stream dynamics.
Mantelli, Elisa; Bertagni, Matteo Bernard; Ridolfi, Luca
2016-08-09
Ice streams are narrow corridors of fast-flowing ice that constitute the arterial drainage network of ice sheets. Therefore, changes in ice stream flow are key to understanding paleoclimate, sea level changes, and rapid disintegration of ice sheets during deglaciation. The dynamics of ice flow are tightly coupled to the climate system through atmospheric temperature and snow recharge, which are known to exhibit stochastic variability. Here we focus on the interplay between stochastic climate forcing and ice stream temporal dynamics. Our work demonstrates that realistic climate fluctuations are able to (i) induce the coexistence of dynamic behaviors that would be incompatible in a purely deterministic system and (ii) drive ice stream flow away from the regime expected in a steady climate. We conclude that environmental noise appears to be crucial to interpreting the past behavior of ice sheets, as well as to predicting their future evolution.
Dynamic stochastic optimization
Ermoliev, Yuri; Pflug, Georg
2004-01-01
Uncertainties and changes are pervasive characteristics of modern systems involving interactions between humans, economics, nature and technology. These systems are often too complex to allow for precise evaluations and, as a result, the lack of proper management (control) may create significant risks. In order to develop robust strategies we need approaches which explicitly deal with uncertainties, risks and changing conditions. One rather general approach is to characterize (explicitly or implicitly) uncertainties by objective or subjective probabilities (measures of confidence or belief). This leads us to stochastic optimization problems which can rarely be solved by using the standard deterministic optimization and optimal control methods. In stochastic optimization the accent is on problems with a large number of decision and random variables, and consequently the focus of attention is directed to efficient solution procedures rather than to (analytical) closed-form solutions. Objective an...
Stochastic volatility and stochastic leverage
DEFF Research Database (Denmark)
Veraart, Almut; Veraart, Luitgard A. M.
This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic treatment of stochastic leverage and propose to model the stochastic leverage effect explicitly, e.g. by means of a linear transformation of a Jacobi process. Such models are both analytically tractable and allow for a direct economic interpretation. In particular, we propose two new stochastic volatility models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk-neutral world by focusing on implied volatilities generated by option prices derived from our new...
Are deterministic methods suitable for short term reserve planning?
International Nuclear Information System (INIS)
Voorspools, Kris R.; D'haeseleer, William D.
2005-01-01
Although deterministic methods for establishing minutes reserve (such as the N-1 reserve or the percentage reserve) ignore the stochastic nature of reliability issues, they are commonly used in energy modelling as well as in practical applications. In order to check the validity of such methods, two test procedures are developed. The first checks whether the N-1 reserve is a logical fixed value for minutes reserve. The second investigates whether deterministic methods can realise a stable reliability that is independent of demand. In both evaluations, the loss-of-load expectation is used as the objective stochastic criterion. The first test shows no particular reason to choose the largest unit as minutes reserve. The expected jump in reliability, resulting in low reliability for reserve margins lower than the largest unit and high reliability above, is not observed. The second test shows that neither the N-1 reserve nor the percentage reserve method provides a stable reliability level that is independent of power demand. For the N-1 reserve, the reliability increases with decreasing maximum demand. For the percentage reserve, the reliability decreases with decreasing demand. The answer to the question raised in the title, therefore, has to be that probability-based methods are to be preferred over deterministic methods.
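The loss-of-load expectation used as the stochastic criterion above can be estimated directly by sampling independent forced outages of the generating units. A minimal Monte Carlo sketch follows; the five-unit system, capacities, and forced-outage rates are purely illustrative assumptions, not data from the study:

```python
import random

def lole_monte_carlo(units, demand, n_samples=20000, seed=1):
    """Estimate the probability that available capacity falls short of
    demand, by sampling independent unit outages.

    units: list of (capacity_MW, forced_outage_rate) pairs.
    """
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(n_samples):
        # A unit is available when it does not suffer a forced outage.
        available = sum(cap for cap, fo in units if rng.random() > fo)
        if available < demand:
            shortfalls += 1
    return shortfalls / n_samples

# Hypothetical system: the largest unit is 400 MW, so the N-1 rule would
# hold 400 MW of reserve regardless of demand.
units = [(400, 0.05), (300, 0.04), (300, 0.04), (200, 0.03), (200, 0.03)]
risk = {d: lole_monte_carlo(units, d) for d in (800, 900, 1000)}
```

Because the same seed produces the same outage draws for every demand level, the estimated risk is monotone in demand, which is exactly the demand dependence the test procedures above probe.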
Heart rate variability as determinism with jump stochastic parameters.
Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M
2013-08-01
We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short-term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump-process noise term as a model which can qualitatively capture this behavior of low-dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one-parameter family of deterministic systems.
A Nucleolus for Stochastic Cooperative Games
Suijs, J.P.M.
1996-01-01
This paper extends the definition of the nucleolus to stochastic cooperative games, that is, to cooperative games with random payoffs to the coalitions. It is shown that the nucleolus is nonempty and that it belongs to the core whenever the core is nonempty. Furthermore, it is shown for a particular class of stochastic cooperative games that the nucleolus can be determined by calculating the traditional nucleolus introduced by Schmeidler (1969) of a specific deterministic cooperative game.
A stochastic model of enzyme kinetics
Stefanini, Marianne; Newman, Timothy; McKane, Alan
2003-10-01
Enzyme kinetics is generally modeled by deterministic rate equations, and in the simplest case leads to the well-known Michaelis-Menten equation. It is plausible that stochastic effects will play an important role at low enzyme concentrations. We have addressed this by constructing a simple stochastic model which can be exactly solved in the steady-state. Throughout a wide range of parameter values Michaelis-Menten dynamics is replaced by a new and simple theoretical result.
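As a concrete illustration of such a stochastic treatment, here is a hedged sketch of Gillespie's exact stochastic simulation algorithm applied to the enzyme scheme E + S ⇌ C → E + P, the stochastic counterpart of the Michaelis-Menten mechanism. The rate constants and copy numbers are illustrative choices, not values from the paper:

```python
import math
import random

def gillespie_enzyme(E, S, k1=0.01, k2=0.1, k3=0.1, t_end=50.0, seed=0):
    """Exact stochastic simulation (Gillespie SSA) of E + S <-> C -> E + P.
    Returns the final copy numbers (E, S, C, P)."""
    rng = random.Random(seed)
    C = P = 0
    t = 0.0
    while t < t_end:
        a1, a2, a3 = k1 * E * S, k2 * C, k3 * C   # reaction propensities
        a0 = a1 + a2 + a3
        if a0 == 0.0:
            break                                  # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0    # exponential waiting time
        r = rng.random() * a0                      # pick reaction by propensity
        if r < a1:                                 # binding: E + S -> C
            E, S, C = E - 1, S - 1, C + 1
        elif r < a1 + a2:                          # unbinding: C -> E + S
            E, S, C = E + 1, S + 1, C - 1
        else:                                      # catalysis: C -> E + P
            E, C, P = E + 1, C - 1, P + 1
    return E, S, C, P
```

At low enzyme copy numbers, repeated runs of this simulation fluctuate visibly around the deterministic Michaelis-Menten trajectory, which is the regime the abstract highlights.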
Deterministic and Stochastic Semi-Empirical Transient Tire Models
Umsrithong, Anake
2012-01-01
The tire is one of the most important components of the vehicle. It has many functions, such as supporting the load of the vehicle, transmitting the forces which drive, brake and guide the vehicle, and acting as the secondary suspension to absorb the effect of road irregularities before transmitting the forces to the vehicle suspension. A tire is a complex reinforced rubber composite air container. The structure of the tire is very complex. It consists of several layers of synthetic polymer, ...
CHAOS AND STOCHASTICITY IN DETERMINISTICALLY GENERATED MULTIFRACTAL MEASURES. (R824780)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Stochastic and Deterministic Fluctuations in Stimulated Brillouin Scattering
1990-10-01
Partial fulfillment of the requirements for the degree Doctor of Philosophy, supervised by Professor Robert W. Boyd. ...He earned a Master of Science degree in Optics in February of 1985. His thesis work has been conducted under the supervision of Professor Robert W. Boyd... by R. W. Boyd, M. G. Raymer, and L. M. Narducci; H. Haken, "Analogy between higher instabilities in fluids and lasers," Phys. Lett. 53A, 77 (1975
Rached, Nadhir B.
2013-12-01
The Monte Carlo forward Euler method with uniform time stepping is the standard technique to compute an approximation of the expected payoff of a solution of an Itô SDE. For a given accuracy requirement TOL, the complexity of this technique for well-behaved problems, that is, the amount of computational work to solve the problem, is O(TOL^-3). A new hybrid adaptive Monte Carlo forward Euler algorithm for SDEs with non-smooth coefficients and low-regularity observables is developed in this thesis. This adaptive method is based on the derivation of a new error expansion with computable leading-order terms. The basic idea of the new expansion is the use of a mixture of prior information to determine the weight functions and posterior information to compute the local error. In a number of numerical examples the superior efficiency of the hybrid adaptive algorithm over the standard uniform time stepping technique is verified. When a non-smooth binary payoff with either GBM or drift-singularity type of SDEs is considered, the new adaptive method achieves the same complexity as the uniform discretization does with smooth problems. Moreover, the new algorithm is extended to the MLMC forward Euler setting, which reduces the complexity from O(TOL^-3) to O(TOL^-2 (log(TOL))^2). For the binary option case with the same type of Itô SDEs, the hybrid adaptive MLMC forward Euler recovers the standard multilevel computational cost O(TOL^-2 (log(TOL))^2). When considering a higher-order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our newly constructed adaptive MLMC forward Euler method, which can be easily adapted to this setting. Similarly, the expected complexity O(TOL^-2 (log(TOL))^2) is reached for the multidimensional case and verified numerically.
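The MLMC complexity gain rests on coupling coarse and fine Euler paths that share the same Brownian increments, so that level corrections have small variance. The following sketch shows the standard uniform-time-step MLMC estimator of E[X_T] for geometric Brownian motion; it is the textbook construction, not the adaptive algorithm of the thesis, and all parameter values are illustrative:

```python
import random

def mlmc_gbm(L, M0, mu=0.05, sigma=0.2, X0=1.0, T=1.0, seed=0):
    """Multilevel Monte Carlo with Euler-Maruyama for dX = mu*X dt + sigma*X dW.
    Level l uses 2**l uniform steps; the coarse path on level l reuses the
    fine path's Brownian increments summed in pairs."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(L + 1):
        n_f = 2 ** l                      # fine steps on level l
        dt_f = T / n_f
        n_samples = M0 * 4 ** (L - l)     # more samples on cheap coarse levels
        acc = 0.0
        for _ in range(n_samples):
            Xf = Xc = X0
            dWc = 0.0
            for k in range(n_f):
                dW = rng.gauss(0.0, dt_f ** 0.5)
                Xf += mu * Xf * dt_f + sigma * Xf * dW
                dWc += dW
                if l > 0 and k % 2 == 1:  # coarse path: double step, summed noise
                    Xc += mu * Xc * 2 * dt_f + sigma * Xc * dWc
                    dWc = 0.0
            # Level 0 contributes E[P_0]; higher levels the correction E[P_l - P_{l-1}].
            acc += Xf - (Xc if l > 0 else 0.0)
        est += acc / n_samples
    return est
```

For GBM the exact value is E[X_T] = X0·e^(mu·T) ≈ 1.051 with these parameters, which the telescoping sum of level estimates should approach as levels and sample counts grow.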
Stochastic evolution of cosmological parameters in the early universe
Indian Academy of Sciences (India)
We develop a stochastic formulation of cosmology in the early universe, after considering the scatter in the redshift-apparent magnitude diagram in the early epochs as observational evidence for the non-deterministic evolution of the early universe. We consider the stochastic evolution of the density parameter in the early ...
Automated Flight Routing Using Stochastic Dynamic Programming
Ng, Hok K.; Morando, Alex; Grabbe, Shon
2010-01-01
Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm, based on stochastic dynamic programming, that reroutes flights in the presence of winds, en-route convective weather, and congested airspace. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed for calculating current and future airspace demand. The optimal routes minimize the total expected travel time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have a smaller deviation probability than their deterministic counterparts when both have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields at all severity levels, while the deterministic algorithm only accounts for convective weather systems exceeding a specified severity level. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight time, and both spend about 1% of travel time crossing congested en-route sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
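The backward recursion behind such stochastic-DP rerouting can be sketched on a toy staged route graph. The scenario weights and the "storm" penalty below are invented for illustration and bear no relation to the study's disturbance model:

```python
def stochastic_route(stages, scenarios):
    """Backward dynamic programming over a staged route graph.
    stages[t][node] = list of (next_node, base_cost) arcs leaving node at stage t.
    scenarios = list of (probability, extra_cost_fn); the expected arc cost is
    sum(p * (base + extra(t, node, nxt))). Returns cost-to-go V and a policy."""
    T = len(stages)
    V = [dict() for _ in range(T + 1)]      # V[T] empty: terminal cost 0
    policy = [dict() for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for node, arcs in stages[t].items():
            best, arg = float("inf"), None
            for nxt, base in arcs:
                exp_cost = sum(p * (base + extra(t, node, nxt))
                               for p, extra in scenarios)
                total = exp_cost + V[t + 1].get(nxt, 0.0)
                if total < best:
                    best, arg = total, nxt
            V[t][node] = best
            policy[t][node] = arg
    return V, policy

# Two-stage toy network A -> {B, C} -> D, with a 30% "storm" scenario
# that adds 3.0 to any arc ending at B.
stages = [
    {"A": [("B", 1.0), ("C", 2.0)]},
    {"B": [("D", 1.0)], "C": [("D", 0.5)]},
]
scenarios = [(0.7, lambda t, u, v: 0.0),
             (0.3, lambda t, u, v: 3.0 if v == "B" else 0.0)]
V, policy = stochastic_route(stages, scenarios)
# Expected cost via B: 1.9 + 1.0 = 2.9; via C: 2.0 + 0.5 = 2.5 -> route via C.
```

The deterministic weather-avoidance analogue would simply drop the scenario averaging and plan against a single thresholded weather field.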
Deterministic mean-variance-optimal consumption and investment
DEFF Research Database (Denmark)
Christiansen, Marcus; Steffensen, Mogens
2013-01-01
In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...
Winkelmann, Stefanie; Schütte, Christof
2017-09-01
Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass-action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relations between them. We derive a novel general description of such hybrid models that allows expressing various forms by one type of equation. We also examine to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small-volume approximations.
Winkelmann, Stefanie; Schütte, Christof
2017-09-21
Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass-action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relations between them. We derive a novel general description of such hybrid models that allows expressing various forms by one type of equation. We also examine to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small-volume approximations.
Deterministic indexing for packed strings
DEFF Research Database (Denmark)
Bille, Philip; Gørtz, Inge Li; Skjoldjensen, Frederik Rye
2017-01-01
Given a string S of length n, the classic string indexing problem is to preprocess S into a compact data structure that supports efficient subsequent pattern queries. In the deterministic variant the goal is to solve the string indexing problem without any randomization (at preprocessing time or query time). In the packed variant the strings are stored with several characters in a single word, giving us the opportunity to read multiple characters simultaneously. Our main result is a new string index in the deterministic and packed setting. Given a packed string S of length n over an alphabet σ...
Rached, Nadhir B.
2014-01-06
A new hybrid adaptive MC forward Euler algorithm for SDEs with singular coefficients and non-smooth observables is developed. This adaptive method is based on the derivation of a new error expansion with computable leading-order terms. When a non-smooth binary payoff is considered, the new adaptive method achieves the same complexity as the uniform discretization does with smooth problems. Moreover, the new algorithm is extended to the multilevel Monte Carlo (MLMC) forward Euler setting, which reduces the complexity from O(TOL^-3) to O(TOL^-2 (log(TOL))^2). For the binary option case, it recovers the standard multilevel computational cost O(TOL^-2 (log(TOL))^2). When considering a higher-order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs, see [2]. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our newly constructed adaptive MLMC forward Euler method, which can be easily adapted to this setting. Similarly, the expected complexity O(TOL^-2 (log(TOL))^2) is reached for the multidimensional case and verified numerically.
Rached, Nadhir B.; Hoel, Haakon; Tempone, Raul
2014-01-01
A new hybrid adaptive MC forward Euler algorithm for SDEs with singular coefficients and non-smooth observables is developed. This adaptive method is based on the derivation of a new error expansion with computable leading-order terms. When a non-smooth binary payoff is considered, the new adaptive method achieves the same complexity as the uniform discretization does with smooth problems. Moreover, the new algorithm is extended to the multilevel Monte Carlo (MLMC) forward Euler setting, which reduces the complexity from O(TOL^-3) to O(TOL^-2 (log(TOL))^2). For the binary option case, it recovers the standard multilevel computational cost O(TOL^-2 (log(TOL))^2). When considering a higher-order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs, see [2]. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our newly constructed adaptive MLMC forward Euler method, which can be easily adapted to this setting. Similarly, the expected complexity O(TOL^-2 (log(TOL))^2) is reached for the multidimensional case and verified numerically.
Ordinal optimization and its application to complex deterministic problems
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
The intrinsic stochasticity of near-integrable Hamiltonian systems
Energy Technology Data Exchange (ETDEWEB)
Krlin, L [Ceskoslovenska Akademie Ved, Prague (Czechoslovakia). Ustav Fyziky Plazmatu
1989-09-01
Under certain conditions, the dynamics of near-integrable Hamiltonian systems appears to be stochastic. This stochasticity (intrinsic stochasticity, or deterministic chaos) is closely related to the Kolmogorov-Arnold-Moser (KAM) theorem on the stability of near-integrable multiperiodic Hamiltonian systems. The effect of intrinsic stochasticity still attracts growing attention both in theory and in various applications in contemporary physics. The paper discusses the relation of intrinsic stochasticity to modern ergodic theory and to the KAM theorem, and describes some numerical experiments on related astrophysical and high-temperature plasma problems. Some open questions are mentioned in conclusion. (author).
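The canonical toy model for this kind of intrinsic stochasticity is the Chirikov standard map, where KAM tori confine orbits for small kick strength K and break up near K ≈ 0.97, after which trajectories diffuse chaotically in momentum. A minimal iteration sketch:

```python
import math

def standard_map(theta, p, K, n):
    """Iterate the Chirikov standard map n times:
        p'     = p + K * sin(theta)   (mod 2*pi)
        theta' = theta + p'           (mod 2*pi)
    For K = 0 the map is integrable (p is conserved); for large K the
    dynamics is chaotic over most of phase space."""
    two_pi = 2 * math.pi
    orbit = []
    for _ in range(n):
        p = (p + K * math.sin(theta)) % two_pi
        theta = (theta + p) % two_pi
        orbit.append((theta, p))
    return orbit
```

Plotting orbits for several initial conditions at K = 0.5 versus K = 2.0 reproduces the familiar picture of surviving tori versus a chaotic sea.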
Nonlinear Markov processes: Deterministic case
International Nuclear Information System (INIS)
Frank, T.D.
2008-01-01
Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basin of attraction of stationary distribution
Optimal Stochastic Modeling and Control of Flexible Structures
1988-09-01
[1.37] and McLane [1.18] considered multivariable systems and derived their optimal control characteristics. Kleinman, Gorman and Zaborsky considered... Leondes [1.72, 1.73] studied various aspects of multivariable linear stochastic, discrete-time systems that are partly deterministic and partly stochastic... June 1966. 1.8. A.V. Balakrishnan, Applied Functional Analysis, 2nd ed., New York, N.Y.: Springer-Verlag, 1981. 1.9. Peter S. Maybeck, Stochastic
Characterizing economic trends by Bayesian stochastic model specification search
Grassi, Stefano; Proietti, Tommaso
2010-01-01
We apply a recently proposed Bayesian model selection technique, known as stochastic model specification search, for characterising the nature of the trend in macroeconomic time series. We illustrate that the methodology can be quite successfully applied to discriminate between stochastic and deterministic trends. In particular, we formulate autoregressive models with stochastic trend components and decide whether a specific feature of the series, i.e. the underlying level and/or the rate...
Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series
Zhang, Zhihua
2014-01-01
Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula for the approximation error of hyperbolic cross truncations for bivariate stochastic Fourier cosine series. Moreover, we propose a kind of Fourier cosine expansion with polynomial factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842
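The point of a hyperbolic cross truncation is that the retained index set {(j, k) : (j+1)(k+1) ≤ N} has only O(N log N) members instead of the N² of a full tensor grid. A small sketch of the index-set construction follows; this threshold form is the standard textbook one, not necessarily the exact set analysed in the paper:

```python
def hyperbolic_cross(N):
    """Hyperbolic cross index set {(j, k): (j + 1) * (k + 1) <= N} for
    truncating a bivariate cosine series: O(N log N) retained terms
    versus N * N for the full tensor-product grid."""
    return [(j, k) for j in range(N) for k in range(N)
            if (j + 1) * (k + 1) <= N]

# For N = 32 the cross keeps far fewer coefficients than the 1024-term grid.
kept = hyperbolic_cross(32)
```

The count equals the divisor-sum Σ_{m=1}^{N} ⌊N/m⌋ ≈ N ln N, which is the source of the complexity saving.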
Computer Aided Continuous Time Stochastic Process Modelling
DEFF Research Database (Denmark)
Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay
2001-01-01
A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...
On the adaptivity gap of stochastic orienteering
Bansal, N.; Nagarajan, V.
2013-01-01
The input to the stochastic orienteering problem consists of a budget $B$ and metric $(V,d)$ where each vertex $v$ has a job with deterministic reward and random processing time (drawn from a known distribution). The processing times are independent across vertices. The goal is to obtain a
On the Adaptivity Gap of Stochastic Orienteering
N. Bansal (Nikhil); V. Nagarajan
2013-01-01
The input to the stochastic orienteering problem consists of a budget B and metric (V,d) where each vertex v has a job with deterministic reward and random processing time (drawn from a known distribution). The processing times are independent across vertices. The goal is to obtain a
On the adaptivity gap of stochastic orienteering
Bansal, N.; Nagarajan, V.; Lee, J.; Vygen, J.
2014-01-01
The input to the stochastic orienteering problem [14] consists of a budget B and metric (V,d) where each vertex v ∈ V has a job with a deterministic reward and a random processing time (drawn from a known distribution). The processing times are independent across vertices. The goal is to obtain a
On the adaptivity gap of stochastic orienteering
Bansal, N.; Nagarajan, V.
2015-01-01
The input to the stochastic orienteering problem (Gupta et al. in SODA, pp 1522–1538, 2012) consists of a budget B and metric (V, d) where each vertex v ∈ V has a job with a deterministic reward and a random processing time (drawn from a known distribution). The processing times are
Approximation and inference methods for stochastic biochemical kinetics—a tutorial review
International Nuclear Information System (INIS)
Schnoerr, David; Grima, Ramon; Sanguinetti, Guido
2017-01-01
Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state-of-the-art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics. (topical review)
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
International Nuclear Information System (INIS)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors
STOCHASTIC GRADIENT METHODS FOR UNCONSTRAINED OPTIMIZATION
Directory of Open Access Journals (Sweden)
Nataša Krejić
2014-12-01
Full Text Available This paper presents an overview of gradient-based methods for minimization of noisy functions. It is assumed that the objective function is either given with error terms of stochastic nature or given as a mathematical expectation. Such problems arise in the context of simulation-based optimization. The focus of this presentation is on the gradient-based Stochastic Approximation and Sample Average Approximation methods. The concept of stochastic gradient approximation of the true gradient can be successfully extended to deterministic problems. Methods of this kind are presented for data fitting and machine learning problems.
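A minimal sketch in the spirit of the Stochastic Approximation methods surveyed above: Robbins-Monro iterates with diminishing step sizes applied to a quadratic whose gradient is only available corrupted by noise. The objective, noise level and step-size schedule are illustrative assumptions, not taken from the paper.

```python
import random

def sgd_noisy_quadratic(x0=5.0, steps=2000, seed=0):
    """Minimize f(x) = (x - 2)^2 from noisy gradient evaluations only,
    using Robbins-Monro diminishing steps a_k = 1/(k+1)."""
    rng = random.Random(seed)
    x = x0
    for k in range(steps):
        grad = 2.0 * (x - 2.0) + rng.gauss(0.0, 1.0)  # stochastic gradient
        x -= (1.0 / (k + 1)) * grad
    return x

x_star = sgd_noisy_quadratic()  # should approach the minimizer x = 2
```

The diminishing steps average out the gradient noise, so the iterates converge to the minimizer even though no single gradient evaluation is exact.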
Stochasticity Modeling in Memristors
Naous, Rawan; Al-Shedivat, Maruan; Salama, Khaled N.
2015-01-01
Diverse models have been proposed over the past years to explain the behavior exhibited by memristors, the fourth fundamental circuit element. The models vary in complexity, ranging from descriptions of physical mechanisms to more generalized mathematical modeling. Nonetheless, stochasticity, a widely observed phenomenon, has been largely overlooked from the modeling perspective. This inherent variability within the operation of the memristor is a vital feature for the integration of this nonlinear device into the stochastic electronics realm of study. In this paper, experimentally observed innate stochasticity is modeled in a circuit-compatible format. The proposed model is generic and could be incorporated into variants of threshold-based memristor models in which apparent variations in the output hysteresis convey the switching threshold shift. Further application as a noise injection alternative paves the way for novel approaches in the field of neuromorphic engineering circuit design. On the other hand, extra caution needs to be paid to variability-intolerant digital designs based on non-deterministic memristor logic.
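A toy version of the threshold-shift idea described above: if the switching threshold of a threshold-based memristor model is redrawn from a Gaussian on each cycle, switching at a given applied voltage becomes probabilistic. The Gaussian form and all parameters here are assumptions for illustration, not the paper's fitted model.

```python
import random

def stochastic_threshold_switch(v_applied, v_th_mean=1.0, v_th_sigma=0.1,
                                trials=10000, seed=42):
    """Estimate the probability that a threshold-based memristor switches at a
    given applied voltage when the switching threshold fluctuates cycle to
    cycle (hypothetical Gaussian threshold-shift model)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if v_applied > rng.gauss(v_th_mean, v_th_sigma))
    return hits / trials

p_low = stochastic_threshold_switch(0.8)   # two sigma below the mean threshold
p_mid = stochastic_threshold_switch(1.0)   # at the mean threshold
p_high = stochastic_threshold_switch(1.2)  # two sigma above the mean threshold
```

The sigmoidal switching probability as a function of voltage is exactly the kind of variability a deterministic threshold model cannot capture.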
Deterministic extraction from weak random sources
Gabizon, Ariel
2011-01-01
In this research monograph, the author constructs deterministic extractors for several types of sources, using a methodology of recycling randomness which enables increasing the output length of deterministic extractors to near optimal length.
Stochasticity induced by coherent wavepackets
International Nuclear Information System (INIS)
Fuchs, V.; Krapchev, V.; Ram, A.; Bers, A.
1983-02-01
We consider the momentum transfer and diffusion of electrons periodically interacting with a coherent longitudinal wavepacket. Such a problem arises, for example, in lower-hybrid current drive. We establish the stochastic threshold, the stochastic region δv_stoch in velocity space, the associated momentum transfer j, and the diffusion coefficient D. We concentrate principally on the weak-field regime, τ_autocorrelation < τ_bounce.
Hybrid three-dimensional variation and particle filtering for nonlinear systems
International Nuclear Information System (INIS)
Leng Hong-Ze; Song Jun-Qiang
2013-01-01
This work addresses the problem of estimating the states of nonlinear dynamic systems with sparse observations. We present a hybrid three-dimensional variation (3DVar) and particle filtering (PF) method, which combines the advantages of 3DVar and particle-based filters. By minimizing the cost function, this approach produces a better proposal distribution of the state. Afterwards, the stochastic resampling step in standard PF can be avoided through a deterministic scheme. The simulation results show that the performance of the new method is superior to the traditional ensemble Kalman filtering (EnKF) and the standard PF, especially in highly nonlinear systems.
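One standard way to make particle-filter resampling deterministic given a single uniform draw is systematic (low-variance) resampling; the sketch below illustrates that general idea and is not necessarily the exact scheme used in the paper.

```python
def systematic_resample(weights, u):
    """Systematic resampling: one uniform u in [0, 1) places len(weights)
    evenly spaced pointers on the cumulative weight profile, avoiding the
    extra randomness of independent multinomial draws."""
    n = len(weights)
    total = sum(weights)
    indices, cum, j = [], weights[0] / total, 0
    for i in range(n):
        p = (u + i) / n              # i-th evenly spaced pointer
        while p > cum:               # advance to the particle covering p
            j += 1
            cum += weights[j] / total
        indices.append(j)
    return indices

# Particle 3 carries 70% of the weight, so it is replicated in the new ensemble.
idx = systematic_resample([0.1, 0.1, 0.1, 0.7], u=0.5)
```

Given the weights and u, the output is fully deterministic; the number of copies of each particle differs from its expected count by less than one.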
Deterministic hydrodynamics: Taking blood apart
Davis, John A.; Inglis, David W.; Morton, Keith J.; Lawrence, David A.; Huang, Lotien R.; Chou, Stephen Y.; Sturm, James C.; Austin, Robert H.
2006-10-01
We show the fractionation of whole blood components and isolation of blood plasma with no dilution by using a continuous-flow deterministic array that separates blood components by their hydrodynamic size, independent of their mass. We use the deterministic array technology we developed, which separates white blood cells, red blood cells, and platelets from blood plasma at flow velocities of 1,000 μm/sec and volume rates up to 1 μl/min. We verified by flow cytometry that an array using focused injection removed 100% of the lymphocytes and monocytes from the main red blood cell and platelet stream. Using a second design, we demonstrated the separation of blood plasma from the blood cells (white, red, and platelets) with virtually no dilution of the plasma and no cellular contamination of the plasma. Keywords: cells, plasma, separation, microfabrication.
ICRP (1991) and deterministic effects
International Nuclear Information System (INIS)
Mole, R.H.
1992-01-01
A critical review of ICRP Publication 60 (1991) shows that considerable revisions are needed in both language and thinking about deterministic effects (DE). ICRP (1991) makes a welcome and clear distinction between change, caused by irradiation; damage, some degree of deleterious change, for example to cells, but not necessarily deleterious to the exposed individual; harm, clinically observable deleterious effects expressed in individuals or their descendants; and detriment, a complex concept combining the probability, severity and time of expression of harm (para 42). (All added emphases come from the author.) Unfortunately these distinctions are not carried through into the discussion of deterministic effects (DE), and two important terms are left undefined. Presumably effect may refer to change, damage, harm or detriment, according to context. Clinically observable is also undefined, although its meaning is crucial to any consideration of DE, since DE are defined as causing observable harm (para 20). (Author)
Boyer, Christopher N.; Larson, James A.; Roberts, Roland K.; McClure, Angela T.; Tyler, Donald D.; Zhou, Vivian
2013-01-01
Deterministic and stochastic yield response plateau functions were estimated to determine the expected profit-maximizing nitrogen rates, yields, and net returns for corn grown after corn, cotton, and soybeans. The stochastic response functions were more appropriate than their deterministic counterparts, and the linear response stochastic plateau described the data the best. The profit-maximizing nitrogen rates were similar for corn after corn, cotton, and soybeans, but relative to corn after ...
Xie, Hong-Bo; Dokos, Socrates
2013-06-01
We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.
Deterministic direct reprogramming of somatic cells to pluripotency.
Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H
2013-10-03
Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from successfully and synchronously reprogramming remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of molecular dynamics leading to establishing pluripotency with unprecedented flexibility and resolution.
A hybrid algorithm for coupling partial differential equation and compartment-based dynamics.
Harrison, Jonathan U; Yates, Christian A
2016-09-01
Stochastic simulation methods can be applied successfully to model spatio-temporally resolved reaction-diffusion systems exactly. However, in many cases, these methods can quickly become extremely computationally intensive with increasing particle numbers. An alternative description of many of these systems can be derived in the diffusive limit as a deterministic, continuum system of partial differential equations (PDEs). Although the numerical solution of such PDEs is, in general, much more efficient than the full stochastic simulation, the deterministic continuum description is generally not valid when copy numbers are low and stochastic effects dominate. Therefore, to take advantage of the benefits of both of these types of models, each of which may be appropriate in different parts of a spatial domain, we have developed an algorithm that can be used to couple these two types of model together. This hybrid coupling algorithm uses an overlap region between the two modelling regimes. By coupling fluxes at one end of the interface and using a concentration-matching condition at the other end, we ensure that mass is appropriately transferred between PDE- and compartment-based regimes. Our methodology gives notable reductions in simulation time in comparison with using a fully stochastic model, while maintaining the important stochastic features of the system and providing detail in appropriate areas of the domain. We test our hybrid methodology robustly by applying it to several biologically motivated problems including diffusion and morphogen gradient formation. Our analysis shows that the resulting error is small, unbiased and does not grow over time.
Deterministic hazard quotients (HQs): Heading down the wrong road
International Nuclear Information System (INIS)
Wilde, L.; Hunter, C.; Simpson, J.
1995-01-01
The use of deterministic hazard quotients (HQs) in ecological risk assessment is common as a screening method in remediation of brownfield sites dominated by total petroleum hydrocarbon (TPH) contamination. An HQ ≥ 1 indicates further risk evaluation is needed, but an HQ < 1 generally excludes a site from further evaluation. Is the predicted hazard known with such certainty that differences of 10% (0.1) do not affect the ability to exclude or include a site from further evaluation? Current screening methods do not quantify uncertainty associated with HQs. To account for uncertainty in the HQ, exposure point concentrations (EPCs) or ecological benchmark values (EBVs) are conservatively biased. To increase understanding of the uncertainty associated with HQs, EPCs (measured and modeled) and toxicity EBVs were evaluated using a conservative deterministic HQ method. The evaluation was then repeated using a probabilistic (stochastic) method. The probabilistic method used data distributions for EPCs and EBVs to generate HQs with measurements of associated uncertainty. Sensitivity analyses were used to identify the most important factors significantly influencing risk determination. Understanding the uncertainty associated with HQ methods gives risk managers a more powerful tool than deterministic approaches.
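The probabilistic alternative to a single deterministic HQ can be sketched by propagating distributions for the EPC and EBV through the quotient and reporting an exceedance probability instead of a point value. The lognormal distributions and their parameters below are hypothetical, chosen only to illustrate the Monte Carlo approach.

```python
import random

def mc_hazard_quotient(trials=20000, seed=7):
    """Probabilistic hazard quotient: HQ = EPC / EBV with lognormally
    distributed exposure concentration and benchmark value (illustrative
    parameters, not from the paper). Returns the estimate of P(HQ >= 1)."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(trials):
        epc = rng.lognormvariate(0.0, 0.5)   # exposure point concentration
        ebv = rng.lognormvariate(0.5, 0.3)   # ecological benchmark value
        if epc / ebv >= 1.0:
            exceed += 1
    return exceed / trials

p_exceed = mc_hazard_quotient()
```

A site whose median HQ is well below 1 can still carry a non-trivial exceedance probability, which is exactly the information the deterministic screen discards.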
Stochastic persistence and stationary distribution in an SIS epidemic model with media coverage
Guo, Wenjuan; Cai, Yongli; Zhang, Qimin; Wang, Weiming
2018-02-01
This paper aims to study an SIS epidemic model with media coverage, extended from a general deterministic model to a stochastic differential equation with environmental fluctuation. Mathematically, we use the Markov semigroup theory to prove that the basic reproduction number R_0^s can be used to control the dynamics of the stochastic system. Epidemiologically, we show that environmental fluctuation can inhibit the occurrence of the disease: namely, in a case of disease persistence for the deterministic model, the disease still dies out with probability one for the stochastic model. Thus, to a great extent, the stochastic perturbation under media coverage affects the outbreak of the disease.
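An Euler-Maruyama sketch of a generic SIS model with multiplicative environmental noise on transmission (a simplified stand-in, not the paper's exact media-coverage formulation). With sigma = 0 the scheme reduces to deterministic Euler and settles at the endemic equilibrium N(1 - gamma/beta); with noise, the path fluctuates around it, and sufficiently strong noise can drive extinction, as the paper proves.

```python
import random

def sis_path(beta=0.5, gamma=0.2, sigma=0.0, n=1000, i0=10,
             dt=0.01, t_end=200.0, seed=3):
    """Euler(-Maruyama) path of the SIS infected count I(t) with
    multiplicative environmental noise (sigma = 0 is deterministic)."""
    rng = random.Random(seed)
    i = float(i0)
    for _ in range(int(t_end / dt)):
        drift = beta * i * (n - i) / n - gamma * i
        noise = sigma * i * (n - i) / n * rng.gauss(0.0, dt ** 0.5)
        i = min(max(i + drift * dt + noise, 0.0), float(n))
    return i

i_det = sis_path(sigma=0.0)    # converges to N(1 - gamma/beta) = 600
i_sto = sis_path(sigma=0.05)   # fluctuates around the endemic level
```

All parameter values are illustrative assumptions chosen so the deterministic model is supercritical (beta > gamma).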
A stochastic collocation method for the second order wave equation with a discontinuous random speed
Motamed, Mohammad; Nobile, Fabio; Tempone, Raul
2012-01-01
In this paper we propose and analyze a stochastic collocation method for solving the second order wave equation with a random wave speed and subjected to deterministic boundary and initial conditions. The speed is piecewise smooth in the physical
Deterministic chaos in entangled eigenstates
Schlegel, K. G.; Förster, S.
2008-05-01
We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential, that in a case of entanglement and three energy eigenvalues the maximum Lyapunov parameters of a representative ensemble of trajectories develop, for large times, into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also present in short results from two time-dependent systems, the anisotropic and the Rabi oscillator.
A deterministic width function model
Directory of Open Access Journals (Sweden)
C. E. Puente
2003-01-01
Full Text Available Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features, like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
Stochastic models: theory and simulation.
Energy Technology Data Exchange (ETDEWEB)
Field, Richard V., Jr.
2008-03-01
Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
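The simplest sample-generation algorithm of the kind the report describes, for a standard Wiener process, sums independent Gaussian increments W(t + dt) - W(t) ~ N(0, dt):

```python
import random

def wiener_sample(t_end=1.0, n_steps=1000, seed=5):
    """One sample path of a standard Wiener process on [0, t_end],
    built from independent Gaussian increments of variance dt."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    path = [0.0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.gauss(0.0, dt ** 0.5))
    return path

# W(1) ~ N(0, 1): the sample variance of many endpoints should be near 1.
endpoints = [wiener_sample(seed=s)[-1] for s in range(500)]
var = sum(w * w for w in endpoints) / len(endpoints)
```

Such sample paths then serve as inputs or boundary conditions to deterministic simulation codes, as the report proposes.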
A Stochastic Model for Malaria Transmission Dynamics
Directory of Open Access Journals (Sweden)
Rachel Waema Mbogo
2018-01-01
Full Text Available Malaria is one of the three most dangerous infectious diseases worldwide (along with HIV/AIDS and tuberculosis). In this paper we compare the disease dynamics of the deterministic and stochastic models in order to determine the effect of randomness in malaria transmission dynamics. Relationships between the basic reproduction number for malaria transmission dynamics between humans and mosquitoes and the extinction thresholds of corresponding continuous-time Markov chain models are derived under certain assumptions. The stochastic model is formulated using the continuous-time discrete-state Galton-Watson branching process (CTDSGWbp). The reproduction number of deterministic models is an essential quantity for predicting whether an epidemic will spread or die out. Thresholds for disease extinction from stochastic models contribute crucial knowledge on disease control and elimination and mitigation of infectious diseases. Analytical and numerical results show some significant differences in model predictions between the stochastic and deterministic models. In particular, we find that a malaria outbreak is more likely if the disease is introduced by infected mosquitoes as opposed to infected humans. These insights demonstrate the importance of a policy or intervention focusing on controlling the infected mosquito population if the control of malaria is to be realized.
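The branching-process extinction threshold invoked above can be computed, for a discrete-time Galton-Watson process, as the smallest fixed point of the offspring probability generating function f, found by iterating q ← f(q) from q = 0. The offspring distribution below is an illustrative example, not the malaria model's.

```python
def extinction_probability(p_offspring, tol=1e-12, max_iter=10000):
    """Extinction probability of a Galton-Watson branching process:
    the smallest root of q = f(q), where f is the offspring probability
    generating function and p_offspring[k] = P(k offspring)."""
    def pgf(s):
        return sum(p * s ** k for k, p in enumerate(p_offspring))
    q = 0.0
    for _ in range(max_iter):
        q_new = pgf(q)
        if abs(q_new - q) < tol:
            break
        q = q_new
    return q_new

# Supercritical example: P(0)=0.25, P(1)=0.25, P(2)=0.5, mean offspring 1.25.
# Solving q = 0.25 + 0.25 q + 0.5 q^2 gives extinction probability 0.5.
q = extinction_probability([0.25, 0.25, 0.5])
```

When the mean offspring number exceeds one, the extinction probability is strictly less than one, which is the stochastic counterpart of a deterministic reproduction number above threshold.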
Loizou, Nicolas
2017-12-27
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in the L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., the stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesaro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
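A minimal sketch of the stochastic heavy ball method on a one-dimensional quadratic with noisy gradients; the step size, momentum parameter and noise level are illustrative choices, not values analyzed in the paper.

```python
import random

def heavy_ball_sgd(steps=500, lr=0.05, beta=0.9, seed=2):
    """Stochastic gradient descent with heavy ball momentum on
    f(x) = 0.5 * x^2, using gradients corrupted by Gaussian noise."""
    rng = random.Random(seed)
    x, v = 10.0, 0.0
    for _ in range(steps):
        grad = x + 0.1 * rng.gauss(0.0, 1.0)  # noisy gradient of f
        v = beta * v - lr * grad              # momentum (velocity) update
        x = x + v                             # heavy ball step
    return x

x_final = heavy_ball_sgd()  # ends in a small noise ball around the minimizer 0
```

The momentum term accumulates a moving average of past gradients, which accelerates progress along the descent direction while the gradient noise only produces bounded fluctuation around the optimum.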
Modeling and Prediction Using Stochastic Differential Equations
DEFF Research Database (Denmark)
Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp
2016-01-01
Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
The stochastic energy-Casimir method
Arnaudon, Alexis; Ganaba, Nader; Holm, Darryl D.
2018-04-01
In this paper, we extend the energy-Casimir stability method for deterministic Lie-Poisson Hamiltonian systems to provide sufficient conditions for stability in probability of stochastic dynamical systems with symmetries. We illustrate this theory with classical examples of coadjoint motion, including the rigid body, the heavy top, and the compressible Euler equation in two dimensions. The main result is that stable deterministic equilibria remain stable in probability up to a certain stopping time that depends on the amplitude of the noise for finite-dimensional systems and on the amplitude of the spatial derivative of the noise for infinite-dimensional systems.
Dynamics of non-holonomic systems with stochastic transport
Holm, D. D.; Putkaradze, V.
2018-01-01
This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.
Dynamic-stochastic modeling of snow cover formation on the European territory of Russia
A. N. Gelfan; V. M. Moreido
2014-01-01
A dynamic-stochastic model, which combines a deterministic model of snow cover formation with a stochastic weather generator, has been developed. The deterministic snow model describes temporal change of the snow depth, content of ice and liquid water, snow density, snowmelt, sublimation, re-freezing of melt water, and snow metamorphism. The model has been calibrated and validated against the long-term data of snow measurements over the territory of the European Russia. The model showed good ...
Parzen, Emanuel
1962-01-01
Well-written and accessible, this classic introduction to stochastic processes and related mathematics is appropriate for advanced undergraduate students of mathematics with a knowledge of calculus and continuous probability theory. The treatment offers examples of the wide variety of empirical phenomena for which stochastic processes provide mathematical models, and it develops the methods of probability model-building. Chapter 1 presents precise definitions of the notions of a random variable and a stochastic process and introduces the Wiener and Poisson processes. Subsequent chapters examine
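The Poisson process introduced in Chapter 1 can be sampled directly from its defining property that inter-arrival times are independent Exponential(rate) variables:

```python
import random

def poisson_arrivals(rate=2.0, t_end=50.0, seed=11):
    """Arrival times of a homogeneous Poisson process on [0, t_end]:
    successive inter-arrival times are independent Exponential(rate)."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > t_end:
            return arrivals
        arrivals.append(t)

# The number of arrivals in [0, 50] is Poisson with mean rate * t_end = 100.
times = poisson_arrivals()
```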
Using stochastic models to incorporate spatial and temporal variability [Exercise 14
Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke
2003-01-01
To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...
International Nuclear Information System (INIS)
Klauder, J.R.
1983-01-01
The author provides an introductory survey to stochastic quantization in which he outlines this new approach for scalar fields, gauge fields, fermion fields, and condensed matter problems such as electrons in solids and the statistical mechanics of quantum spins. (Auth.)
Schilstra, Maria J; Martin, Stephen R
2009-01-01
Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific to the particular application, and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field, applying such techniques in their own work might seem at first sight to be a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend to be not particularly well covered in the specialist literature, and shows that interesting information may even be obtained using very simple operations in a conventional spreadsheet.
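The kind of "spreadsheet-simple" stochastic simulation the article describes can be sketched as a fixed-timestep scheme in which, during each small step, every molecule reacts independently with probability k*dt (shown here for first-order decay X → 0; all parameters are illustrative and the approximation requires k*dt << 1).

```python
import random

def fixed_step_decay(n0=1000, k=0.1, dt=0.01, t_end=20.0, seed=9):
    """Fixed-timestep stochastic simulation of first-order decay X -> 0:
    in each step, every surviving molecule decays independently with
    probability k * dt (valid only when k * dt << 1)."""
    rng = random.Random(seed)
    n = n0
    for _ in range(int(t_end / dt)):
        n -= sum(1 for _ in range(n) if rng.random() < k * dt)
    return n

n_final = fixed_step_decay()
# Deterministic prediction for comparison: n0 * exp(-k * t_end) ≈ 135.
```

Each row of such a calculation is a single multiplication and a column of random numbers, which is why it is feasible in an ordinary spreadsheet.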
Energy Technology Data Exchange (ETDEWEB)
Silva, Milena Wollmann da; Vilhena, Marco Tullio M.B.; Bodmann, Bardo Ernst J.; Vasques, Richard, E-mail: milena.wollmann@ufrgs.br, E-mail: vilhena@mat.ufrgs.br, E-mail: bardobodmann@ufrgs.br, E-mail: richard.vasques@fulbrightmail.org [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica
2015-07-01
The neutron point kinetics equation, which models the time-dependent behavior of nuclear reactors, is often used to understand the dynamics of nuclear reactor operations. It consists of a system of coupled differential equations that models the interaction between (i) the neutron population and (ii) the concentration of the delayed neutron precursors, which are radioactive isotopes formed in the fission process that decay through neutron emission. These equations are deterministic in nature, and therefore can provide only average values of the modeled populations. However, the actual dynamical process is stochastic: the neutron density and the delayed neutron precursor concentrations vary randomly with time. To address this stochastic behavior, Hayes and Allen have generalized the standard deterministic point kinetics equation. They derived a system of stochastic differential equations that can accurately model the random behavior of the neutron density and the precursor concentrations in a point reactor. Due to the stiffness of these equations, this system was numerically implemented using a stochastic piecewise constant approximation method (Stochastic PCA). Here, we present a study of the influence of stochastic fluctuations on the results of the neutron point kinetics equation. We reproduce the stochastic formulation introduced by Hayes and Allen and compute Monte Carlo numerical results for examples with constant and time-dependent reactivity, comparing these results with stochastic and deterministic methods found in the literature. Moreover, we introduce a modified version of the stochastic method to obtain a non-stiff solution, analogous to a previously derived deterministic approach. (author)
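The flavor of such a stochastic point kinetics computation can be conveyed by a one-precursor-group Euler-Maruyama sketch. This is not the Hayes-Allen covariance formulation; the additive noise term and all coefficient values below are illustrative assumptions:

```python
import math
import random

def point_kinetics_em(rho=-0.003, beta=0.007, lam=0.08, Lambda=1e-4,
                      sigma=0.1, n0=1.0, dt=1e-4, steps=5000, seed=1):
    """Euler-Maruyama integration of one-group point kinetics:
        dn/dt = (rho - beta)/Lambda * n + lam * c  (+ illustrative noise)
        dc/dt = beta/Lambda * n - lam * c
    The Hayes-Allen derivation uses a full covariance matrix; here we
    use a crude additive noise on the neutron density only."""
    rng = random.Random(seed)
    n = n0
    c = beta * n0 / (lam * Lambda)  # precursor concentration at equilibrium
    path = [n]
    for _ in range(steps):
        dn = ((rho - beta) / Lambda * n + lam * c) * dt
        dc = (beta / Lambda * n - lam * c) * dt
        dn += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)  # Wiener increment
        n, c = max(n + dn, 0.0), c + dc  # keep the density nonnegative
        path.append(n)
    return path

path = point_kinetics_em()
```

With the negative reactivity chosen here, the density exhibits the prompt drop toward its quasi-static level with small random fluctuations around it.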
Extending Stochastic Network Calculus to Loss Analysis
Directory of Open Access Journals (Sweden)
Chao Luo
2013-01-01
Full Text Available Loss is an important parameter of Quality of Service (QoS). Though stochastic network calculus is a very useful tool for performance evaluation of computer networks, existing studies on stochastic service guarantees have mainly focused on delay and backlog. Some efforts have been made to analyse loss by deterministic network calculus, but few results extend stochastic network calculus to loss analysis. In this paper, we introduce a new parameter named loss factor into stochastic network calculus and then derive the loss bound through the existing arrival curve and service curve via this parameter. We then prove that our result is suitable for networks with multiple input flows. Simulations show the impact of buffer size, arrival traffic, and service on the loss factor.
Stochastic Effects; Application in Nuclear Physics
International Nuclear Information System (INIS)
Mazonka, O.
2000-04-01
Stochastic effects in nuclear physics refer to the study of the dynamics of nuclear systems evolving under stochastic equations of motion. In this dissertation we restrict our attention to classical scattering models. We begin with an introduction of the model of nuclear dynamics and the deterministic equations of evolution. We apply a Langevin approach, adding to the model a stochastic property that reflects the statistical nature of low-energy nuclear behaviour. We then concentrate our attention on the problem of calculating the tails of distribution functions, which is in fact the problem of calculating the probabilities of rare outcomes. Two general strategies are proposed. Results and discussion follow. Finally, in the appendix we consider stochastic effects in nonequilibrium systems. A few exactly solvable models are presented. For one model we show explicitly that stochastic behaviour in a microscopic description can lead to ordered collective effects on the macroscopic scale. Two others are solved to confirm the predictions of the fluctuation theorem. (author)
Transport in stochastic multi-dimensional media
International Nuclear Information System (INIS)
Haran, O.; Shvarts, D.
1996-01-01
Many physical phenomena evolve according to known deterministic rules, but in stochastic media whose composition changes in space and time. Examples of such phenomena are heat transfer in a turbulent atmosphere with non-uniform diffraction coefficients, neutron transfer in the boiling coolant of a nuclear reactor, and radiation transfer through concrete shields. The results of measurements conducted on such media are stochastic by nature and depend on the specific realization of the media. In the last decade there have been considerable efforts to describe linear particle transport in one-dimensional stochastic media composed of several immiscible materials. However, transport in two- or three-dimensional stochastic media has rarely been addressed. The important effect in multi-dimensional transport that does not appear in one dimension is the ability to bypass obstacles. The current work is an attempt to quantify this effect. (authors)
Stochastic chaos in a Duffing oscillator and its control
International Nuclear Information System (INIS)
Wu Cunli; Lei Youming; Fang Tong
2006-01-01
Stochastic chaos discussed here means a kind of chaotic response in a Duffing oscillator with bounded random parameters under harmonic excitations. A system with random parameters is usually called a stochastic system. The modifier 'stochastic' here implies dependence on some random parameter. As the system itself is stochastic, so is the response, even under harmonic excitations alone. In this paper stochastic chaos and its control are verified by the top Lyapunov exponent of the system. A non-feedback control strategy is adopted here by adding an adjustable noisy phase to the harmonic excitation, so that the control can be realized by adjusting the noise level. It is found that by this control strategy stochastic chaos can be tamed down to a small neighborhood of a periodic trajectory or an equilibrium state. In the analysis the stochastic Duffing oscillator is first transformed into an equivalent deterministic nonlinear system by the Gegenbauer polynomial approximation, so that the problem of controlling stochastic chaos can be reduced to the problem of controlling deterministic chaos in the equivalent system. Then the top Lyapunov exponent of the equivalent system is obtained by Wolf's method to examine the chaotic behavior of the response. Numerical simulations show that the random phase control strategy is an effective way to control stochastic chaos.
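Wolf's method applied to the Gegenbauer-transformed system is beyond a short sketch, but the closely related Benettin two-trajectory scheme on the deterministic Duffing oscillator illustrates the top-Lyapunov-exponent computation this kind of analysis relies on. The parameter set (delta=0.3, gamma=0.5, omega=1.2) is an assumption, a commonly used chaotic regime rather than the values from this paper:

```python
import math

def duffing_rhs(t, state, delta=0.3, gamma=0.5, omega=1.2):
    """Duffing oscillator x'' + delta*x' - x + x^3 = gamma*cos(omega*t)."""
    x, v = state
    return (v, -delta * v + x - x**3 + gamma * math.cos(omega * t))

def rk4_step(f, t, state, h):
    """One classical Runge-Kutta step for a tuple-valued state."""
    k1 = f(t, state)
    k2 = f(t + h/2, tuple(s + h/2 * k for s, k in zip(state, k1)))
    k3 = f(t + h/2, tuple(s + h/2 * k for s, k in zip(state, k2)))
    k4 = f(t + h, tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h/6 * (a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def top_lyapunov(t_end=500.0, h=0.01, d0=1e-8):
    """Benettin estimate: track a reference and a perturbed trajectory,
    accumulate the log separation growth, renormalize every step."""
    ref, pert = (0.1, 0.0), (0.1 + d0, 0.0)
    t, log_sum = 0.0, 0.0
    while t < t_end:
        ref = rk4_step(duffing_rhs, t, ref, h)
        pert = rk4_step(duffing_rhs, t, pert, h)
        t += h
        d = math.hypot(pert[0] - ref[0], pert[1] - ref[1])
        log_sum += math.log(d / d0)
        # Pull the perturbed trajectory back to distance d0 from the reference.
        pert = tuple(r + d0 * (p - r) / d for r, p in zip(ref, pert))
    return log_sum / t

lam = top_lyapunov()
```

A positive estimate indicates chaos; taming the response toward a periodic orbit would drive this exponent negative.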
Limits for Stochastic Reaction Networks
DEFF Research Database (Denmark)
Cappelletti, Daniele
Reaction systems have been introduced in the 70s to model biochemical systems. Nowadays their range of applications has increased and they are fruitfully used in different fields. The concept is simple: some chemical species react, the set of chemical reactions forms a graph, and a rate function ... is associated with each reaction. Such functions describe the speed of the different reactions, or their propensities. Two modelling regimes are then available: the evolution of the different species concentrations can be deterministically modelled through a system of ODEs, while the counts of the different species ... at a certain time are stochastically modelled by means of a continuous-time Markov chain. Our work concerns primarily stochastic reaction systems and their asymptotic properties. In Paper I, we consider a reaction system with intermediate species, i.e. species that are produced and fast degraded along a path ...
On the use of reverse Brownian motion to accelerate hybrid simulations
Energy Technology Data Exchange (ETDEWEB)
Bakarji, Joseph; Tartakovsky, Daniel M., E-mail: tartakovsky@stanford.edu
2017-04-01
Multiscale and multiphysics simulations are two rapidly developing fields of scientific computing. Efficient coupling of continuum (deterministic or stochastic) constitutive solvers with their discrete (stochastic, particle-based) counterparts is a common challenge in both kinds of simulations. We focus on interfacial, tightly coupled simulations of diffusion that combine continuum and particle-based solvers. The latter employs the reverse Brownian motion (rBm), a Monte Carlo approach that allows one to enforce inhomogeneous Dirichlet, Neumann, or Robin boundary conditions and is trivially parallelizable. We discuss numerical approaches for improving the accuracy of rBm in the presence of inhomogeneous Neumann boundary conditions and alternative strategies for coupling the rBm solver with its continuum counterpart. Numerical experiments are used to investigate the convergence, stability, and computational efficiency of the proposed hybrid algorithm.
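A minimal illustration of the particle-based half of such a coupling (not the authors' code; the 1-D geometry and boundary data are invented for this sketch) estimates the solution of the Laplace problem u'' = 0 on (0, 1) by averaging the Dirichlet values hit by Brownian walkers launched from the evaluation point, which is the essence of a reverse-Brownian-motion estimator for Dirichlet conditions:

```python
import random

def laplace_mc(x0, u_left=0.0, u_right=1.0, dt=1e-3, n_walks=2000, seed=7):
    """Estimate u(x0) for u'' = 0 on (0, 1) with Dirichlet data by
    averaging the boundary values reached by Brownian walkers.
    Each walker takes Gaussian steps of standard deviation sqrt(dt)
    until it exits the interval; walkers are independent, so the loop
    is trivially parallelizable."""
    rng = random.Random(seed)
    sqrt_dt = dt ** 0.5
    total = 0.0
    for _ in range(n_walks):
        x = x0
        while 0.0 < x < 1.0:
            x += sqrt_dt * rng.gauss(0.0, 1.0)
        total += u_left if x <= 0.0 else u_right
    return total / n_walks

u_mid = laplace_mc(0.5)  # exact solution is u(x) = x, so about 0.5
```

The statistical error decays as 1/sqrt(n_walks), and the time step introduces a small overshoot bias at the boundaries, which is one of the accuracy issues the paper addresses.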
STOCHASTIC ASSESSMENT OF NIGERIAN WOOD FOR BRIDGE DECKS
African Journals Online (AJOL)
STOCHASTIC ASSESSMENT OF NIGERIAN WOOD FOR BRIDGE DECKS ... abandoned bridges with defects only in their decks in both rural and urban locations can be effectively .... which can be seen as the detection of rare physical.
Ranking shortest paths in stochastic time-dependent networks
DEFF Research Database (Denmark)
Nielsen, Lars Relund; Andersen, Kim Allan; Pretolani, Daniele
A substantial amount of research has been devoted to the shortest path problem in networks where travel times are stochastic or (deterministic and) time-dependent. More recently, a growing interest has been attracted by networks that are both stochastic and time-dependent. In these networks, ... We present a computational comparison of time-adaptive and a priori route choices, pointing out the effect of travel time and cost distributions. The reported results show that, under realistic distributions, our solution methods are effective.
K shortest paths in stochastic time-dependent networks
DEFF Research Database (Denmark)
Nielsen, Lars Relund; Pretolani, Daniele; Andersen, Kim Allan
2004-01-01
A substantial amount of research has been devoted to the shortest path problem in networks where travel times are stochastic or (deterministic and) time-dependent. More recently, a growing interest has been attracted by networks that are both stochastic and time-dependent. In these networks, ... We present a computational comparison of time-adaptive and a priori route choices, pointing out the effect of travel time and cost distributions. The reported results show that, under realistic distributions, our solution methods are effective.
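The time-dependent flavor of these problems is easiest to see in the deterministic special case. The sketch below (illustrative; the graph and travel-time functions are invented) computes earliest arrival with a Dijkstra-style label-setting scan, which is valid under the FIFO assumption that entering an edge later never means arriving earlier:

```python
import heapq

def td_dijkstra(graph, source, target, t0=0.0):
    """Earliest-arrival time in a time-dependent network.
    graph[u] is a list of (v, travel_time_fn) pairs, where
    travel_time_fn(t) gives the traversal time when entering at time t."""
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale heap entry
        for v, tau in graph[u]:
            arrival = t + tau(t)
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(heap, (arrival, v))
    return float("inf")

# Congestion on edge a->b after t = 5 makes the longer-looking route better.
graph = {
    "s": [("a", lambda t: 1.0), ("b", lambda t: 4.0)],
    "a": [("b", lambda t: 1.0 if t < 5.0 else 10.0)],
    "b": [("t", lambda t: 1.0)],
    "t": [],
}
```

Departing at t = 0 the best route is s-a-b-t (arrival 3.0); departing at t = 4.5 the congested edge flips the choice to s-b-t (arrival 9.5). A time-adaptive strategy in the stochastic setting generalizes exactly this departure-time dependence.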
Stochastic Watershed Models for Risk Based Decision Making
Vogel, R. M.
2017-12-01
Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology, providing stochastic streamflow models (SSMs) which could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters and model errors, to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
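The SWM recipe (a deterministic watershed model driven by stochastic meteorological inputs) can be sketched in a few lines. Everything here is an illustrative assumption, not a calibrated model: the watershed is a single linear reservoir, and rainfall follows a toy occurrence/amount generator rather than a proper weather generator:

```python
import random

def linear_reservoir(precip, k=0.2, s0=10.0):
    """Deterministic watershed model: storage S gains precipitation
    and releases outflow Q = k * S each day."""
    s, flows = s0, []
    for p in precip:
        s += p
        q = k * s
        s -= q
        flows.append(q)
    return flows

def swm_ensemble(n_traces=100, n_days=365, rain_prob=0.3,
                 mean_rain=8.0, seed=11):
    """Stochastic watershed model: run the deterministic model on an
    ensemble of synthetic precipitation series (illustrative
    Bernoulli-occurrence / exponential-amount rainfall)."""
    rng = random.Random(seed)
    ensemble = []
    for _ in range(n_traces):
        precip = [rng.expovariate(1.0 / mean_rain) if rng.random() < rain_prob
                  else 0.0 for _ in range(n_days)]
        ensemble.append(linear_reservoir(precip))
    return ensemble

ens = swm_ensemble()
```

Each trace is one plausible streamflow future; risk metrics (flood quantiles, reliability of a supply target) are then computed across the ensemble.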
Predicting Footbridge Response using Stochastic Load Models
DEFF Research Database (Denmark)
Pedersen, Lars; Frier, Christian
2013-01-01
Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adopt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing so, decisions need to be made in terms of the statistical distributions of walking parameters and in terms of the parameters describing those statistical distributions. The paper explores how sensitive computations of bridge response are to some of the decisions to be made in this respect. This is useful ...
Stochastic Modeling of Traffic Air Pollution
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
2014-01-01
In this paper, modeling of traffic air pollution is discussed with special reference to infrastructures. A number of subjects related to health effects of air pollution and the different types of pollutants are briefly presented. A simple model for estimating the social cost of traffic-related air pollution is derived. Several authors have published papers on this very complicated subject, but no stochastic modelling procedure has obtained general acceptance. The subject is discussed on the basis of a deterministic model. However, it is straightforward to modify this model to include uncertain parameters ... and to use simple Monte Carlo techniques to obtain a stochastic estimate of the costs of traffic air pollution for infrastructures.
Scattering theory of stochastic electromagnetic light waves.
Wang, Tao; Zhao, Daomu
2010-07-15
We generalize scattering theory to stochastic electromagnetic light waves. It is shown that when a stochastic electromagnetic light wave is scattered from a medium, the properties of the scattered field can be characterized by a 3 x 3 cross-spectral density matrix. An example of scattering of a spatially coherent electromagnetic light wave from a deterministic medium is discussed. Some interesting phenomena emerge, including the changes of the spectral degree of coherence and of the spectral degree of polarization of the scattered field.
Neuro-Inspired Computing with Stochastic Electronics
Naous, Rawan
2016-01-06
The extensive scaling and integration within electronic systems have set the standards for what is referred to as stochastic electronics. The individual components increasingly deviate from their reliable behavior and produce non-deterministic outputs. This stochastic operation closely mimics the biological medium within the brain. Hence, building on the inherent variability, particularly within novel non-volatile memory technologies, paves the way for unconventional neuromorphic designs. Neuro-inspired networks with brain-like structures of neurons and synapses allow for computations and levels of learning for diverse recognition tasks and applications.
Stochastic 2-D Navier-Stokes Equation
International Nuclear Information System (INIS)
Menaldi, J.L.; Sritharan, S.S.
2002-01-01
In this paper we prove the existence and uniqueness of strong solutions for the stochastic Navier-Stokes equation in bounded and unbounded domains. These solutions are stochastic analogs of the classical Lions-Prodi solutions to the deterministic Navier-Stokes equation. Local monotonicity of the nonlinearity is exploited to obtain the solutions in a given probability space, and this significantly improves the earlier techniques for obtaining strong solutions, which depended on pathwise solutions to the Navier-Stokes martingale problem where the probability space is also obtained as a part of the solution.
Applicability of deterministic methods in seismic site effects modeling
International Nuclear Information System (INIS)
Cioflan, C.O.; Radulian, M.; Apostol, B.F.; Ciucu, C.
2005-01-01
The up-to-date information related to the local geological structure in the Bucharest urban area has been integrated into complex analyses of seismic ground motion simulation using deterministic procedures. The data recorded for the Vrancea intermediate-depth large earthquakes are supplemented with synthetic computations over the entire city area. The hybrid method, with a double-couple seismic source approximation and relatively simple regional and local structure models, allows a satisfactory reproduction of the strong motion records in the frequency domain (0.05-1) Hz. The new geological information and a deterministic analytical method, which combines the modal summation technique, applied to model the seismic wave propagation between the seismic source and the studied sites, with the mode coupling approach used to model the seismic wave propagation through the local sedimentary structure of the target site, make it possible to extend the modelling to the higher frequencies of earthquake engineering interest. The results of these studies (synthetic time histories of the ground motion parameters, absolute and relative response spectra, etc.) for the last three Vrancea strong events (August 31, 1986, Mw = 7.1; May 30, 1990, Mw = 6.9; and October 27, 2004, Mw = 6.0) can complete the strong motion database used for microzonation purposes. Implications and integration of the deterministic results into urban planning and disaster management strategies are also discussed. (authors)
Frequency domain fatigue damage estimation methods suitable for deterministic load spectra
Energy Technology Data Exchange (ETDEWEB)
Henderson, A.R.; Patel, M.H. [University Coll., Dept. of Mechanical Engineering, London (United Kingdom)
2000-07-01
The evaluation of fatigue damage due to load spectra directly in the frequency domain is a complex problem, but one that offers significant savings in computation time. Various formulae have been suggested, but they usually relate to a specific application only. The Dirlik method is the exception and is applicable to general cases of continuous stochastic spectra. This paper describes three approaches for evaluating discrete deterministic load spectra generated by the floating wind turbine model developed in the UCL/RAL research project. (Author)
On stochastic differential equations with random delay
International Nuclear Information System (INIS)
Krapivsky, P L; Luck, J M; Mallick, K
2011-01-01
We consider stochastic dynamical systems defined by differential equations with a uniform random time delay. The latter equations are shown to be equivalent to deterministic higher-order differential equations: for an nth-order equation with random delay, the corresponding deterministic equation has order n + 1. We analyze various examples of dynamical systems of this kind, and find a number of unusual behaviors. For instance, for the harmonic oscillator with random delay, the energy grows as exp((3/2) t^(2/3)) in reduced units. We then investigate the effect of introducing a discrete time step ε. At variance with the continuous situation, the discrete random recursion relations thus obtained have intrinsic fluctuations. The crossover between the fluctuating discrete problem and the deterministic continuous one as ε goes to zero is studied in detail on the example of a first-order linear differential equation.
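The discrete recursion mentioned at the end can be sketched directly. The concrete choices below are illustrative assumptions, not the paper's setup: x'(t) = -x(t - τ) with τ redrawn uniformly from [0, τ_max] at each step, and a constant unit history for t ≤ 0:

```python
import random

def random_delay_recursion(eps=0.01, t_end=10.0, tau_max=1.0, seed=3):
    """Discrete analogue of x'(t) = -x(t - tau), tau ~ U[0, tau_max],
    with x(t) = 1 for t <= 0 as the initial history. The random delay
    is redrawn at every step, giving the recursion its intrinsic
    fluctuations."""
    rng = random.Random(seed)
    n_steps = int(t_end / eps)
    max_delay = int(tau_max / eps)
    xs = [1.0]
    for _ in range(n_steps):
        d = rng.randint(0, max_delay)   # delay measured in time steps
        idx = len(xs) - 1 - d
        x_delayed = xs[idx] if idx >= 0 else 1.0  # constant history
        xs.append(xs[-1] - eps * x_delayed)
    return xs

xs = random_delay_recursion()
```

Shrinking eps (with tau_max fixed) averages the delay over many steps, so the trajectory approaches the deterministic mean-delay equation, which is the crossover the paper studies.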
Water Quality Trading when Nonpoint Pollution Loads are Stochastic
Ghosh, Gaurav; Shortle, James
2009-01-01
We compare two tradable permit markets in their ability to meet a stated environmental target at least cost when some polluters have stochastic and non-measurable emissions. The environmental target is of the safety-first type, which requires probabilistic emissions control. One market is built around the trading ratio, which defines the substitution rate between stochastic and deterministic pollution, and is modeled on existing markets for water quality trading. The other market is built aro...
Analysis of dynamic regimes in stochastically forced Kaldor model
International Nuclear Information System (INIS)
Bashkirtseva, Irina; Ryazanova, Tatyana; Ryashko, Lev
2015-01-01
We consider the business cycle Kaldor model forced by random noise. Detailed parametric analysis of deterministic system is carried out and zones of coexisting stable equilibrium and stable limit cycle are found. Noise-induced transitions between these attractors are studied using stochastic sensitivity function technique and confidence domains method. Critical values of noise intensity corresponding to noise-induced transitions “equilibrium → cycle” and “cycle → equilibrium” are estimated. Dominants in combined stochastic regimes are discussed.
A stochastic programming approach to manufacturing flow control
Haurie, Alain; Moresino, Francesco
2012-01-01
This paper proposes and tests an approximation of the solution of a class of piecewise deterministic control problems, typically used in the modeling of manufacturing flow processes. This approximation uses a stochastic programming approach on a suitably discretized and sampled system. The method proceeds through two stages: (i) the Hamilton-Jacobi-Bellman (HJB) dynamic programming equations for the finite horizon continuous time stochastic control problem are discretized over a set of sample...
Almost Periodic Solutions for Impulsive Fractional Stochastic Evolution Equations
Directory of Open Access Journals (Sweden)
Toufik Guendouzi
2014-08-01
Full Text Available In this paper, we consider the existence of square-mean piecewise almost periodic solutions for impulsive fractional stochastic evolution equations involving Caputo fractional derivative. The main results are obtained by means of the theory of operators semi-group, fractional calculus, fixed point technique and stochastic analysis theory and methods adopted directly from deterministic fractional equations. Some known results are improved and generalized.
A stochastic analysis for a phytoplankton-zooplankton model
International Nuclear Information System (INIS)
Ge, G; Wang, H-L; Xu, J
2008-01-01
A simple phytoplankton-zooplankton nonlinear dynamical model was proposed to study the coexistence of all the species, and a Hopf bifurcation was observed. In order to study the effect of environmental robustness on this system, we have stochastically perturbed the system with respect to white noise around its positive interior equilibrium. We have observed that the system remains stochastically stable around the positive equilibrium for the same parametric values as in the deterministic situation.
Pfaff, W.; Vos, A.; Hanson, R.
2013-01-01
Metal nanostructures can be used to harvest and guide the emission of single photon emitters on-chip via surface plasmon polaritons. In order to develop and characterize photonic devices based on emitter-plasmon hybrid structures, a deterministic and scalable fabrication method for such structures
Chang, Mou-Hsiung
2015-01-01
The classical probability theory initiated by Kolmogorov and its quantum counterpart, pioneered by von Neumann, were created at about the same time in the 1930s, but development of the quantum theory has trailed far behind. Although highly appealing, the quantum theory has a steep learning curve, requiring tools from both probability and analysis and a facility for combining the two viewpoints. This book is a systematic, self-contained account of the core of quantum probability and quantum stochastic processes for graduate students and researchers. The only assumed background is knowledge of the basic theory of Hilbert spaces, bounded linear operators, and classical Markov processes. From there, the book introduces additional tools from analysis, and then builds the quantum probability framework needed to support applications to quantum control and quantum information and communication. These include quantum noise, quantum stochastic calculus, stochastic quantum differential equations, quantum Markov semigrou...
Planning under uncertainty solving large-scale stochastic linear programs
Energy Technology Data Exchange (ETDEWEB)
Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft
1992-12-01
For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
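The decomposition-plus-sampling machinery is heavyweight, but the core idea of replacing the expectation by a Monte Carlo sample average can be shown on a toy two-stage problem. The newsvendor below illustrates sample average approximation, not the authors' method; the price, cost, and demand distribution are assumptions, and the first-stage problem is solved by enumeration instead of an LP solver:

```python
import random

def saa_newsvendor(price=5.0, cost=3.0, n_scenarios=2000, seed=13):
    """Sample average approximation of a two-stage newsvendor:
    stage 1 orders q; stage 2 reveals demand and sells min(q, demand).
    The expectation over demand is replaced by an average over
    Monte Carlo scenarios."""
    rng = random.Random(seed)
    demands = [rng.uniform(50.0, 150.0) for _ in range(n_scenarios)]

    def expected_profit(q):
        # Sample-average recourse value for order quantity q.
        return sum(price * min(q, d) - cost * q for d in demands) / n_scenarios

    candidates = [float(q) for q in range(50, 151)]
    return max(candidates, key=expected_profit)

q_star = saa_newsvendor()
```

For uniform demand on [50, 150] the true optimum is the critical quantile 50 + 100 * (price - cost) / price = 90, so the SAA solution should land near 90, with sampling error shrinking as the scenario count grows. Importance sampling, as in the paper, reduces that error further for a fixed sample size.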
Location deterministic biosensing from quantum-dot-nanowire assemblies
International Nuclear Information System (INIS)
Liu, Chao; Kim, Kwanoh; Fan, D. L.
2014-01-01
Semiconductor quantum dots (QDs), with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location-deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoresis (DEP) and alternating-current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes at the QDs on the tips of the nanowires before detection, offering much enhanced efficiency and sensitivity in addition to predictable detection locations. This research could result in advances in QD-based biomedical detection and inspire an innovative approach for fabricating various QD-based nanodevices.
Stochastic Community Assembly: Does It Matter in Microbial Ecology?
Zhou, Jizhong; Ning, Daliang
2017-12-01
Understanding the mechanisms controlling community diversity, functions, succession, and biogeography is a central, but poorly understood, topic in ecology, particularly in microbial ecology. Although stochastic processes are believed to play nonnegligible roles in shaping community structure, their importance relative to deterministic processes is hotly debated. The importance of ecological stochasticity in shaping microbial community structure is far less appreciated. Some of the main reasons for such heavy debates are the difficulty in defining stochasticity and the diverse methods used for delineating stochasticity. Here, we provide a critical review and synthesis of data from the most recent studies on stochastic community assembly in microbial ecology. We then describe both stochastic and deterministic components embedded in various ecological processes, including selection, dispersal, diversification, and drift. We also describe different approaches for inferring stochasticity from observational diversity patterns and highlight experimental approaches for delineating ecological stochasticity in microbial communities. In addition, we highlight research challenges, gaps, and future directions for microbial community assembly research. Copyright © 2017 American Society for Microbiology.
Strongly Deterministic Population Dynamics in Closed Microbial Communities
Directory of Open Access Journals (Sweden)
Zak Frentz
2015-10-01
Full Text Available Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of three species. The observation of strongly deterministic dynamics, together with stable structure of correlations in response to external perturbations, points towards a possibility of simple macroscopic laws governing microbial systems despite numerous stochastic events present on microscopic levels.
Shock-induced explosive chemistry in a deterministic sample configuration.
Energy Technology Data Exchange (ETDEWEB)
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Tamellini, L.; Le Maître, O.; Nouy, A.
2014-01-01
In this paper we consider a proper generalized decomposition method to solve the steady incompressible Navier-Stokes equations with random Reynolds number and forcing term. The aim of such a technique is to compute a low-cost reduced basis approximation of the full stochastic Galerkin solution of the problem at hand. A particular algorithm, inspired by the Arnoldi method for solving eigenproblems, is proposed for an efficient greedy construction of a deterministic reduced basis approximation. This algorithm decouples the computation of the deterministic and stochastic components of the solution, thus allowing reuse of preexisting deterministic Navier-Stokes solvers. It has the remarkable property of only requiring the solution of m uncoupled deterministic problems for the construction of an m-dimensional reduced basis rather than M coupled problems of the full stochastic Galerkin approximation space, with m ≪ M (up to one order of magnitude for the problem at hand in this work). © 2014 Society for Industrial and Applied Mathematics.
Moraes, Alvaro
2015-01-01
Epidemics have shaped, sometimes more than wars and natural disasters, demographic aspects of human populations around the world, their health habits and their economies. Ebola and the Middle East Respiratory Syndrome (MERS) are clear and current examples of potential hazards at planetary scale. During the spread of an epidemic disease, there are phenomena, like the sudden extinction of the epidemic, that cannot be captured by deterministic models. As a consequence, stochastic models have been proposed during the last decades. A typical forward problem in the stochastic setting could be the approximation of the expected number of infected individuals found in one month from now. On the other hand, a typical inverse problem could be, given a discretely observed set of epidemiological data, to infer the transmission rate of the epidemic or its basic reproduction number. Markovian epidemic models are stochastic models belonging to a wide class of pure jump processes known as Stochastic Reaction Networks (SRNs), which are intended to describe the time evolution of interacting particle systems where one particle interacts with the others through a finite set of reaction channels. SRNs have been mainly developed to model biochemical reactions but they also have applications in neural networks, virus kinetics, and dynamics of social networks, among others. This PhD thesis is focused on novel fast simulation algorithms and statistical inference methods for SRNs. Our novel Multi-level Monte Carlo (MLMC) hybrid simulation algorithms provide accurate estimates of expected values of a given observable of SRNs at a prescribed final time. They are designed to control the global approximation error up to a user-selected accuracy and up to a certain confidence level, and with near optimal computational work. We also present novel dual-weighted residual expansions for fast estimation of weak and strong errors arising from the MLMC methodology. Regarding the statistical inference
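As a concrete illustration of the forward problem for a Stochastic Reaction Network, the sketch below simulates an SIR epidemic with Gillespie's exact stochastic simulation algorithm and estimates the expected number of infected individuals by plain Monte Carlo (a baseline estimator, not the thesis's MLMC scheme; all rates and population sizes are illustrative):

```python
import random

def ssa_sir(s, i, r, beta, gamma, t_end, rng):
    """Gillespie's stochastic simulation algorithm for an SIR epidemic,
    a two-channel Stochastic Reaction Network:
    infection S + I -> 2I (propensity beta*S*I/N), recovery I -> R (gamma*I)."""
    t, n = 0.0, s + i + r
    while i > 0:
        a1 = beta * s * i / n          # infection propensity
        a2 = gamma * i                 # recovery propensity
        a0 = a1 + a2
        t += rng.expovariate(a0)       # exponential waiting time to next event
        if t > t_end:
            break
        if rng.random() * a0 < a1:     # choose which reaction fires
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
    return s, i, r

rng = random.Random(1)
# crude Monte Carlo estimate of the expected number infected at t = 1
mean_i = sum(ssa_sir(95, 5, 0, 2.0, 1.0, 1.0, rng)[1] for _ in range(200)) / 200
```

Note that the sudden-extinction phenomenon mentioned above appears here naturally: whenever `i` hits zero the simulation stops, an outcome a deterministic ODE model cannot produce.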
Analysis of a Stochastic Chemical System Close to a SNIPER Bifurcation of Its Mean-Field Model
Erban, Radek; Chapman, S. Jonathan; Kevrekidis, Ioannis G.; Vejchodský, Tomáš
2009-01-01
A framework for the analysis of stochastic models of chemical systems for which the deterministic mean-field description is undergoing a saddle-node infinite period (SNIPER) bifurcation is presented. Such a bifurcation occurs, for example
Gottwald, Georg; Melbourne, Ian
2013-04-01
Whereas diffusion limits of stochastic multi-scale systems have a long and successful history, the case of constructing stochastic parametrizations of chaotic deterministic systems has been much less studied. We present rigorous results of convergence of a chaotic slow-fast system to a stochastic differential equation with multiplicative noise. Furthermore we present rigorous results for chaotic slow-fast maps, occurring as numerical discretizations of continuous time systems. This raises the issue of how to interpret certain stochastic integrals; surprisingly the resulting integrals of the stochastic limit system are generically neither of Stratonovich nor of Ito type in the case of maps. It is shown that the limit system of a numerical discretisation is different to the associated continuous time system. This has important consequences when interpreting the statistics of long time simulations of multi-scale systems - they may be very different to the one of the original continuous time system which we set out to study.
Stochastic resonance and coherence resonance in groundwater-dependent plant ecosystems.
Borgogno, Fabio; D'Odorico, Paolo; Laio, Francesco; Ridolfi, Luca
2012-01-21
Several studies have shown that non-linear deterministic dynamical systems forced by external random components can give rise to unexpectedly regular temporal behaviors. Stochastic resonance and coherence resonance, the two best known processes of this type, have been studied in a number of physical and chemical systems. Here, we explore their possible occurrence in the dynamics of groundwater-dependent plant ecosystems. To this end, we develop two eco-hydrological models, which allow us to demonstrate that stochastic and coherence resonance may emerge in the dynamics of phreatophyte vegetation, depending on their deterministic properties and the intensity of external stochastic drivers. Copyright © 2011 Elsevier Ltd. All rights reserved.
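The mechanism can be illustrated with the textbook bistable system rather than the paper's eco-hydrological models: a particle in a double-well potential, weakly forced at a fixed frequency, responds most coherently at an intermediate noise level. A minimal Euler-Maruyama sketch (all parameters are illustrative, not taken from the paper):

```python
import math, random

def response(noise, a=0.1, omega=2 * math.pi / 100.0, dt=0.01, steps=100_000, seed=0):
    """Euler-Maruyama integration of dx = (x - x**3 + a*cos(omega*t)) dt + noise dW:
    an overdamped particle in a double-well potential with weak periodic forcing.
    Returns the time-averaged correlation of x(t) with the forcing signal,
    a crude proxy for the coherence of the response."""
    rng = random.Random(seed)
    x, corr = -1.0, 0.0
    for k in range(steps):
        t = k * dt
        x += (x - x ** 3 + a * math.cos(omega * t)) * dt \
             + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        corr += x * math.cos(omega * t)
    return corr / steps

# with weak noise the particle rattles around one well and barely responds;
# near-optimal noise lets it hop between wells roughly in phase with the forcing
weak, near_optimal = response(0.05), response(0.45)
```

The resonance is in the non-monotonic dependence on `noise`: too little and the inter-well hops never happen, too much and they lose phase with the forcing.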
International Nuclear Information System (INIS)
Bisognano, J.; Leemann, C.
1982-03-01
Stochastic cooling is the damping of betatron oscillations and momentum spread of a particle beam by a feedback system. In its simplest form, a pickup electrode detects the transverse positions or momenta of particles in a storage ring, and the signal produced is amplified and applied downstream to a kicker. The time delay of the cable and electronics is designed to match the transit time of particles along the arc of the storage ring between the pickup and kicker so that an individual particle receives the amplified version of the signal it produced at the pickup. If there were only a single particle in the ring, it is obvious that betatron oscillations and momentum offset could be damped. However, in addition to its own signal, a particle receives signals from other beam particles. In the limit of an infinite number of particles, no damping could be achieved; we have Liouville's theorem with constant density of the phase space fluid. For a finite, albeit large number of particles, there remains a residue of the single particle damping which is of practical use in accumulating low phase space density beams of particles such as antiprotons. It was the realization of this fact that led to the invention of stochastic cooling by S. van der Meer in 1968. Since its conception, stochastic cooling has been the subject of much theoretical and experimental work. The earliest experiments were performed at the ISR in 1974, with the subsequent ICE studies firmly establishing the stochastic cooling technique. This work directly led to the design and construction of the Antiproton Accumulator at CERN and the beginnings of proton-antiproton colliding beam physics at the SPS. Experiments in stochastic cooling have been performed at Fermilab in collaboration with LBL, and a design is currently under development for an antiproton accumulator for the Tevatron
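A toy numerical model conveys the particle-number dependence described above. Under an idealized perfect-mixing assumption, kicking every particle with the mean signal of its pickup sample shrinks the beam variance by roughly (2g - g^2)/N_s per turn, so cooling slows as the sample grows; the sketch below is a caricature of the physics, not an accelerator code:

```python
import random

def cool(n, sample_size, gain, turns, seed=0):
    """Toy stochastic cooling with perfect mixing: each turn the beam is
    shuffled into pickup samples and every particle is kicked by -gain times
    its sample mean (its own signal plus the other particles' noise).
    Returns final/initial variance; the damping per turn scales like
    (2*gain - gain**2) / sample_size."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    v0 = sum(xi * xi for xi in x) / n
    for _ in range(turns):
        rng.shuffle(x)                            # perfect mixing between turns
        for s in range(0, n, sample_size):
            block = x[s:s + sample_size]
            m = sum(block) / len(block)           # pickup signal for this sample
            for k, xk in enumerate(block):
                x[s + k] = xk - gain * m
    return (sum(xi * xi for xi in x) / n) / v0

few = cool(1024, sample_size=8, gain=1.0, turns=200)     # few particles per sample
many = cool(1024, sample_size=128, gain=1.0, turns=200)  # same gain, slower cooling
```

In the limit of an infinite sample the kick removes only the particle's own vanishing share of the signal and no cooling survives, which is the Liouville argument above in discrete form.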
Deterministic and unambiguous dense coding
International Nuclear Information System (INIS)
Wu Shengjun; Cohen, Scott M.; Sun Yuqing; Griffiths, Robert B.
2006-01-01
Optimal dense coding using a partially entangled pure state of Schmidt rank D and a noiseless quantum channel of dimension D is studied both in the deterministic case where at most L_d messages can be transmitted with perfect fidelity, and in the unambiguous case where when the protocol succeeds (probability τ_x) Bob knows for sure that Alice sent message x, and when it fails (probability 1-τ_x) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For D≤D a bound is obtained for L_d in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. For D>D it is shown that L_d is strictly less than D^2 unless D is an integer multiple of D, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for D≤D, assuming τ_x>0 for a set of DD messages, and a bound is obtained for the average. A bound on the average requires an additional assumption of encoding by isometries (unitaries when D=D) that are orthogonal for different messages. Both bounds are saturated when τ_x is a constant independent of x, by a protocol based on one-shot entanglement concentration. For D>D it is shown that (at least) D^2 messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially entangled states, including noisy (mixed) states
Deterministic quantitative risk assessment development
Energy Technology Data Exchange (ETDEWEB)
Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)
2009-07-01
Current risk assessment practice in pipeline integrity management is to use a semi-quantitative index-based or model based methodology. This approach has been found to be very flexible and provide useful results for identifying high risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies as required. (author)
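The calibration idea, stripped to its core, is a rescaling of relative likelihood scores so that their exposure-weighted average reproduces the failure rate observed on peer systems. A minimal sketch with hypothetical numbers (the paper's actual distribution-mapping techniques are more elaborate):

```python
def calibrate(scores, peer_rate_per_km_yr, lengths_km):
    """Rescale relative semi-quantitative likelihood scores to absolute failure
    frequencies so that the length-weighted average matches the rate observed
    on peer pipeline systems (all numbers here are hypothetical)."""
    weighted = sum(s * L for s, L in zip(scores, lengths_km)) / sum(lengths_km)
    k = peer_rate_per_km_yr / weighted          # calibration constant
    return [k * s for s in scores]

# three segments with index scores 20, 50, 90 and a peer rate of 1e-4 per km-yr
rates = calibrate([20.0, 50.0, 90.0], 1e-4, [10.0, 5.0, 2.0])
```

The relative ranking produced by the semi-quantitative program is preserved; only the absolute scale is anchored to peer data.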
New Exact Solutions for the Wick-Type Stochastic Kudryashov–Sinelshchikov Equation
International Nuclear Information System (INIS)
Ray, S. Saha; Singh, S.
2017-01-01
In this article, exact solutions of the Wick-type stochastic Kudryashov–Sinelshchikov equation have been obtained by using the improved sub-equation method. We have used the Hermite transform to transform the Wick-type stochastic Kudryashov–Sinelshchikov equation into a deterministic partial differential equation, and we have applied the inverse Hermite transform to obtain a set of stochastic solutions in the white noise space. (paper)
Mild Solutions of Neutral Stochastic Partial Functional Differential Equations
Directory of Open Access Journals (Sweden)
T. E. Govindan
2011-01-01
Full Text Available This paper studies the existence and uniqueness of a mild solution for a neutral stochastic partial functional differential equation using a local Lipschitz condition. When the neutral term is zero and even in the deterministic special case, the result obtained here appears to be new. An example is included to illustrate the theory.
Stochastic Analysis of Differential GPS Surveys for Earth Dam ...
African Journals Online (AJOL)
In GPS measurement, we model not just the deterministic part of the measurement but also account for its stochastic behavior using the measurement variance-covariance matrix. The variance-covariance matrices are computed as part of a least squares adjustment. In this study, the results of GPS survey by ...
DEFF Research Database (Denmark)
Chon, K H; Hoyer, D; Armoundas, A A
1999-01-01
In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate ...
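The two-step structure (deterministic fit first, then re-estimation with the prediction error fed back as an input) can be sketched with plain least squares standing in for the recurrent neural network; this is essentially a Hannan-Rissanen-style iteration for an ARMA(1,1) model, with illustrative parameter values:

```python
import random

def fit_arma11(y, iters=5):
    """Two-step ARMA(1,1) estimation: first fit the deterministic AR part by
    least squares, then re-estimate using the lagged prediction error as an
    extra regressor (a plain least-squares stand-in for the paper's
    recurrent-neural-network estimator)."""
    ts = range(1, len(y))
    # step 1 (deterministic): AR(1) least-squares fit
    a = sum(y[t] * y[t-1] for t in ts) / sum(y[t-1] ** 2 for t in ts)
    b = 0.0
    for _ in range(iters):
        e = [0.0]                                   # prediction errors under (a, b)
        for t in ts:
            e.append(y[t] - a * y[t-1] - b * e[t-1])
        # step 2 (stochastic): regress y_t on [y_{t-1}, e_{t-1}], normal equations
        s11 = sum(y[t-1] ** 2 for t in ts)
        s12 = sum(y[t-1] * e[t-1] for t in ts)
        s22 = sum(e[t-1] ** 2 for t in ts)
        r1 = sum(y[t] * y[t-1] for t in ts)
        r2 = sum(y[t] * e[t-1] for t in ts)
        det = s11 * s22 - s12 * s12
        a, b = (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det
    return a, b

rng = random.Random(7)
e_true, y = [0.0], [0.0]
for _ in range(4000):                # simulate y_t = 0.6 y_{t-1} + 0.3 e_{t-1} + e_t
    eps = rng.gauss(0.0, 1.0)
    y.append(0.6 * y[-1] + 0.3 * e_true[-1] + eps)
    e_true.append(eps)
a_hat, b_hat = fit_arma11(y)
```

The iteration mirrors the abstract's loop: each pass recomputes the prediction error from the current parameter estimates and feeds it back into the next fit.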
Stochastic p-Bits for Invertible Logic
Camsari, Kerem Yunus; Faria, Rafatul; Sutton, Brian M.; Datta, Supriyo
2017-07-01
Conventional semiconductor-based logic and nanomagnet-based memory devices are built out of stable, deterministic units such as standard metal-oxide semiconductor transistors, or nanomagnets with energy barriers in excess of ≈40-60 kT. In this paper, we show that unstable, stochastic units, which we call "p-bits," can be interconnected to create robust correlations that implement precise Boolean functions with impressive accuracy, comparable to standard digital circuits. At the same time, they are invertible, a unique property that is absent in standard digital circuits. When operated in the direct mode, the input is clamped, and the network provides the correct output. In the inverted mode, the output is clamped, and the network fluctuates among all possible inputs that are consistent with that output. First, we present a detailed implementation of an invertible gate to bring out the key role of a single three-terminal transistorlike building block to enable the construction of correlated p-bit networks. The results for this specific, CMOS-assisted nanomagnet-based hardware implementation agree well with those from a universal model for p-bits, showing that p-bits need not be magnet based: any three-terminal tunable random bit generator should be suitable. We present a general algorithm for designing a Boltzmann machine (BM) with a symmetric connection matrix [J] (J_ij = J_ji) that implements a given truth table with p-bits. The [J] matrices are relatively sparse with a few unique weights for convenient hardware implementation. We then show how BM full adders can be interconnected in a partially directed manner (J_ij ≠ J_ji) to implement large logic operations such as 32-bit binary addition. Hundreds of stochastic p-bits get precisely correlated such that the correct answer out of 2^33 (≈8×10^9) possibilities can be extracted by looking at the statistical mode or majority vote of a number of time samples. With perfect directivity (J_ji = 0) a small
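The direct and inverted modes can be reproduced in a few lines of software: Gibbs-sample three p-bits under a weight matrix whose ground states are exactly the AND truth table (the matrix below is one valid choice, verified by enumeration, not necessarily the paper's). Clamping the inputs pins the output; clamping the output leaves the network fluctuating over the consistent inputs:

```python
import math, random

J = [[0, -1, 2], [-1, 0, 2], [2, 2, 0]]   # one weight matrix whose ground states
H = [1, 1, -2]                            # are exactly the AND truth table

def sample(clamped, beta=2.0, sweeps=5000, seed=0):
    """Gibbs-sample a 3-p-bit Boltzmann machine, spins in {-1, +1};
    `clamped` maps spin index -> fixed value. Returns visit counts per state."""
    rng = random.Random(seed)
    s = [clamped.get(i, 1) for i in range(3)]
    counts = {}
    for _ in range(sweeps):
        for i in range(3):
            if i in clamped:
                continue
            field = H[i] + sum(J[i][j] * s[j] for j in range(3) if j != i)
            # p-bit update: sigmoid of the local field decides the next state
            s[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-2 * beta * field)) else -1
        counts[tuple(s)] = counts.get(tuple(s), 0) + 1
    return counts

direct = sample({0: 1, 1: 1})    # direct mode: inputs clamped to (1, 1)
inverted = sample({2: -1})       # inverted mode: output clamped to 0 (spin -1)
```

In the direct run the sampler sits almost entirely on (1, 1, 1); in the inverted run it wanders over the three input pairs whose AND is 0, visiting the inconsistent (1, 1) input only with exponentially small weight.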
Eichhorn, Ralf; Aurell, Erik
2014-04-01
'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response
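The trajectory-wise first law is easy to verify numerically. For an overdamped particle in a dragged harmonic trap, the explicit-time part of dU is the work and the Stratonovich product (∂U/∂x) ∘ dx is the heat exchanged with the bath, so ΔU = W + Q holds pathwise; a minimal sketch with γ = kT = 1 and illustrative parameters (not drawn from any cited work):

```python
import math, random

def dragged_trap(k=1.0, v=0.5, dt=1e-3, steps=20_000, seed=3):
    """Sekimoto-style stochastic energetics for an overdamped particle in a
    harmonic trap U(x, t) = k/2 (x - v t)**2 dragged at speed v, with
    gamma = kT = 1:  dx = -k (x - v t) dt + sqrt(2) dW.
    Work input:    dW_in = (dU/dt) dt     (explicit time dependence)
    Heat from bath: dQ   = (dU/dx) o dx   (Stratonovich midpoint product)
    Returns (W, Q, dU); the first law reads dU = W + Q along the trajectory."""
    rng = random.Random(seed)
    x, W, Q = 0.0, 0.0, 0.0
    for n in range(steps):
        t = n * dt
        dx = -k * (x - v * t) * dt + math.sqrt(2.0 * dt) * rng.gauss(0.0, 1.0)
        x_mid, t_mid = x + 0.5 * dx, t + 0.5 * dt
        W += -k * v * (x_mid - v * t_mid) * dt   # (dU/dt) at the midpoint
        Q += k * (x_mid - v * t_mid) * dx        # (dU/dx) o dx, midpoint rule
        x += dx
    dU = 0.5 * k * (x - v * steps * dt) ** 2     # trap started centered at x = 0
    return W, Q, dU

W, Q, dU = dragged_trap()
```

Because U is quadratic, the midpoint (Stratonovich) discretization makes the energy balance exact up to floating-point roundoff, which is a convenient numerical check of the chain-rule derivation behind the trajectory-wise first law.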
Fisher-Wright model with deterministic seed bank and selection.
Koopmann, Bendix; Müller, Johannes; Tellier, Aurélien; Živković, Daniel
2017-04-01
Seed banks are common to many plant species and allow the storage of genetic diversity in the soil as dormant seeds for various periods of time. We investigate an above-ground population following a Fisher-Wright model with selection coupled with a deterministic seed bank, assuming that the length of the seed bank is kept constant and the number of seeds is large. To assess the combined impact of seed banks and selection on genetic diversity, we derive a general diffusion model. The applied techniques outline a path for approximating a stochastic delay differential equation by an appropriately rescaled stochastic differential equation. We compute the equilibrium solution of the site-frequency spectrum and derive the times to fixation of an allele with and without selection. Finally, it is demonstrated that seed banks enhance the effect of selection on the site-frequency spectrum while slowing down the time until the mutation-selection equilibrium is reached. Copyright © 2016 Elsevier Inc. All rights reserved.
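A caricature of the model can be simulated directly: each generation germinates from the allele frequencies of the previous m generations (the deterministic seed bank), selection acts on germination, and drift enters through binomial sampling. All parameters below are illustrative, and the diffusion-limit machinery of the paper is not reproduced:

```python
import random

def wf_seedbank(n=500, m=4, s=0.02, x0=0.1, gens=400, seed=11):
    """Wright-Fisher-type forward simulation with a deterministic seed bank of
    fixed length m: each generation germinates from the average allele
    frequency of the previous m generations, selection (coefficient s) acts on
    germination, and drift enters through binomial sampling of n individuals."""
    rng = random.Random(seed)
    hist = [x0] * m
    for _ in range(gens):
        x = sum(hist[-m:]) / m                        # seed-bank averaged frequency
        x = x * (1 + s) / (x * (1 + s) + (1 - x))     # selection on germination
        k = sum(1 for _ in range(n) if rng.random() < x)
        hist.append(k / n)                            # binomial drift
    return hist

traj = wf_seedbank()
```

Setting m = 1 recovers the ordinary Wright-Fisher model; increasing m averages drift over more generations, which is the "storage" effect that slows the approach to equilibrium in the paper.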
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. By contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.
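A stripped-down version of the generative rule shows the deterministic character: with node positions fixed, ripples expand at unit speed, activate nodes whose incoming amplitude still exceeds a threshold, and activated nodes re-emit decayed ripples. The event-driven sketch below (rules and parameters simplified from the paper's model) always yields the same topology for the same input:

```python
import heapq, math

def ripple_network(points, a0=1.0, decay=0.7, threshold=0.25):
    """Simplified deterministic ripple-spreading generator: node 0 emits a
    unit-speed circular ripple; a ripple whose amplitude (emitted value divided
    by distance travelled) still exceeds `threshold` when it first reaches an
    unvisited node links that node to the emitter, and the newly activated node
    re-emits a ripple with amplitude decayed by `decay`."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    edges, visited, heap = [], {0}, []

    def emit(i, t0, amp):
        for k in range(len(points)):
            if k not in visited:
                r = dist(points[i], points[k])      # distance this ripple travels
                if r > 0 and amp / r >= threshold:
                    heapq.heappush(heap, (t0 + r, i, k, amp))

    emit(0, 0.0, a0)
    while heap:
        t, src, j, amp = heapq.heappop(heap)        # earliest ripple arrival
        if j in visited:
            continue
        visited.add(j)
        edges.append((src, j))
        emit(j, t, amp * decay)
    return edges

pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (5.0, 0.0)]
net = ripple_network(pts)
```

Randomness, as the abstract notes, would enter only through the initialization (node positions, initial amplitude, decay), never through the spreading rule itself.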
Stochastic failure modelling of unidirectional composite ply failure
International Nuclear Information System (INIS)
Whiteside, M.B.; Pinho, S.T.; Robinson, P.
2012-01-01
Stochastic failure envelopes are generated through parallelised Monte Carlo simulation of physically based failure criteria for unidirectional carbon fibre/epoxy matrix composite plies. Two examples are presented to demonstrate the consequences for failure prediction of both the statistical interaction of failure modes and uncertainty in global misalignment. Global variance-based Sobol sensitivity indices are computed to decompose the observed variance within the stochastic failure envelopes into contributions from physical input parameters. The paper highlights a selection of the potential advantages stochastic methodologies offer over the traditional deterministic approach.
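The Monte Carlo machinery can be illustrated with a deliberately simple maximum-stress criterion in place of the paper's physically based criteria: sample strengths and a global misalignment angle, and record the failure stress governed by whichever mode is critical in each draw. All values are illustrative, not calibrated material data:

```python
import math, random

def mc_failure(n=2000, seed=5):
    """Monte Carlo failure prediction for a unidirectional ply under uniaxial
    tension with a simple maximum-stress criterion (a stand-in for the paper's
    physically based criteria). Longitudinal strength, in-plane shear strength
    and a global fibre-misalignment angle are sampled per draw; units are MPa
    and all distributions are illustrative."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        xt = rng.gauss(2000.0, 120.0)            # fibre-direction strength
        s12 = rng.gauss(80.0, 6.0)               # in-plane shear strength
        phi = math.radians(rng.gauss(0.0, 1.5))  # global misalignment angle
        c, s = math.cos(phi), math.sin(phi)
        sigma1, tau = c * c, abs(s * c)          # stresses per unit applied stress
        samples.append(min(xt / sigma1, s12 / max(tau, 1e-9)))
    return samples

samples = mc_failure()
mean_strength = sum(samples) / len(samples)
```

Even this toy version exhibits the mode interaction the abstract describes: draws with small misalignment fail in fibre tension, while the tail of the misalignment distribution switches the governing mode to shear and drags down the low quantiles of the envelope.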
Deterministic secure communication protocol without using entanglement
Cai, Qing-yu
2003-01-01
We show a deterministic secure direct communication protocol using a single qubit in a mixed state. The security of this protocol is based on the security proof of the BB84 protocol. It can be realized with current technologies.
International Nuclear Information System (INIS)
Majidi, Majid; Nojavan, Sayyad; Zare, Kazem
2017-01-01
Highlights: • On-grid photovoltaic/battery/fuel cell system is considered as hybrid system. • Thermal and electrical operation of hybrid energy system is studied. • Hybrid energy system is used to reduce dependency on upstream grid for load serving. • Demand response program is proposed to manage the electrical load. • Demand response program is proposed to reduce hybrid energy system's operation cost. - Abstract: In this paper, the cost-efficient operation problem of a photovoltaic/battery/fuel cell hybrid energy system has been evaluated in the presence of a demand response program. Each load curve has off-peak, mid-peak and peak time periods in which the energy prices are different. The demand response program transfers some amount of load from peak periods to other periods to flatten the load curve and minimize total cost. So, the main goal is to meet the energy demand and propose a cost-efficient approach to minimize the system's total cost, including the system's electrical cost and thermal cost and the revenue from exporting power to the upstream grid. A battery has been utilized as an electrical energy storage system and a heat storage tank is used as a thermal energy storage system to save energy in off-peak and mid-peak hours and then supply load in peak hours, which leads to a reduction of cost. The proposed cost-efficient operation problem of the photovoltaic/battery/fuel cell hybrid energy system is modeled as a mixed-integer linear program and solved by the General Algebraic Modeling System (GAMS) optimization software under the CPLEX solver. Two case studies are investigated to show the effects of the demand response program on reduction of total cost.
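The load-shifting idea reduces to moving energy from high-price to low-price periods; the full paper formulates this as a mixed-integer linear program, but a one-move toy version already shows the cost mechanism (the loads and prices below are made up for illustration):

```python
def shift_load(load, price, fraction=0.2):
    """Toy demand-response program: move `fraction` of the energy consumed in
    the highest-price period to the lowest-price period and compare costs.
    Loads (kWh) and prices ($/kWh) are illustrative, not from the paper."""
    hi = max(range(len(price)), key=price.__getitem__)
    lo = min(range(len(price)), key=price.__getitem__)
    shifted = list(load)
    moved = fraction * shifted[hi]
    shifted[hi] -= moved                 # flatten the peak
    shifted[lo] += moved                 # refill in the cheap period
    cost = lambda xs: sum(x * p for x, p in zip(xs, price))
    return cost(load), cost(shifted)

# off-peak, mid-peak and peak periods
before, after = shift_load([30.0, 50.0, 90.0], [0.05, 0.10, 0.18])
```

Total energy served is unchanged; only its timing moves, which is why the program lowers cost without curtailing demand.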
DEFF Research Database (Denmark)
Govindan, Kannan; Jafarian, Ahmad; Nourbakhsh, Vahid
2015-01-01
... simultaneously considering the sustainable OAP in the sustainable SCND as a strategic decision. The proposed supply chain network is composed of five echelons, including suppliers classified in different classes, plants, and distribution centers that dispatch products via two different ways, direct shipment ..., a novel multi-objective hybrid approach called MOHEV with two strategies for its best particle selection procedure (BPSP), minimum distance and crowding distance, is proposed. MOHEV is constructed through hybridization of two multi-objective algorithms, namely the adapted multi-objective electromagnetism ...
Deterministic chaos in the processor load
International Nuclear Information System (INIS)
Halbiniak, Zbigniew; Jozwiak, Ireneusz J.
2007-01-01
In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor working on the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case
Crisan, Dan
2011-01-01
"Stochastic Analysis" aims to provide mathematical tools to describe and model high dimensional random systems. Such tools arise in the study of Stochastic Differential Equations and Stochastic Partial Differential Equations, Infinite Dimensional Stochastic Geometry, Random Media and Interacting Particle Systems, Super-processes, Stochastic Filtering, Mathematical Finance, etc. Stochastic Analysis has emerged as a core area of late 20th century Mathematics and is currently undergoing a rapid scientific development. The special volume "Stochastic Analysis 2010" provides a sa
Optimal Control Inventory Stochastic With Production Deteriorating
Affandi, Pardi
2018-01-01
In this paper, we use an optimal control approach to determine the optimal production rate. Most inventory-production models deal with a single item. We first build a stochastic mathematical model of the inventory, in which we also assume that the items are kept in the same store. The mathematical model of the inventory problem can be deterministic or stochastic. This research discusses how to formulate the stochastic model and how to solve the inventory model using optimal control techniques. The main tool in studying the problem is the necessary optimality conditions in the form of the Pontryagin maximum principle, which involves the Hamiltonian function. We thus obtain the optimal production rate in a production-inventory system where items are subject to deterioration.
Stochastic quantum mechanics and quantum spacetime
International Nuclear Information System (INIS)
Prugovecki, E.
1984-01-01
This monograph's principal intent is to provide a systematic and self-contained introduction to an alternative unification of relativity with quantum theory based on stochastic phase spaces and stochastic geometries, and presented at a level accessible to graduate students in theoretical and mathematical physics as well as to professional physicists and mathematicians. The proposed framework for unification embraces classical as well as quantum theories by implementing an epistemic idea first put forth by M. Born, namely that all physical theories should be formulated in terms of stochastic rather than deterministic values for measurable quantities. The framework gives rise to a whole range of yet unresearched problems, whose solutions are bound to shed some light on the relationship between relativity and quantum theory at the most fundamental physical and mathematical levels. (Auth.)
Stochastic partial differential equations an introduction
Liu, Wei
2015-01-01
This book provides an introduction to the theory of stochastic partial differential equations (SPDEs) of evolutionary type. SPDEs are one of the main research directions in probability theory with several wide ranging applications. Many types of dynamics with stochastic influence in nature or man-made complex systems can be modelled by such equations. The theory of SPDEs is based both on the theory of deterministic partial differential equations, as well as on modern stochastic analysis. Whilst this volume mainly follows the ‘variational approach’, it also contains a short account on the ‘semigroup (or mild solution) approach’. In particular, the volume contains a complete presentation of the main existence and uniqueness results in the case of locally monotone coefficients. Various types of generalized coercivity conditions are shown to guarantee non-explosion, but also a systematic approach to treat SPDEs with explosion in finite time is developed. It is, so far, the only book where the latter and t...
Reserves and cash flows under stochastic retirement
DEFF Research Database (Denmark)
Gad, Kamille Sofie Tågholt; Nielsen, Jeppe Woetmann
2016-01-01
Uncertain time of retirement and uncertain structure of retirement benefits are risk factors for life insurance companies. Nevertheless, classical life insurance models assume these are deterministic. In this paper, we include the risk from stochastic time of retirement and stochastic benefit structure in a classical finite-state Markov model for a life insurance contract. We include discontinuities in the distribution of the retirement time. First, we derive formulas for appropriate scaling of the benefits according to the time of retirement and discuss the link between the scaling and the guarantees provided. Stochastic retirement creates a need to rethink the construction of disability products for high ages and ways to handle this are discussed. We show how to calculate market reserves and how to use modified transition probabilities to calculate expected cash flows without significantly ...
Borodin, Andrei N
2017-01-01
This book provides a rigorous yet accessible introduction to the theory of stochastic processes. A significant part of the book is devoted to the classic theory of stochastic processes. In turn, it also presents proofs of well-known results, sometimes together with new approaches. Moreover, the book explores topics not previously covered elsewhere, such as distributions of functionals of diffusions stopped at different random times, the Brownian local time, diffusions with jumps, and an invariance principle for random walks and local times. Supported by carefully selected material, the book showcases a wealth of examples that demonstrate how to solve concrete problems by applying theoretical results. It addresses a broad range of applications, focusing on concrete computational techniques rather than on abstract theory. The content presented here is largely self-contained, making it suitable for researchers and graduate students alike.
Risk-based and deterministic regulation
International Nuclear Information System (INIS)
Fischer, L.E.; Brown, N.W.
1995-07-01
Both risk-based and deterministic methods are used for regulating the nuclear industry to protect public safety and health from undue risk. The deterministic method specifies performance standards for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events, which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining when risk-based methods should supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose.
Hybrid SN/Monte Carlo research and results
International Nuclear Information System (INIS)
Baker, R.S.
1993-01-01
The neutral particle transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. The Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid Monte Carlo/S_N method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The hybrid method has been successfully applied to realistic shielding problems. The vectorized Monte Carlo algorithm in the hybrid method has been ported to the massively parallel architecture of the Connection Machine. Comparisons of performance on a vector machine (Cray Y-MP) and the Connection Machine (CM-2) show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when realistic problems requiring variance reduction are considered. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well
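Setting the coupling machinery aside, the stochastic half of such a hybrid can be illustrated by a minimal analog Monte Carlo transmission estimate checked against the deterministic attenuation law. All values below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# analog Monte Carlo transmission through a purely absorbing 1D slab; for a
# pure absorber the deterministic answer is exp(-sigma_t * thickness), so the
# MC tally can be verified against it (parameters are illustrative)
sigma_t, thickness, n = 1.0, 2.0, 100_000
flights = rng.exponential(1.0/sigma_t, n)     # distance to first collision
transmitted = float((flights > thickness).mean())
analytic = float(np.exp(-sigma_t*thickness))
```

In a real hybrid code the Monte Carlo tally on the region boundary would feed the S_N sweep (and vice versa) instead of being compared with a closed-form answer.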
Electricity Market Stochastic Dynamic Model and Its Mean Stability Analysis
Directory of Open Access Journals (Sweden)
Zhanhui Lu
2014-01-01
Full Text Available Based on the deterministic dynamic model of electricity market proposed by Alvarado, a stochastic electricity market model, considering the random nature of demand sides, is presented in this paper on the assumption that generator cost function and consumer utility function are quadratic functions. The stochastic electricity market model is a generalization of the deterministic dynamic model. Using the theory of stochastic differential equations, stochastic process theory, and eigenvalue techniques, the determining conditions of the mean stability for this electricity market model under small Gauss-type random excitation are provided and proved theoretically. That is, if the demand elasticity of suppliers is nonnegative and the demand elasticity of consumers is negative, then the stochastic electricity market model is mean stable. It implies that the stability can be judged directly from initial data without any computation. Taking deterministic electricity market data combined with small Gauss-type random excitation as numerical samples to interpret random phenomena from a statistical perspective, the results indicate the conclusions above are correct, valid, and practical.
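The notion of mean stability can be illustrated on a scalar linear SDE, whose ensemble mean obeys exactly the deterministic equation, so stability of the mean reduces to stability of the deterministic model. The coefficients below are assumptions for the sketch, not the Alvarado model:

```python
import numpy as np

rng = np.random.default_rng(9)
# scalar linear SDE dx = A*x dt + s dW: the ensemble mean obeys the
# deterministic equation dm/dt = A*m, so mean stability is decided by sign(A)
A, s, x0 = -0.5, 0.2, 1.0                  # illustrative coefficients
dt, steps, npaths = 0.01, 200, 2000
x = np.full(npaths, x0)
for _ in range(steps):
    x += A*x*dt + s*np.sqrt(dt)*rng.standard_normal(npaths)
mean_x = float(x.mean())
det = x0*np.exp(A*steps*dt)                # deterministic mean at the same time
```

With A < 0 the Euler–Maruyama ensemble mean tracks the decaying deterministic solution; individual paths still fluctuate with stationary variance s²/(2|A|).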
Front propagation and clustering in the stochastic nonlocal Fisher equation
Ganan, Yehuda A.; Kessler, David A.
2018-04-01
In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction range and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system where the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also predicts the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.
Adaptation in stochastic environments
Clark, Colin
1993-01-01
The classical theory of natural selection, as developed by Fisher, Haldane, and Wright, and their followers, is in a sense a statistical theory. By and large the classical theory assumes that the underlying environment in which evolution transpires is both constant and stable - the theory is in this sense deterministic. In reality, on the other hand, nature is almost always changing and unstable. We do not yet possess a complete theory of natural selection in stochastic environments. Perhaps it has been thought that such a theory is unimportant, or that it would be too difficult. Our own view is that the time is now ripe for the development of a probabilistic theory of natural selection. The present volume is an attempt to provide an elementary introduction to this probabilistic theory. Each author was asked to contribute a simple, basic introduction to his or her specialty, including lively discussions and speculation. We hope that the book contributes further to the understanding of the roles of "Cha...
A stochastic framework for clearing of reactive power market
International Nuclear Information System (INIS)
Amjady, N.; Rabiee, A.; Shayanfar, H.A.
2010-01-01
This paper presents a new stochastic framework for clearing of day-ahead reactive power market. The uncertainty of generating units in the form of system contingencies is considered in the reactive power market-clearing procedure by the stochastic model in two steps. The Monte-Carlo Simulation (MCS) is first used to generate random scenarios. Then, in the second step, the stochastic market-clearing procedure is implemented as a series of deterministic optimization problems (scenarios) including non-contingent scenario and different post-contingency states. In each of these deterministic optimization problems, the objective function is total payment function (TPF) of generators which refers to the payment paid to the generators for their reactive power compensation. The effectiveness of the proposed model is examined based on the IEEE 24-bus Reliability Test System (IEEE 24-bus RTS). (author)
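The two-step scheme — Monte-Carlo scenario generation followed by a deterministic optimization per scenario — can be shown in miniature, with a trivial merit-order dispatch standing in for the TPF optimization. All unit data, outage probability, and demand below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
# miniature two-step clearing: MCS scenario generation (random unit outages)
# followed by a deterministic dispatch per scenario (all numbers invented)
cap  = np.array([100.0, 80.0, 60.0])      # unit capacities
cost = np.array([10.0, 20.0, 40.0])       # unit marginal costs
p_out, demand, n_scen = 0.05, 150.0, 1000

def dispatch(avail):
    """Deterministic merit-order dispatch for one availability scenario."""
    met, total = 0.0, 0.0
    for c, co, a in sorted(zip(cap, cost, avail), key=lambda u: u[1]):
        g = min(c if a else 0.0, demand - met)
        met += g
        total += g*co
    return total

scenarios = rng.random((n_scen, 3)) > p_out      # True = unit available
expected_cost = float(np.mean([dispatch(s) for s in scenarios]))
```

The non-contingent scenario here costs 2000 (100 MW at 10 plus 50 MW at 20); averaging over random outages pushes the expected cost above that floor, mimicking how post-contingency states raise the cleared payment.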
Hopf Bifurcation of Compound Stochastic van der Pol System
Directory of Open Access Journals (Sweden)
Shaojuan Ma
2016-01-01
Full Text Available Hopf bifurcation analysis for a compound stochastic van der Pol system with a bounded random parameter and Gaussian white noise is investigated in this paper. By the Karhunen-Loeve (K-L) expansion and the orthogonal polynomial approximation, the equivalent deterministic van der Pol system can be deduced. Based on the bifurcation theory of nonlinear deterministic systems, the critical value of the bifurcation parameter is obtained, and the influence of the random strength δ and noise intensity σ on stochastic Hopf bifurcation in the compound stochastic system is discussed. Finally, we find that increasing δ shifts the critical value of the bifurcation parameter forward while increasing σ shifts it backward, and that the influence of δ is more sensitive than that of σ. The results are verified by numerical simulations.
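As a brute-force companion to the polynomial-approximation analysis, one can simulate a small ensemble of van der Pol oscillators whose damping parameter is randomly perturbed and check that the limit-cycle amplitude stays near the classical value 2. The nominal parameter, random strength, and time step are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
# ensemble of van der Pol oscillators x'' - mu*(1 - x^2)*x' + x = 0 with a
# randomly perturbed damping parameter mu (values are assumed)
mu0, delta, dt = 1.0, 0.1, 0.001
amps = []
for _ in range(5):
    mu = mu0 + delta*rng.standard_normal()    # random parameter draw
    x, v = 0.5, 0.0
    peak = 0.0
    for k in range(50_000):
        x, v = x + dt*v, v + dt*(mu*(1.0 - x*x)*v - x)
        if k > 25_000:                        # discard the transient
            peak = max(peak, abs(x))
    amps.append(peak)
```

This only probes the post-bifurcation limit cycle; the paper's K-L/orthogonal-polynomial machinery is what turns such a random-parameter system into an equivalent deterministic one amenable to bifurcation theory.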
Lyapunov functionals and stability of stochastic functional differential equations
Shaikhet, Leonid
2013-01-01
Stability conditions for functional differential equations can be obtained using Lyapunov functionals. Lyapunov Functionals and Stability of Stochastic Functional Differential Equations describes the general method of construction of Lyapunov functionals to investigate the stability of differential equations with delays. This work continues and complements the author’s previous book Lyapunov Functionals and Stability of Stochastic Difference Equations, where this method is described for discrete- and continuous-time difference equations. The text begins with a description of the peculiarities of deterministic and stochastic functional differential equations. There follow basic definitions for stability theory of stochastic hereditary systems, and a formal procedure of Lyapunov functionals construction is presented. Stability investigation is conducted for stochastic linear and nonlinear differential equations with constant and distributed delays. The proposed method is used for stability investigation of di...
Parameter-free resolution of the superposition of stochastic signals
Energy Technology Data Exchange (ETDEWEB)
Scholz, Teresa, E-mail: tascholz@fc.ul.pt [Center for Theoretical and Computational Physics, University of Lisbon (Portugal); Raischel, Frank [Center for Geophysics, IDL, University of Lisbon (Portugal); Closer Consulting, Av. Eng. Duarte Pacheco, Torre 1, 15º, 1070-101 Lisboa (Portugal); Lopes, Vitor V. [DEIO-CIO, University of Lisbon (Portugal); UTEC–Universidad de Ingeniería y Tecnología, Lima (Peru); Lehle, Bernd; Wächter, Matthias; Peinke, Joachim [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Lind, Pedro G. [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Institute of Physics, University of Osnabrück, Osnabrück (Germany)
2017-01-30
This paper presents a direct method to obtain the deterministic and stochastic contribution of the sum of two independent stochastic processes, one of which is an Ornstein–Uhlenbeck process and the other a general (non-linear) Langevin process. The method is able to distinguish between the stochastic processes, retrieving their corresponding stochastic evolution equations. This framework is based on a recent approach for the analysis of multidimensional Langevin-type stochastic processes in the presence of strong measurement (or observational) noise, which is here extended to impose neither constraints nor parameters and extract all coefficients directly from the empirical data sets. Using synthetic data, it is shown that the method yields satisfactory results.
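The underlying idea of reading evolution equations directly from data can be sketched with the first conditional-moment (Kramers–Moyal) estimator applied to a plain Ornstein–Uhlenbeck process; this is the building block of such reconstructions, not the paper's full two-process separation. The coefficients theta and sigma are the ground truth used to generate the synthetic data:

```python
import numpy as np

rng = np.random.default_rng(4)
# estimate the drift of an Ornstein-Uhlenbeck process from data via the
# first conditional moment D1(x) ~ <x(t+dt) - x(t) | x(t) = x> / dt
dt, n, theta, sigma = 0.01, 200_000, 1.0, 0.5
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i+1] = x[i] - theta*x[i]*dt + sigma*np.sqrt(dt)*rng.standard_normal()
edges = np.linspace(-1.0, 1.0, 21)
centers = 0.5*(edges[:-1] + edges[1:])
idx = np.digitize(x[:-1], edges) - 1
D1 = np.full(20, np.nan)
for k in range(20):
    m = idx == k
    if m.sum() > 200:
        D1[k] = (x[1:][m] - x[:-1][m]).mean()/dt   # binned <dx | x> / dt
ok = ~np.isnan(D1)
slope = float(np.polyfit(centers[ok], D1[ok], 1)[0])  # should approach -theta
```

The recovered drift is linear with slope close to -theta; the paper's contribution is doing this when the observed series is the sum of two processes, without parametric assumptions.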
Computational singular perturbation analysis of stochastic chemical systems with stiffness
Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; Najm, Habib N.
2017-04-01
Computational singular perturbation (CSP) is a useful method for analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum, deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at micro or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and associated algorithm, that can be used to not only construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.
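At the meso-scale the standard exact simulator for such stochastic reaction systems is Gillespie's SSA; a toy birth–death example (rates invented, no stiffness reduction attempted) shows the basic loop that SCSP-type methods aim to accelerate:

```python
import numpy as np

rng = np.random.default_rng(5)
# Gillespie SSA for a birth-death system: 0 -> X at rate k1, X -> 0 at rate
# k2*X; the stationary mean copy number is k1/k2 (rates are invented)
k1, k2 = 10.0, 1.0
x, t, t_end, burn = 0, 0.0, 2000.0, 100.0
tw_sum = tw_time = 0.0
while t < t_end:
    a1, a2 = k1, k2*x                       # reaction propensities
    a0 = a1 + a2
    tau = rng.exponential(1.0/a0)           # time to the next reaction
    if t > burn:                            # time-average after the transient
        w = min(tau, t_end - t)
        tw_sum += x*w
        tw_time += w
    t += tau
    x += 1 if rng.random() < a1/a0 else -1  # pick which reaction fired
mean_x = tw_sum/tw_time                     # estimate of k1/k2 = 10
```

Stiffness arises when propensities span many orders of magnitude, forcing tiny tau steps; that is the regime where a stochastic analogue of CSP pays off.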
Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm
Directory of Open Access Journals (Sweden)
V. D. Sulimov
2014-01-01
Full Text Available Modern methods for optimization investigation of complex systems are based on development and updating of the mathematical models of systems by solving the appropriate inverse problems. Input data desirable for solution are obtained from the analysis of experimentally defined consecutive characteristics for a system or a process. Causal characteristics are the sought ones, to which equation coefficients of mathematical models of the object, limit conditions, etc. belong. The optimization approach is one of the main ones to solve the inverse problems. In the main case it is necessary to find a global extremum of a not-everywhere-differentiable criterion function. Global optimization methods are widely used in problems of identification and computational diagnosis of systems as well as in optimal control, computing tomography, image restoration, training of neural networks, and other intelligence technologies. The increasingly complicated systems to be optimized observed during the last decades lead to more complicated mathematical models, thereby making solution of the appropriate extreme problems significantly more difficult. A great deal of practical applications may have problem conditions which restrict modeling. As a consequence, in inverse problems the criterion functions can be not everywhere differentiable and noisy. Available noise means that calculating the derivatives is difficult and unreliable. It results in using optimization methods that do not calculate derivatives. The efficiency of deterministic algorithms of global optimization is significantly restricted by their dependence on the extreme problem dimension. When the number of variables is large, stochastic global optimization algorithms are used. As stochastic algorithms yield too expensive solutions, this drawback restricts their applications. Developing hybrid algorithms that combine a stochastic algorithm for scanning the variable space with deterministic local search
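The deterministic local-search ingredient named in the title can be written compactly. This is a simplified textbook Hooke–Jeeves pattern search (not code from the paper), demonstrated on a smooth quadratic:

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-8, max_iter=10_000):
    """Derivative-free pattern search: exploratory moves along each
    coordinate, then a pattern move through the improved point."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        xe, fe = x.copy(), fx                 # exploratory phase around x
        for i in range(len(x)):
            for d in (step, -step):
                trial = xe.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fe:
                    xe, fe = trial, ft
                    break
        if fe < fx:
            xp = xe + (xe - x)                # pattern move (extrapolation)
            x, fx = xe, fe
            fp = f(xp)
            if fp < fx:
                x, fx = xp, fp
        else:
            step *= shrink                    # no improvement: refine the mesh
            if step < tol:
                break
    return x, fx

quad = lambda z: (z[0] - 3.0)**2 + (z[1] + 2.0)**2
xbest, fbest = hooke_jeeves(quad, [0.0, 0.0])
```

Because it needs only function values, this local search tolerates the noisy, not-everywhere-differentiable criteria described above; in a hybrid scheme it refines candidate points supplied by the stochastic global scan.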
Simulating the formation of keratin filament networks by a piecewise-deterministic Markov process.
Beil, Michael; Lück, Sebastian; Fleischer, Frank; Portet, Stéphanie; Arendt, Wolfgang; Schmidt, Volker
2009-02-21
Keratin intermediate filament networks are part of the cytoskeleton in epithelial cells. They were found to regulate viscoelastic properties and motility of cancer cells. Due to unique biochemical properties of keratin polymers, the knowledge of the mechanisms controlling keratin network formation is incomplete. A combination of deterministic and stochastic modeling techniques can be a valuable source of information since they can describe known mechanisms of network evolution while reflecting the uncertainty with respect to a variety of molecular events. We applied the concept of piecewise-deterministic Markov processes to the modeling of keratin network formation with high spatiotemporal resolution. The deterministic component describes the diffusion-driven evolution of a pool of soluble keratin filament precursors fueling various network formation processes. Instants of network formation events are determined by a stochastic point process on the time axis. A probability distribution controlled by model parameters exercises control over the frequency of different mechanisms of network formation to be triggered. Locations of the network formation events are assigned dependent on the spatial distribution of the soluble pool of filament precursors. Based on this modeling approach, simulation studies revealed that the architecture of keratin networks mostly depends on the balance between filament elongation and branching processes. The spatial distribution of network mesh size, which strongly influences the mechanical characteristics of filament networks, is modulated by lateral annealing processes. This mechanism which is a specific feature of intermediate filament networks appears to be a major and fast regulator of cell mechanics.
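A schematic piecewise-deterministic Markov process in the same spirit — not the paper's calibrated keratin model — alternates exact deterministic flow of a soluble precursor pool with stochastically timed assembly events that consume it. All parameters and the half-pool consumption rule are invented:

```python
import numpy as np

rng = np.random.default_rng(6)
# schematic PDMP: the soluble precursor pool follows dp/dt = s - lam*p
# deterministically; assembly events arrive as a Poisson process and each
# consumes half the pool (all parameters invented)
s, lam, rate = 1.0, 0.1, 0.5
t, t_end, pool, events = 0.0, 200.0, 0.0, 0
while t < t_end:
    tau = rng.exponential(1.0/rate)          # waiting time to the next event
    # exact deterministic flow of the pool over the inter-event interval
    pool = s/lam + (pool - s/lam)*np.exp(-lam*tau)
    pool -= 0.5*pool                         # the event consumes precursor mass
    events += 1
    t += tau
```

In the full model the event type (nucleation, elongation, branching, annealing) and location would be drawn from parameter-controlled distributions; here only the deterministic/stochastic alternation is kept.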
Urselmann, Maren; Emmerich, Michael T. M.; Till, Jochen; Sand, Guido; Engell, Sebastian
2007-07-01
Engineering optimization often deals with large, mixed-integer search spaces with a rigid structure due to the presence of a large number of constraints. Metaheuristics, such as evolutionary algorithms (EAs), are frequently suggested as solution algorithms in such cases. In order to exploit the full potential of these algorithms, it is important to choose an adequate representation of the search space and to integrate expert knowledge into the stochastic search operators, without adding unnecessary bias to the search. Moreover, hybridisation with mathematical programming techniques such as mixed-integer programming (MIP) based on a problem decomposition can be considered for improving algorithmic performance. In order to design problem-specific EAs it is desirable to have a set of design guidelines that specify properties of search operators and representations. Recently, a set of guidelines has been proposed that gives rise to so-called Metric-based EAs (MBEAs). Extended by the minimal-moves mutation, they allow for a generalization of EAs with self-adaptive mutation strength in discrete search spaces. In this article, a problem-specific EA for a process engineering task is designed, following the MBEA guidelines and minimal-moves mutation. Against the background of the application, the usefulness of the design framework is discussed, and further extensions and corrections proposed. As a case-study, a two-stage stochastic programming problem in chemical batch process scheduling is considered. The algorithm design problem can be viewed as the choice of a hierarchical decision structure, where on different layers of the decision process symmetries and similarities can be exploited for the design of minimal moves. After a discussion of the design approach and its instantiation for the case-study, the resulting problem-specific EA/MIP is compared to a straightforward application of a canonical EA/MIP and to a monolithic mathematical programming algorithm. In view of the
Wang, Fengyu
Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserve. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserve by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier of improving existing deterministic reserve requirements is its potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch
International Nuclear Information System (INIS)
Colombino, A.; Mosiello, R.; Norelli, F.; Jorio, V.M.; Pacilio, N.
1975-01-01
Nuclear system kinetics is formulated according to a stochastic approach. The detailed probability balance equations are written for the probability of finding the mixed population of neutrons and detected neutrons, i.e. detectrons, at a given level at a given instant of time. Equations are integrated in search of a probability profile: a series of cases is analyzed through a progressive criterion. It tends to take into account an increasing number of physical processes within the chosen model. The most important contribution is that solutions interpret analytically experimental conditions of equilibrium (noise analysis) and non-equilibrium (pulsed neutron measurements, source drop technique, start-up procedures)
Directory of Open Access Journals (Sweden)
Romanu Ekaterini
2006-01-01
Full Text Available This article shows the similarities between Claude Debussy's and Iannis Xenakis' philosophy of music and work, in particular the former's Jeux and the latter's Metastasis and the stochastic works succeeding it, which seem to proceed in parallel (with no personal contact) with what is perceived as the evolution of 20th century Western music. Those two composers observed the dominant (German) tradition as outsiders, and negated some of its elements considered constant or natural by "traditional" innovators (i.e. serialists): the linearity of musical texture, its form and rhythm.
Design of deterministic interleaver for turbo codes
International Nuclear Information System (INIS)
Arif, M.A.; Sheikh, N.M.; Sheikh, A.U.H.
2008-01-01
The choice of a suitable interleaver for turbo codes can improve the performance considerably. For long block lengths, random interleavers perform well, but for some applications it is desirable to keep the block length shorter to avoid latency. For such applications deterministic interleavers perform better. The performance and design of a deterministic interleaver for short-frame turbo codes are considered in this paper. The main characteristic of this class of deterministic interleavers is that their algebraic design selects the best permutation generator such that the points in smaller subsets of the interleaved output are uniformly spread over the entire range of the information data frame. It is observed that an interleaver designed in this manner improves the minimum distance or reduces the multiplicity of the first few spectral lines of the minimum distance spectrum. Finally, we introduce a circular shift in the permutation function to reduce the correlation between the parity bits corresponding to the original and interleaved data frames, to improve the decoding capability of the MAP (Maximum A Posteriori) probability decoder. Our solution to design a deterministic interleaver outperforms the semi-random interleavers and the deterministic interleavers reported in the literature. (author)
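The algebraic flavor of such designs can be illustrated with the simplest deterministic interleaver, a linear congruential permutation; N, k, and c below are illustrative choices, not a design from the paper:

```python
from math import gcd

# linear congruential interleaver pi(i) = (k*i + c) mod N: a deterministic
# permutation that spreads nearby input positions far apart in the output
# (N, k, c are illustrative)
N, k, c = 64, 27, 13
assert gcd(k, N) == 1                  # coprimality makes pi a permutation
pi = [(k*i + c) % N for i in range(N)]
is_perm = sorted(pi) == list(range(N))
# adjacent input bits land at least `spread` positions apart after interleaving
spread = min(abs(pi[i+1] - pi[i]) for i in range(N - 1))
```

Since consecutive indices map a fixed stride k apart (mod N), adjacent input bits are guaranteed a large separation — the uniform-spreading property the abstract describes, before any circular-shift refinement.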
Achieving control and synchronization merely through a stochastically adaptive feedback coupling
Lin, Wei; Chen, Xin; Zhou, Shijie
2017-07-01
Techniques of deterministically adaptive feedback couplings have been successfully and extensively applied to realize control and/or synchronization in chaotic dynamical systems and even in complex dynamical networks. In this article, a novel technique of stochastically adaptive feedback coupling is proposed that not only realizes control in chaotic dynamical systems but also achieves synchronization in unidirectionally coupled systems. Compared with those deterministically adaptive couplings, the proposed stochastic technique shows some interesting advantages from a physical viewpoint of time and energy consumption. More significantly, the usefulness of the proposed stochastic technique is analytically validated by the theory of stochastic processes. It is anticipated that the proposed stochastic technique will be widely used in achieving system control and network synchronization.
Lv, Qiming; Schneider, Manuel K; Pitchford, Jonathan W
2008-08-01
We study individual plant growth and size hierarchy formation in an experimental population of Arabidopsis thaliana, within an integrated analysis that explicitly accounts for size-dependent growth, size- and space-dependent competition, and environmental stochasticity. It is shown that a Gompertz-type stochastic differential equation (SDE) model, involving asymmetric competition kernels and a stochastic term which decreases with the logarithm of plant weight, efficiently describes individual plant growth, competition, and variability in the studied population. The model is evaluated within a Bayesian framework and compared to its deterministic counterpart, and to several simplified stochastic models, using distributional validation. We show that stochasticity is an important determinant of size hierarchy and that SDE models outperform the deterministic model if and only if structural components of competition (asymmetry; size- and space-dependence) are accounted for. Implications of these results are discussed in the context of plant ecology and in more general modelling situations.
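A Gompertz-type SDE of the kind described reduces to an Ornstein–Uhlenbeck process in log-weight. The minimal Euler–Maruyama sketch below uses invented parameters and omits the size- and space-dependent competition kernels that the paper shows are essential for the size hierarchy:

```python
import numpy as np

rng = np.random.default_rng(7)
# Gompertz-type growth in log-weight: dy = a*(ln K - y) dt + sigma dW with
# y = ln(weight); competition terms of the full model are omitted
a, K, sigma = 0.5, 100.0, 0.3
dt, steps, npaths = 0.01, 2000, 500
y = np.full(npaths, np.log(1.0))           # every plant starts at unit weight
for _ in range(steps):
    y += a*(np.log(K) - y)*dt + sigma*np.sqrt(dt)*rng.standard_normal(npaths)
weights = np.exp(y)                        # final plant weights
```

The ensemble mean of y relaxes to ln K, while the noise produces a log-normal spread of final weights; it is the asymmetric competition kernel, absent here, that turns that spread into a genuine hierarchy.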
Aspects of stochastic models for short-term hydropower scheduling and bidding
Energy Technology Data Exchange (ETDEWEB)
Belsnes, Michael Martin [Sintef Energy, Trondheim (Norway); Follestad, Turid [Sintef Energy, Trondheim (Norway); Wolfgang, Ove [Sintef Energy, Trondheim (Norway); Fosso, Olav B. [Dep. of electric power engineering NTNU, Trondheim (Norway)
2012-07-01
This report discusses challenges met when turning from deterministic to stochastic decision support models for short-term hydropower scheduling and bidding. The report describes characteristics of the short-term scheduling and bidding problem, different market and bidding strategies, and how a stochastic optimization model can be formulated. A review of approaches for stochastic short-term modelling and stochastic modelling for the input variables inflow and market prices is given. The report discusses methods for approximating the predictive distribution of uncertain variables by scenario trees. Benefits of using a stochastic over a deterministic model are illustrated by a case study, where increased profit is obtained to a varying degree depending on the reservoir filling and price structure. Finally, an approach for assessing the effect of using a size-restricted scenario tree to approximate the predictive distribution for stochastic input variables is described. The report is a summary of the findings of Work package 1 of the research project "Optimal short-term scheduling of wind and hydro resources". The project aims at developing a prototype for an operational stochastic short-term scheduling model. Based on the investigations summarized in the report, it is concluded that using a deterministic equivalent formulation of the stochastic optimization problem is convenient and sufficient for obtaining a working prototype. (author)
Electricity price modeling with stochastic time change
International Nuclear Information System (INIS)
Borovkova, Svetlana; Schmeck, Maren Diane
2017-01-01
In this paper, we develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. This technique allows us to incorporate the characteristic features of electricity prices (such as seasonal volatility, time varying mean reversion and seasonally occurring price spikes) into the model in an elegant and economically justifiable way. The stochastic time change introduces stochastic as well as deterministic (e.g., seasonal) features in the price process' volatility and in the jump component. We specify the base process as a mean reverting jump diffusion and the time change as an absolutely continuous stochastic process with seasonal component. The activity rate of the stochastic time change can be related to the factors that influence supply and demand. Here we use the temperature as a proxy for the demand and hence, as the driving factor of the stochastic time change, and show that this choice leads to realistic price paths. We derive properties of the resulting price process and develop the model calibration procedure. We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths by Monte Carlo simulations. We show that the simulated price process matches the distributional characteristics of the observed electricity prices in periods of both high and low demand. - Highlights: • We develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. • We incorporate the characteristic features of electricity prices, such as seasonal volatility and spikes into the model. • We use the temperature as a proxy for the demand and hence, as the driving factor of the stochastic time change • We derive properties of the resulting price process and develop the model calibration procedure. • We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths.
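A toy version of the construction — a mean-reverting base process whose diffusion and jump intensity are modulated by a seasonal activity rate — shows how the stochastic time change makes spikes cluster in the high-demand season. Every number below is invented; this is not the calibrated EEX model:

```python
import numpy as np

rng = np.random.default_rng(8)
# toy time-changed jump diffusion: seasonal activity modulates both the
# diffusion scale and the spike intensity (all numbers invented)
dt = 1/24.0                                   # hourly grid, in days
t = np.arange(int(365/dt))*dt
activity = 1.0 + 0.9*np.cos(2*np.pi*t/365.0)  # seasonal activity rate
lam0, kappa, mu, sig, jump_size = 0.2, 0.05, 40.0, 1.0, 20.0
x = np.empty(t.size)
x[0] = mu
jumps = np.zeros(t.size, dtype=bool)
for i in range(t.size - 1):
    jumps[i] = rng.random() < lam0*activity[i]*dt
    x[i+1] = (x[i] + kappa*(mu - x[i])*dt
              + sig*np.sqrt(activity[i]*dt)*rng.standard_normal()
              + (jump_size if jumps[i] else 0.0))
winter = (t < 91.0) | (t > 274.0)             # high-activity part of the year
n_high, n_low = int(jumps[winter].sum()), int(jumps[~winter].sum())
```

In the paper the activity rate is driven by temperature as a demand proxy; here a plain cosine stands in for that driver.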
Topology optimization under stochastic stiffness
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations
Proving Non-Deterministic Computations in Agda
Directory of Open Access Journals (Sweden)
Sergio Antoy
2017-01-01
Full Text Available We investigate proving properties of Curry programs using Agda. First, we address the functional correctness of Curry functions that, apart from some syntactic and semantic differences, lie in the intersection of the two languages. Second, we use Agda to model non-deterministic functions with two distinct, competing approaches to encoding the non-determinism. The first approach eliminates non-determinism by considering the set of all non-deterministic values produced by an application. The second approach encodes every non-deterministic choice that the application could perform. We consider our initial experiment a success. Although proving properties of programs is a notoriously difficult task, the functional logic paradigm does not seem to add any significant layer of difficulty or complexity to the task.
Deterministic dense coding with partially entangled states
Mozes, Shay; Oppenheim, Jonathan; Reznik, Benni
2005-01-01
The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d > 2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d²] with the possible exception of d² - 1. We also find that states with less entanglement can have a greater deterministic communication capacity than other more entangled states.
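For orientation on the "less than one ebit" claim, the standard definitions (not taken from the paper itself) are: writing the shared state in its Schmidt form, the entanglement measured in ebits is the entropy of the Schmidt coefficients, which lies strictly below log₂ d for a partially entangled state:

```latex
% Schmidt decomposition of a d-level bipartite pure state and its
% entanglement entropy in ebits (standard definitions, for orientation).
\begin{align}
  |\psi\rangle &= \sum_{i=0}^{d-1} \sqrt{\lambda_i}\, |i\rangle_A |i\rangle_B,
  \qquad \lambda_i \ge 0,\ \sum_{i=0}^{d-1} \lambda_i = 1,\\
  E(\psi) &= -\sum_{i=0}^{d-1} \lambda_i \log_2 \lambda_i
  \qquad (0 \le E(\psi) \le \log_2 d).
\end{align}
```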
Parallel Stochastic discrete event simulation of calcium dynamics in neuron.
Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W
2017-09-26
The intracellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g. spines) are so small, and calcium concentrations so low, that a single extra molecule diffusing in by chance can make a nontrivial (percentage-wise) difference in concentration. These rare events can affect the dynamics discretely, in such a way that they cannot be captured by a deterministic simulation. Stochastic models of such a system provide a more detailed understanding than existing deterministic models because they capture its behavior at the molecular level. Our research focuses on the development of a high-performance parallel discrete event simulation environment, Neuron Time Warp (NTW), which is intended for use in the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator widely used within the neuroscience community. We simulate two models, a calcium buffer model and a calcium wave model. The calcium buffer model is employed to verify the correctness and performance of NTW by comparing it to a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.
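As a concrete illustration of the exact stochastic approach that the abstract contrasts with deterministic simulation, here is a minimal Gillespie direct-method sketch for a single reversible calcium-buffer binding reaction. The reaction, rate constants and copy numbers are hypothetical toy values, not part of NTW or NEURON:

```python
import math
import random

def gillespie_binding(ca0, buf0, kon, koff, t_end, seed=1):
    """Direct-method SSA for the toy reversible reaction Ca + B <-> CaB."""
    rng = random.Random(seed)
    ca, b, cab = ca0, buf0, 0
    t = 0.0
    while t < t_end:
        a1 = kon * ca * b    # binding propensity
        a2 = koff * cab      # unbinding propensity
        a0 = a1 + a2
        if a0 == 0.0:
            break            # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        if rng.random() * a0 < a1:
            ca, b, cab = ca - 1, b - 1, cab + 1   # Ca + B -> CaB
        else:
            ca, b, cab = ca + 1, b + 1, cab - 1   # CaB -> Ca + B
    return ca, b, cab

final = gillespie_binding(ca0=50, buf0=100, kon=0.01, koff=0.1, t_end=10.0)
print(final)
```

Because the algorithm fires one reaction at a time, the conserved totals Ca + CaB and B + CaB are preserved exactly; this molecule-level bookkeeping is precisely what a deterministic concentration model cannot provide for low-copy-number compartments.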
Stochastic control with rough paths
International Nuclear Information System (INIS)
Diehl, Joscha; Friz, Peter K.; Gassiat, Paul
2017-01-01
We study a class of controlled differential equations driven by rough paths (or rough path realizations of Brownian motion) in the sense of Lyons. It is shown that the value function satisfies a HJB type equation; we also establish a form of the Pontryagin maximum principle. Deterministic problems of this type arise in the duality theory for controlled diffusion processes and typically involve anticipating stochastic analysis. We make the link to old work of Davis and Burstein (Stoch Stoch Rep 40:203–256, 1992) and then prove a continuous-time generalization of Roger’s duality formula [SIAM J Control Optim 46:1116–1132, 2007]. The generic case of controlled volatility is seen to give trivial duality bounds, and explains the focus in Burstein–Davis’ (and this) work on controlled drift. Our study of controlled rough differential equations also relates to work of Mazliak and Nourdin (Stoch Dyn 08:23, 2008).
Stochastic control with rough paths
Energy Technology Data Exchange (ETDEWEB)
Diehl, Joscha [University of California San Diego (United States); Friz, Peter K., E-mail: friz@math.tu-berlin.de [TU & WIAS Berlin (Germany); Gassiat, Paul [CEREMADE, Université Paris-Dauphine, PSL Research University (France)
2017-04-15
We study a class of controlled differential equations driven by rough paths (or rough path realizations of Brownian motion) in the sense of Lyons. It is shown that the value function satisfies a HJB type equation; we also establish a form of the Pontryagin maximum principle. Deterministic problems of this type arise in the duality theory for controlled diffusion processes and typically involve anticipating stochastic analysis. We make the link to old work of Davis and Burstein (Stoch Stoch Rep 40:203–256, 1992) and then prove a continuous-time generalization of Roger’s duality formula [SIAM J Control Optim 46:1116–1132, 2007]. The generic case of controlled volatility is seen to give trivial duality bounds, and explains the focus in Burstein–Davis’ (and this) work on controlled drift. Our study of controlled rough differential equations also relates to work of Mazliak and Nourdin (Stoch Dyn 08:23, 2008).
Formal Abstractions for Automated Verification and Synthesis of Stochastic Systems
Esmaeil Zadeh Soudjani, S.
2014-01-01
Stochastic hybrid systems involve the coupling of discrete, continuous, and probabilistic phenomena, in which the composition of continuous and discrete variables captures the behavior of physical systems interacting with digital, computational devices. Because of their versatility and generality,
DETERMINISTIC METHODS USED IN FINANCIAL ANALYSIS
Directory of Open Access Journals (Sweden)
MICULEAC Melania Elena
2014-06-01
Full Text Available Deterministic methods are quantitative methods whose goal is to assess, through numerical quantification, the mechanisms by which factorial and causal relations of influence arise and by which their effects propagate, in cases where the phenomenon can be expressed through a direct functional cause-effect relation. Functional, deterministic relations are causal relations in which each value of the characteristic corresponds to a well-defined value of the resulting phenomenon. They can express the correlation between the phenomenon and its influence factors directly, in the form of a function-type mathematical formula.
Introducing Synchronisation in Deterministic Network Models
DEFF Research Database (Denmark)
Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.
2006-01-01
The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented leading...... to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models...
Optimal Deterministic Investment Strategies for Insurers
Directory of Open Access Journals (Sweden)
Ulrich Rieder
2013-11-01
Full Text Available We consider an insurance company whose risk reserve is given by a Brownian motion with drift and which is able to invest the money into a Black–Scholes financial market. As optimization criteria, we treat mean-variance problems, problems with other risk measures, exponential utility and the probability of ruin. Following recent research, we assume that investment strategies have to be deterministic. This leads to deterministic control problems, which are quite easy to solve. Moreover, it turns out that there are some interesting links between the optimal investment strategies of these problems. Finally, we also show that this approach works in the Lévy process framework.
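As a concrete point of contact with the ruin-probability criterion mentioned in the abstract, the classical closed form for a Brownian risk reserve R_t = u + mu*t + sigma*W_t (a textbook result, not the paper's deterministic-strategy analysis) can be sketched as:

```python
import math

def ruin_probability(u, mu, sigma):
    """Ultimate ruin probability for the reserve R_t = u + mu*t + sigma*W_t.

    Classical result: for positive drift mu, the probability that the
    reserve ever hits zero is exp(-2*mu*u / sigma**2); for mu <= 0,
    ruin is certain.
    """
    if mu <= 0:
        return 1.0
    return math.exp(-2.0 * mu * u / sigma ** 2)

# More initial capital, or stronger drift, means less risk of ruin.
print(ruin_probability(u=10.0, mu=0.5, sigma=2.0))  # exp(-2.5), about 0.082
```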
Model selection for integrated pest management with stochasticity.
Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel
2018-04-07
In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determined conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model. Copyright © 2017 Elsevier Ltd. All rights reserved.
Exponential power spectra, deterministic chaos and Lorentzian pulses in plasma edge dynamics
International Nuclear Information System (INIS)
Maggs, J E; Morales, G J
2012-01-01
Exponential spectra have been observed in the edges of tokamaks, stellarators, helical devices and linear machines. The observation of exponential power spectra is significant because such a spectral character has been closely associated with the phenomenon of deterministic chaos by the nonlinear dynamics community. The proximate cause of exponential power spectra in both magnetized plasma edges and nonlinear dynamics models is the occurrence of Lorentzian pulses in the time signals of fluctuations. Lorentzian pulses are produced by chaotic behavior in the separatrix regions of plasma E × B flow fields or the limit cycle regions of nonlinear models. Chaotic advection, driven by the potential fields of drift waves in plasmas, results in transport. The observation of exponential power spectra and Lorentzian pulses suggests that fluctuations and transport at the edge of magnetized plasmas arise from deterministic, rather than stochastic, dynamics. (paper)
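The link between Lorentzian pulses and exponential spectra can be checked numerically: the Fourier transform of a unit-area Lorentzian of width tau is exp(-tau*|omega|), so its power spectrum decays as exp(-2*tau*|omega|). A stdlib-only sketch (illustrative, not from the paper):

```python
import cmath
import math

def lorentzian_power(tau, omegas, t_max=200.0, n=20000):
    """Power spectrum |F(omega)|^2 of the unit-area Lorentzian pulse
    f(t) = (tau/pi) / (t^2 + tau^2), computed by direct numerical
    Fourier integration.  Analytically F(omega) = exp(-tau*|omega|),
    so the power spectrum decays exponentially as exp(-2*tau*|omega|)."""
    dt = 2.0 * t_max / n
    powers = []
    for w in omegas:
        acc = 0.0 + 0.0j
        for k in range(n):
            t = -t_max + (k + 0.5) * dt        # midpoint rule
            f = (tau / math.pi) / (t * t + tau * tau)
            acc += f * cmath.exp(-1j * w * t) * dt
        powers.append(abs(acc) ** 2)
    return powers

power = lorentzian_power(tau=1.0, omegas=[0.0, 1.0, 2.0])
print(power)  # close to [1, e^-2, e^-4]: a straight line on a log scale
```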
Stochastic resonance in models of neuronal ensembles
International Nuclear Information System (INIS)
Chialvo, D.R.; Longtin, A.; Mueller-Gerkin, J.
1997-01-01
Two recently suggested mechanisms for the neuronal encoding of sensory information involving the effect of stochastic resonance with aperiodic time-varying inputs are considered. It is shown, using theoretical arguments and numerical simulations, that the nonmonotonic behavior with increasing noise of the correlation measures used for the so-called aperiodic stochastic resonance (ASR) scenario does not rely on the cooperative effect typical of stochastic resonance in bistable and excitable systems. Rather, ASR with slowly varying signals is more properly interpreted as linearization by noise. Consequently, the broadening of the "resonance curve" in the multineuron stochastic resonance without tuning scenario can also be explained by this linearization. Computation of the input-output correlation as a function of both signal frequency and noise for the model system further reveals conditions where noise-induced firing with aperiodic inputs will benefit from stochastic resonance rather than linearization by noise. Thus, our study clarifies the tuning requirements for the optimal transduction of subthreshold aperiodic signals. It also shows that a single deterministic neuron can perform as well as a network when biased into a suprathreshold regime. Finally, we show that the inclusion of a refractory period in the spike-detection scheme produces a better correlation between instantaneous firing rate and input signal. © 1997 The American Physical Society
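The nonmonotonic noise dependence described above can be reproduced with a toy threshold unit: a subthreshold aperiodic signal produces no output without noise, correlates with the output at moderate noise, and decorrelates again when noise dominates. All parameters below are illustrative, not those of the model system in the paper:

```python
import math
import random

def spike_signal_correlation(noise_sd, theta=1.0, n=5000, seed=3):
    """Pearson correlation between a subthreshold aperiodic signal and the
    spike train of a simple threshold unit driven by signal + Gaussian noise."""
    rng = random.Random(seed)
    # Slow aperiodic signal confined to [0.2, 0.8], always below theta = 1.0.
    s = [0.5 + 0.3 * math.sin(0.01 * i + 2.0 * math.sin(0.003 * i)) for i in range(n)]
    y = [1.0 if s[i] + rng.gauss(0.0, noise_sd) > theta else 0.0 for i in range(n)]
    ms, my = sum(s) / n, sum(y) / n
    cov = sum((s[i] - ms) * (y[i] - my) for i in range(n)) / n
    vs = sum((x - ms) ** 2 for x in s) / n
    vy = sum((x - my) ** 2 for x in y) / n
    if vy == 0.0:
        return 0.0  # the unit never fired: define the correlation as zero
    return cov / math.sqrt(vs * vy)

c_zero = spike_signal_correlation(0.0)  # no noise: no spikes at all
c_mid = spike_signal_correlation(0.3)   # moderate noise: signal is transduced
c_big = spike_signal_correlation(3.0)   # heavy noise: output barely tracks signal
print(c_zero, c_mid, c_big)
```

The correlation rises from zero, peaks at intermediate noise, and falls again, which is the signature behavior the abstract discusses.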
Directory of Open Access Journals (Sweden)
Shaolin Ji
2013-01-01
Full Text Available This paper is devoted to a stochastic differential game (SDG) of a decoupled functional forward-backward stochastic differential equation (FBSDE). The associated upper and lower value functions of the SDG are defined through the solutions of controlled functional backward stochastic differential equations (BSDEs). Applying the Girsanov transformation method introduced by Buckdahn and Li (2008), the upper and the lower value functions are shown to be deterministic. We also generalize the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations to path-dependent ones. By establishing the dynamic programming principle (DPP), we derive that the upper and the lower value functions are viscosity solutions of the corresponding upper and lower path-dependent HJBI equations, respectively.
Distinguishing between stochasticity and determinism: Examples from cell cycle duration variability.
Pearl Mizrahi, Sivan; Sandler, Oded; Lande-Diner, Laura; Balaban, Nathalie Q; Simon, Itamar
2016-01-01
We describe a recent approach for distinguishing between stochastic and deterministic sources of variability, focusing on the mammalian cell cycle. Variability between cells is often attributed to stochastic noise, although it may be generated by deterministic components. Interestingly, lineage information can be used to distinguish between variability and determinism. Analysis of correlations within a lineage of the mammalian cell cycle duration revealed its deterministic nature. Here, we discuss the sources of such variability and the possibility that the underlying deterministic process is due to the circadian clock. Finally, we discuss the "kicked cell cycle" model and its implication on the study of the cell cycle in healthy and cancerous tissues. © 2015 WILEY Periodicals, Inc.
Hopf Bifurcation Analysis for a Stochastic Discrete-Time Hyperchaotic System
Directory of Open Access Journals (Sweden)
Jie Ran
2015-01-01
Full Text Available The dynamics of a discrete-time hyperchaotic system and the amplitude control of Hopf bifurcation for a stochastic discrete-time hyperchaotic system are investigated in this paper. Numerical simulations are presented to exhibit the complex dynamical behaviors of the discrete-time hyperchaotic system. Furthermore, the stochastic discrete-time hyperchaotic system with random parameters is transformed into its equivalent deterministic system using the orthogonal polynomial theory of discrete random functions. The dynamical features of the discrete-time hyperchaotic system with random disturbances are then obtained through this equivalent deterministic system. Using the Hopf bifurcation conditions of the deterministic discrete-time system, the specific conditions for the existence of Hopf bifurcation in the equivalent deterministic system are derived, and amplitude control with random intensity is discussed in detail. Finally, the feasibility of the control method is demonstrated by numerical simulations.
Sakai, Kenshi; Upadhyaya, Shrinivasa K; Andrade-Sanchez, Pedro; Sviridova, Nina V
2017-03-01
Real-world processes are often combinations of deterministic and stochastic processes. Soil failure observed during farm tillage is one example of this phenomenon. In this paper, we investigated the nonlinear features of soil failure patterns in a farm tillage process. We demonstrate emerging determinism in soil failure patterns arising from stochastic processes under specific soil conditions. We normalized the deterministic nonlinear prediction by taking autocorrelation into account and propose this as a robust way of extracting a nonlinear dynamical system from noise-contaminated motion. Soil is a typical granular material, so the results obtained here are expected to apply to granular materials in general. From the global scale to the nano scale, granular materials feature in seismology, geotechnology, soil mechanics, and particle technology, and the results and discussion presented here are applicable across these wide research areas. The proposed method and our findings are useful for applying nonlinear dynamics to investigate the complex motions generated by granular materials.
Lanchier, Nicolas
2017-01-01
Three coherent parts form the material covered in this text, portions of which have not been widely covered in traditional textbooks. In this coverage the reader is quickly introduced to several different topics enriched with 175 exercises which focus on real-world problems. Exercises range from the classics of probability theory to more exotic research-oriented problems based on numerical simulations. Intended for graduate students in mathematics and applied sciences, the text provides the tools and training needed to write and use programs for research purposes. The first part of the text begins with a brief review of measure theory and revisits the main concepts of probability theory, from random variables to the standard limit theorems. The second part covers traditional material on stochastic processes, including martingales, discrete-time Markov chains, Poisson processes, and continuous-time Markov chains. The theory developed is illustrated by a variety of examples surrounding applications such as the ...
STOCHASTIC OPTICS: A SCATTERING MITIGATION FRAMEWORK FOR RADIO INTERFEROMETRIC IMAGING
International Nuclear Information System (INIS)
Johnson, Michael D.
2016-01-01
Just as turbulence in the Earth’s atmosphere can severely limit the angular resolution of optical telescopes, turbulence in the ionized interstellar medium fundamentally limits the resolution of radio telescopes. We present a scattering mitigation framework for radio imaging with very long baseline interferometry (VLBI) that partially overcomes this limitation. Our framework, “stochastic optics,” derives from a simplification of strong interstellar scattering to separate small-scale (“diffractive”) effects from large-scale (“refractive”) effects, thereby separating deterministic and random contributions to the scattering. Stochastic optics extends traditional synthesis imaging by simultaneously reconstructing an unscattered image and its refractive perturbations. Its advantages over direct imaging come from utilizing the many deterministic properties of the scattering—such as the time-averaged “blurring,” polarization independence, and the deterministic evolution in frequency and time—while still accounting for the stochastic image distortions on large scales. These distortions are identified in the image reconstructions through regularization by their time-averaged power spectrum. Using synthetic data, we show that this framework effectively removes the blurring from diffractive scattering while reducing the spurious image features from refractive scattering. Stochastic optics can provide significant improvements over existing scattering mitigation strategies and is especially promising for imaging the Galactic Center supermassive black hole, Sagittarius A*, with the Global mm-VLBI Array and with the Event Horizon Telescope.
STOCHASTIC OPTICS: A SCATTERING MITIGATION FRAMEWORK FOR RADIO INTERFEROMETRIC IMAGING
Energy Technology Data Exchange (ETDEWEB)
Johnson, Michael D., E-mail: mjohnson@cfa.harvard.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)
2016-12-10
Just as turbulence in the Earth’s atmosphere can severely limit the angular resolution of optical telescopes, turbulence in the ionized interstellar medium fundamentally limits the resolution of radio telescopes. We present a scattering mitigation framework for radio imaging with very long baseline interferometry (VLBI) that partially overcomes this limitation. Our framework, “stochastic optics,” derives from a simplification of strong interstellar scattering to separate small-scale (“diffractive”) effects from large-scale (“refractive”) effects, thereby separating deterministic and random contributions to the scattering. Stochastic optics extends traditional synthesis imaging by simultaneously reconstructing an unscattered image and its refractive perturbations. Its advantages over direct imaging come from utilizing the many deterministic properties of the scattering—such as the time-averaged “blurring,” polarization independence, and the deterministic evolution in frequency and time—while still accounting for the stochastic image distortions on large scales. These distortions are identified in the image reconstructions through regularization by their time-averaged power spectrum. Using synthetic data, we show that this framework effectively removes the blurring from diffractive scattering while reducing the spurious image features from refractive scattering. Stochastic optics can provide significant improvements over existing scattering mitigation strategies and is especially promising for imaging the Galactic Center supermassive black hole, Sagittarius A*, with the Global mm-VLBI Array and with the Event Horizon Telescope.
Improved operating strategies for uranium extraction: a stochastic simulation
International Nuclear Information System (INIS)
Broekman, B.R.
1986-01-01
Deterministic and stochastic simulations of a Western Transvaal uranium process are used in this research report to determine more profitable uranium plant operating strategies and to gauge the potential financial benefits of automatic process control. The deterministic simulation model was formulated using empirical and phenomenological process models. The model indicated that profitability increases significantly as the uranium leaching strategy becomes harsher. The stochastic simulation models use process variable distributions corresponding to manually and automatically controlled conditions to investigate the economic gains that may be obtained if a change is made from manual to automatic control of two important process variables. These lognormally distributed variables are the Pachuca 1 sulphuric acid concentration and the ferric to ferrous ratio. The stochastic simulations show that automatic process control is justifiable in certain cases. Where the leaching strategy is relatively harsh, such as that in operation during January 1986, it is not possible to justify an automatic control system. Automatic control is, however, justifiable if a relatively mild leaching strategy is adopted. The stochastic and deterministic simulations represent two different approaches to uranium process modelling. This study has indicated the necessity for each approach to be applied in the correct context. It is contended that incorrect conclusions may have been drawn by other investigators in South Africa who failed to consider the two approaches separately.
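The stochastic-simulation logic can be sketched as a small Monte Carlo experiment: draw the two lognormal process variables with a wide spread (manual control) or a narrow spread (automatic control) and compare the variance of a recovery response. The response surface and all numbers below are hypothetical, not the plant model from the report:

```python
import math
import random

def simulate_recovery(sd_acid, sd_ratio, n=20000, seed=7):
    """Monte Carlo of a toy leaching recovery driven by two lognormally
    distributed process variables.  The response surface and all numbers
    are hypothetical, chosen only to illustrate the variance comparison."""
    rng = random.Random(seed)
    total, total_sq = 0.0, 0.0
    for _ in range(n):
        acid = math.exp(rng.gauss(math.log(10.0), sd_acid))    # acid conc., g/l
        ratio = math.exp(rng.gauss(math.log(30.0), sd_ratio))  # ferric:ferrous
        r = 0.95 * (acid / (acid + 5.0)) * (ratio / (ratio + 10.0))  # saturating
        total += r
        total_sq += r * r
    mean = total / n
    return mean, total_sq / n - mean * mean

manual = simulate_recovery(sd_acid=0.30, sd_ratio=0.40)  # wide spread: manual control
auto = simulate_recovery(sd_acid=0.10, sd_ratio=0.15)    # tight spread: automatic
print(manual, auto)
```

Tightening the input distributions reduces the output variance, which is the kind of comparison used to weigh the benefit of an automatic control system.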
Modeling and Properties of Nonlinear Stochastic Dynamical System of Continuous Culture
Wang, Lei; Feng, Enmin; Ye, Jianxiong; Xiu, Zhilong
The stochastic counterpart to the deterministic description of continuous fermentation by ordinary differential equations is investigated for the process of glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae. We briefly discuss the continuous fermentation process driven by three-dimensional Brownian motion with Lipschitz coefficients, which is suitable for modelling the actual fermentation. Subsequently, we study the existence and uniqueness of solutions for the stochastic system, as well as the boundedness of the second-order moment and the Markov property of the solution. Finally, stochastic simulation is carried out using the Euler-Maruyama method.
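A minimal Euler-Maruyama sketch (for a scalar logistic SDE with illustrative coefficients, not the three-dimensional fermentation model of the paper) shows the scheme's structure: a deterministic drift step plus a Brownian increment drawn with standard deviation sqrt(dt):

```python
import math
import random

def euler_maruyama(x0, drift, diffusion, t_end, n, seed=11):
    """Euler-Maruyama scheme for the scalar SDE dX = drift(X) dt + diffusion(X) dW."""
    rng = random.Random(seed)
    dt = t_end / n
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))      # Brownian increment, sd = sqrt(dt)
        x = x + drift(x) * dt + diffusion(x) * dw
    return x

# Illustrative logistic growth (r = 0.5, carrying capacity K = 10)
# with multiplicative noise; all coefficients are hypothetical.
drift = lambda x: 0.5 * x * (1.0 - x / 10.0)
x_noisy = euler_maruyama(1.0, drift, lambda x: 0.1 * x, t_end=20.0, n=4000)
x_det = euler_maruyama(1.0, drift, lambda x: 0.0, t_end=20.0, n=4000)
print(x_noisy, x_det)  # the deterministic run approaches K = 10
```

Setting the diffusion to zero recovers the deterministic Euler scheme, which is a convenient consistency check between the stochastic and deterministic descriptions.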
Directory of Open Access Journals (Sweden)
Mourad Kerboua
2014-12-01
Full Text Available We introduce a new notion called a fractional stochastic nonlocal condition, and then we study the approximate controllability of a class of fractional stochastic nonlinear differential equations of Sobolev type in Hilbert spaces. We use Hölder's inequality, a fixed point technique, fractional calculus, stochastic analysis and methods adopted directly from deterministic control problems to obtain the main results. A new set of sufficient conditions is formulated and proved for the fractional stochastic control system to be approximately controllable. An example is given to illustrate the abstract results.
The Asymptotic Behaviour of a Stochastic 3D LANS-α Model
International Nuclear Information System (INIS)
Caraballo, Tomas; Marquez-Duran, Antonio M.; Real, Jose
2006-01-01
The long-time behaviour of a stochastic 3D LANS-α model on a bounded domain is analysed. First, we reformulate the model as an abstract problem. Next, we establish sufficient conditions ensuring the existence of stationary (steady state) solutions of this abstract nonlinear stochastic evolution equation, and study the stability properties of the model. Finally, we analyse the effects produced by stochastic perturbations in the deterministic version of the system (persistence of exponential stability as well as possible stabilisation effects produced by the noise). The general results are applied to our stochastic LANS-α system throughout the paper
Noise-sustained fluctuations in stochastic dynamics with a delay.
D'Odorico, Paolo; Laio, Francesco; Ridolfi, Luca
2012-04-01
Delayed responses to external drivers are ubiquitous in environmental, social, and biological processes. Delays may induce oscillations, Hopf bifurcations, and instabilities in deterministic systems even in the absence of nonlinearities. Despite recent advances in the study of delayed stochastic differential equations, the interaction of random drivers with delays remains poorly understood. In particular, it is unclear whether noise-induced behaviors may emerge from these interactions. Here we show that noise may enhance and sustain transient periodic oscillations inherent to deterministic delayed systems. We investigate the conditions conducive to the emergence and disappearance of these dynamics in a linear system in the presence of both additive and multiplicative noise.
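A minimal sketch of such noise-sustained fluctuations: the linear delayed system dx = -a*x(t - tau) dt + sigma dW decays to zero in an oscillatory fashion without noise when a*tau < pi/2, but additive noise keeps re-exciting the transient oscillations. Parameters are illustrative:

```python
import math
import random

def delayed_ou(a, tau, sigma, t_end, dt=0.01, seed=5):
    """Euler scheme for the linear delayed SDE dx = -a*x(t - tau) dt + sigma dW.
    For a*tau < pi/2 the noise-free system shows decaying oscillations;
    additive noise keeps re-exciting them."""
    rng = random.Random(seed)
    lag = int(round(tau / dt))
    xs = [0.0] * (lag + 1)           # zero history on [-tau, 0]
    for _ in range(int(t_end / dt)):
        x_delayed = xs[-lag - 1]     # value at time t - tau
        dw = rng.gauss(0.0, math.sqrt(dt))
        xs.append(xs[-1] - a * x_delayed * dt + sigma * dw)
    return xs

path = delayed_ou(a=1.0, tau=1.0, sigma=0.2, t_end=50.0)
quiet = delayed_ou(a=1.0, tau=1.0, sigma=0.0, t_end=50.0)
mean_sq = sum(x * x for x in path) / len(path)
print(mean_sq)  # noise sustains fluctuations; the noise-free path stays at zero
```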
Dynamic analysis of a stochastic rumor propagation model
Jia, Fangju; Lv, Guangying
2018-01-01
The rapid development of the Internet, especially the emergence of the social networks, leads rumor propagation into a new media era. In this paper, we are concerned with a stochastic rumor propagation model. Sufficient conditions for extinction and persistence in the mean of the rumor are established. The threshold between persistence in the mean and extinction of the rumor is obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R0 of the deterministic system.
Lu, M.; Lall, U.
2013-12-01
In order to mitigate the impacts of climate change, proactive management strategies to operate reservoirs and dams are needed. A multi-time scale climate informed stochastic model is developed to optimize the operations for a multi-purpose single reservoir by simulating decadal, interannual, seasonal and sub-seasonal variability. We apply the model to a setting motivated by the largest multi-purpose dam in N. India, the Bhakhra reservoir on the Sutlej River, a tributary of the Indus. This leads to a focus on timing and amplitude of the flows for the monsoon and snowmelt periods. The flow simulations are constrained by multiple sources of historical data and GCM future projections, that are being developed through a NSF funded project titled 'Decadal Prediction and Stochastic Simulation of Hydroclimate Over Monsoon Asia'. The model presented is a multilevel, nonlinear programming model that aims to optimize the reservoir operating policy on a decadal horizon and the operation strategy on an updated annual basis. The model is hierarchical, in terms of having a structure that two optimization models designated for different time scales are nested as a matryoshka doll. The two optimization models have similar mathematical formulations with some modifications to meet the constraints within that time frame. The first level of the model is designated to provide optimization solution for policy makers to determine contracted annual releases to different uses with a prescribed reliability; the second level is a within-the-period (e.g., year) operation optimization scheme that allocates the contracted annual releases on a subperiod (e.g. monthly) basis, with additional benefit for extra release and penalty for failure. The model maximizes the net benefit of irrigation, hydropower generation and flood control in each of the periods. The model design thus facilitates the consistent application of weather and climate forecasts to improve operations of reservoir systems. The
A Theory of Deterministic Event Structures
Lee, I.; Rensink, Arend; Smolka, S.A.
1995-01-01
We present an ω-complete algebra of a class of deterministic event structures, which are labelled prime event structures where the labelling function satisfies a certain distinctness condition. The operators of the algebra are summation, sequential composition and join. Each of these gives rise to a
A Numerical Simulation for a Deterministic Compartmental ...
African Journals Online (AJOL)
In this work, an earlier deterministic mathematical model of HIV/AIDS is revisited and numerical solutions obtained using Euler's numerical method. Using hypothetical values for the parameters, a program was written in the VISUAL BASIC programming language to generate series for the system of difference equations from the ...
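Euler's method applied to a compartmental ODE model yields exactly such a system of difference equations. The toy S-I model below (hypothetical parameters, and Python rather than Visual Basic) illustrates the construction:

```python
def simulate_si(beta, mu, dt, steps, s0=0.99, i0=0.01):
    """Euler discretisation of a toy S-I model with vital dynamics
    (hypothetical parameters, not those of the paper):
        dS/dt = mu - beta*S*I - mu*S
        dI/dt = beta*S*I - mu*I
    Euler's method turns the ODEs into a system of difference equations."""
    s, i = s0, i0
    history = [(s, i)]
    for _ in range(steps):
        ds = mu - beta * s * i - mu * s
        di = beta * s * i - mu * i
        s, i = s + dt * ds, i + dt * di
        history.append((s, i))
    return history

hist = simulate_si(beta=0.5, mu=0.1, dt=0.1, steps=2000)
print(hist[-1])
```

With these parameters the iteration settles on the endemic equilibrium (S*, I*) = (mu/beta, 1 - mu/beta) = (0.2, 0.8), since the equilibrium of the ODEs is also a fixed point of the Euler map.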
Energy Technology Data Exchange (ETDEWEB)
Patanarapeelert, K. [Faculty of Science, Department of Mathematics, Mahidol University, Rama VI Road, Bangkok 10400 (Thailand); Frank, T.D. [Institute for Theoretical Physics, University of Muenster, Wilhelm-Klemm-Str. 9, 48149 Muenster (Germany)]. E-mail: tdfrank@uni-muenster.de; Friedrich, R. [Institute for Theoretical Physics, University of Muenster, Wilhelm-Klemm-Str. 9, 48149 Muenster (Germany); Beek, P.J. [Faculty of Human Movement Sciences and Institute for Fundamental and Clinical Human Movement Sciences, Vrije Universiteit, Van der Boechorststraat 9, 1081 BT Amsterdam (Netherlands); Tang, I.M. [Faculty of Science, Department of Physics, Mahidol University, Rama VI Road, Bangkok 10400 (Thailand)
2006-12-18
A method is proposed to identify deterministic components of stable and unstable time-delayed systems subjected to noise sources with finite correlation times (colored noise). Both neutral and retarded delay systems are considered. For vanishing correlation times it is shown how to determine their noise amplitudes by minimizing appropriately defined Kullback measures. The method is illustrated by applying it to simulated data from stochastic time-delayed systems representing delay-induced bifurcations, postural sway and ship rolling.
The threshold of a stochastic delayed SIR epidemic model with temporary immunity
Liu, Qun; Chen, Qingmei; Jiang, Daqing
2016-05-01
This paper is concerned with the asymptotic properties of a stochastic delayed SIR epidemic model with temporary immunity. Sufficient conditions for extinction and persistence in the mean of the epidemic are established. The threshold between persistence in the mean and extinction of the epidemic is obtained. Compared with the corresponding deterministic model, the threshold of the stochastic system, which is affected by the white noise, is smaller than the basic reproduction number R0 of the deterministic system.
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
In this paper, the stochastic flow of mappings generated by a Feller convolution semigroup on a compact metric space is studied. This kind of flow is the generalization of superprocesses of stochastic flows and stochastic diffeomorphism induced by the strong solutions of stochastic differential equations.
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
International Nuclear Information System (INIS)
Sgouros, George; Howell, R. W.; Bolch, Wesley E.; Fisher, Darrell R.
2009-01-01
The fundamental physical quantity for relating all biologic effects to radiation exposure is the absorbed dose, the energy imparted per unit mass of tissue. Absorbed dose is expressed in units of joules per kilogram (J/kg) and is given the special name gray (Gy). Exposure to ionizing radiation may cause both deterministic and stochastic biologic effects. To account for the relative effect per unit absorbed dose that has been observed for different types of radiation, the International Commission on Radiological Protection (ICRP) has established radiation weighting factors for stochastic effects. The product of absorbed dose in Gy and the radiation weighting factor is defined as the equivalent dose. Equivalent dose values are designated by a special named unit, the sievert (Sv). Unlike the situation for stochastic effects, no well-defined formalism and associated special named quantities have been widely adopted for deterministic effects. The therapeutic application of radionuclides and, specifically, α-particle emitters in nuclear medicine has brought to the forefront the need for a well-defined dosimetry formalism applicable to deterministic effects that is accompanied by corresponding special named quantities. This commentary reviews recent proposals related to this issue and concludes with a recommendation to establish a new named quantity.
Solving stochastic inflation for arbitrary potentials
International Nuclear Information System (INIS)
Martin, Jerome; Musso, Marcello
2006-01-01
A perturbative method for solving the Langevin equation of inflationary cosmology in the presence of backreaction is presented. In the Gaussian approximation, the method permits an explicit calculation of the probability distribution of the inflaton field for an arbitrary potential, with or without the volume effects taken into account. The perturbative method is then applied to various concrete models, namely, large field, small field, hybrid, and running mass inflation. New results on the stochastic behavior of the inflaton field in those models are obtained. In particular, it is confirmed that the stochastic effects can be important in new inflation, while it is demonstrated that they are negligible in (vacuum dominated) hybrid inflation. The case of stochastic running mass inflation is discussed in some detail and it is argued that quantum effects blur the distinction between the four classical versions of this model. It is also shown that the self-reproducing regime is likely to be important in this case.
Topological superposition of abstractions of stochastic processes
Bujorianu, L.M.; Bujorianu, M.C.
2008-01-01
In this paper, we present a sound integration mechanism for Markov processes that are abstractions of stochastic hybrid systems (SHS). In a previous work, we have defined a very general model of SHS and we proved that the realization of an SHS is a Markov process. Moreover, we have developed a
The threshold of a stochastic delayed SIR epidemic model with vaccination
Liu, Qun; Jiang, Daqing
2016-11-01
In this paper, we study the threshold dynamics of a stochastic delayed SIR epidemic model with vaccination. We obtain sufficient conditions for extinction and persistence in the mean of the epidemic. The threshold between persistence in the mean and extinction of the stochastic system is also obtained. Compared with the corresponding deterministic model, the threshold of the stochastic system, which is affected by the white noise, is smaller than the basic reproduction number R̄0 of the deterministic system. Results show that time delay has important effects on the persistence and extinction of the epidemic.
Stochastic bifurcation in a model of love with colored noise
Yue, Xiaokui; Dai, Honghua; Yuan, Jianping
2015-07-01
In this paper, we examine the stochastic bifurcation induced by multiplicative Gaussian colored noise in a dynamical model of love, where the random factor is used to describe the complexity and unpredictability of psychological systems. First, the dynamics of the deterministic love-triangle model are briefly considered, including equilibrium points and their stability, chaotic behaviors and chaotic attractors. Then, the influences of Gaussian colored noise with different parameters are explored, such as the phase plots, top Lyapunov exponents, stationary probability density function (PDF) and stochastic bifurcation. The stochastic P-bifurcation, through a qualitative change of the stationary PDF, is observed, and the bifurcation diagram on the parameter plane of correlation time and noise intensity is presented to examine the bifurcation behaviors in detail. Finally, the top Lyapunov exponent is computed to determine the D-bifurcation when the noise intensity reaches a critical value. By comparison, we find there is no connection between the two kinds of stochastic bifurcation.
Heuristic for Stochastic Online Flowshop Problem with Preemption Penalties
Directory of Open Access Journals (Sweden)
Mohammad Bayat
2013-01-01
Full Text Available The deterministic flowshop model is one of the most widely studied problems, whereas its stochastic equivalent has remained a challenge. Furthermore, the preemptive online stochastic flowshop problem has received much less attention, and most of the previous research has considered a nonpreemptive version. Moreover, little attention has been devoted to problems where a certain time penalty is incurred when preemption is allowed. This paper examines the preemptive stochastic online flowshop with the objective of minimizing the expected makespan. All the jobs arrive over time, which means that the existence and the parameters of each job are unknown until its release date. The processing time of each job is stochastic, and the actual processing time is unknown until the job completes. A heuristic procedure for this problem is presented, which is applicable whenever the job processing times are characterized by their means and standard deviations. The performance of the proposed heuristic method is explored using some numerical examples.
Stochastic Collocation Applications in Computational Electromagnetics
Directory of Open Access Journals (Sweden)
Dragan Poljak
2018-01-01
Full Text Available The paper reviews the application of deterministic-stochastic models in some areas of computational electromagnetics. Namely, in certain problems there is an uncertainty in the input data set, as some properties of a system are partly or entirely unknown. Thus, a simple stochastic collocation (SC) method is used to determine relevant statistics about given responses. The SC approach also provides the assessment of related confidence intervals in the set of calculated numerical results. The expansion of the statistical output in terms of mean and variance over a polynomial basis, via the SC method, is shown to be a robust and efficient approach providing a satisfactory convergence rate. This review paper provides certain computational examples from the previous work by the authors illustrating successful application of the SC technique in the areas of ground penetrating radar (GPR), human exposure to electromagnetic fields, and buried lines and grounding systems.
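The SC recipe above (evaluate the deterministic model at quadrature nodes of the random input and combine the outputs with the quadrature weights) can be sketched for a single Gaussian input parameter; the model f here is a hypothetical stand-in for a deterministic electromagnetic solver:

```python
# Stochastic collocation for one Gaussian input: evaluate the
# deterministic model at Gauss-Hermite nodes and weight the results
# to obtain mean and variance of the response.
import numpy as np

def collocate(f, mu, sigma, order=7):
    """Mean and variance of f(X) for X ~ N(mu, sigma^2)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(order)
    x = mu + sigma * nodes                 # map nodes to the input range
    w = weights / np.sqrt(2.0 * np.pi)     # normalize to probabilities
    vals = f(x)
    mean = np.sum(w * vals)
    var = np.sum(w * (vals - mean) ** 2)
    return mean, var

# For X ~ N(0, 1): E[X^2] = 1 and Var[X^2] = 2, both exact here
# because Gauss quadrature with 7 nodes integrates degree-4
# polynomials exactly.
mean, var = collocate(lambda x: x ** 2, mu=0.0, sigma=1.0)
```

The deterministic solver is treated as a black box, which is what makes SC non-intrusive: only repeated forward runs at the collocation points are required.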
Stochastic congestion management in power markets using efficient scenario approaches
International Nuclear Information System (INIS)
Esmaili, Masoud; Amjady, Nima; Shayanfar, Heidar Ali
2010-01-01
Congestion management in electricity markets is traditionally performed using deterministic values of system parameters assuming a fixed network configuration. In this paper, a stochastic programming framework is proposed for congestion management considering the power system uncertainties comprising outage of generating units and transmission branches. The Forced Outage Rate of equipment is employed in the stochastic programming. Using the Monte Carlo simulation, possible scenarios of power system operating states are generated and a probability is assigned to each scenario. The performance of the ordinary as well as Lattice rank-1 and rank-2 Monte Carlo simulations is evaluated in the proposed congestion management framework. As a tradeoff between computation time and accuracy, scenario reduction based on the standard deviation of accepted scenarios is adopted. The stochastic congestion management solution is obtained by aggregating individual solutions of accepted scenarios. Congestion management using the proposed stochastic framework provides a more realistic solution compared with traditional deterministic solutions. Results of testing the proposed stochastic congestion management on the 24-bus reliability test system indicate the efficiency of the proposed framework.
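The scenario-generation step described above can be sketched as follows: each unit's state is sampled using its Forced Outage Rate (FOR) as the outage probability, and identical system states are aggregated into scenarios with empirical probabilities. The FOR values are hypothetical:

```python
# Monte Carlo scenario generation from Forced Outage Rates (FORs):
# each unit is independently unavailable with probability equal to
# its FOR; identical sampled system states are aggregated into
# scenarios carrying empirical probabilities.
import random
from collections import Counter

def generate_scenarios(forced_outage_rates, n_samples, seed=0):
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_samples):
        # 1 = unit available, 0 = unit on forced outage
        state = tuple(int(rng.random() >= f) for f in forced_outage_rates)
        counts[state] += 1
    return {state: c / n_samples for state, c in counts.items()}

scenarios = generate_scenarios([0.05, 0.10, 0.02], n_samples=10_000)
```

Scenario reduction, as in the paper, would then discard low-probability states before solving the congestion management problem for each accepted scenario.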
The ISI distribution of the stochastic Hodgkin-Huxley neuron.
Rowat, Peter F; Greenwood, Priscilla E
2014-01-01
The simulation of ion-channel noise has an important role in computational neuroscience. In recent years several approximate methods of carrying out this simulation have been published, based on stochastic differential equations, and all giving slightly different results. The obvious, and essential, question is: which method is the most accurate and which is most computationally efficient? Here we make a contribution to the answer. We compare interspike interval histograms from simulated data using four different approximate stochastic differential equation (SDE) models of the stochastic Hodgkin-Huxley neuron, as well as the exact Markov chain model simulated by the Gillespie algorithm. One of the recent SDE models is the same as the Kurtz approximation first published in 1978. All the models considered give similar ISI histograms over a wide range of deterministic and stochastic input. Three features of these histograms are an initial peak, followed by one or more bumps, and then an exponential tail. We explore how these features depend on deterministic input and on level of channel noise, and explain the results using the stochastic dynamics of the model. We conclude with a rough ranking of the four SDE models with respect to the similarity of their ISI histograms to the histogram of the exact Markov chain model.
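A minimal sketch of the exact Gillespie algorithm mentioned above, applied to a two-state ion-channel population rather than the full Hodgkin-Huxley kinetics; the per-channel rate constants are hypothetical:

```python
# Exact Gillespie simulation of a two-state (closed <-> open) channel
# population: a toy stand-in for the Markov-chain channel model, with
# hypothetical per-channel opening/closing rates.
import math
import random

def gillespie_channel(n_open, n_closed, k_open, k_close, t_end, seed=0):
    rng = random.Random(seed)
    t, trajectory = 0.0, [(0.0, n_open)]
    while t < t_end:
        a_open = k_open * n_closed        # propensity: closed -> open
        a_close = k_close * n_open        # propensity: open -> closed
        a_total = a_open + a_close
        if a_total == 0.0:
            break
        t += -math.log(1.0 - rng.random()) / a_total  # exponential wait
        if rng.random() < a_open / a_total:
            n_open, n_closed = n_open + 1, n_closed - 1
        else:
            n_open, n_closed = n_open - 1, n_closed + 1
        trajectory.append((t, n_open))
    return trajectory

traj = gillespie_channel(n_open=0, n_closed=100,
                         k_open=1.0, k_close=2.0, t_end=5.0)
```

The SDE models compared in the paper approximate exactly this kind of jump process by a diffusion; the Gillespie trajectory serves as the ground truth.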
Deterministic nonlinear systems a short course
Anishchenko, Vadim S; Strelkova, Galina I
2014-01-01
This text is a short yet complete course on the nonlinear dynamics of deterministic systems. Conceived as a modular set of 15 concise lectures, it reflects the many years of teaching experience of the authors. The lectures treat in turn the fundamental aspects of the theory of dynamical systems, aspects of stability and bifurcations, the theory of deterministic chaos and attractor dimensions, as well as the elements of the theory of Poincaré recurrences. Particular attention is paid to the analysis of the generation of periodic, quasiperiodic and chaotic self-sustained oscillations and to the issue of synchronization in such systems. This book is aimed at graduate students and non-specialist researchers with a background in physics, applied mathematics and engineering wishing to enter this exciting field of research.
Deterministic nanoparticle assemblies: from substrate to solution
International Nuclear Information System (INIS)
Barcelo, Steven J; Gibson, Gary A; Yamakawa, Mineo; Li, Zhiyong; Kim, Ansoon; Norris, Kate J
2014-01-01
The deterministic assembly of metallic nanoparticles is an exciting field with many potential benefits. Many promising techniques have been developed, but challenges remain, particularly for the assembly of larger nanoparticles which often have more interesting plasmonic properties. Here we present a scalable process combining the strengths of top down and bottom up fabrication to generate deterministic 2D assemblies of metallic nanoparticles and demonstrate their stable transfer to solution. Scanning electron and high-resolution transmission electron microscopy studies of these assemblies suggested the formation of nanobridges between touching nanoparticles that hold them together so as to maintain the integrity of the assembly throughout the transfer process. The application of these nanoparticle assemblies as solution-based surface-enhanced Raman scattering (SERS) materials is demonstrated by trapping analyte molecules in the nanoparticle gaps during assembly, yielding uniformly high enhancement factors at all stages of the fabrication process. (paper)
Deterministic dynamics of plasma focus discharges
International Nuclear Information System (INIS)
Gratton, J.; Alabraba, M.A.; Warmate, A.G.; Giudice, G.
1992-04-01
The performance (neutron yield, X-ray production, etc.) of plasma focus discharges fluctuates strongly in series performed with fixed experimental conditions. Previous work suggests that these fluctuations are due to a deterministic "internal" dynamics involving degrees of freedom not controlled by the operator, possibly related to adsorption and desorption of impurities from the electrodes. According to these dynamics the yield of a discharge depends on the outcome of the previous ones. We study 8 series of discharges in three different facilities, with various electrode materials and operating conditions. More evidence of a deterministic internal dynamics is found. The fluctuation pattern depends on the electrode materials and other characteristics of the experiment. A heuristic mathematical model that describes adsorption and desorption of impurities from the electrodes and their consequences on the yield is presented. The model predicts steady yield or periodic and chaotic fluctuations, depending on parameters related to the experimental conditions. (author). 27 refs, 7 figs, 4 tabs
Understanding deterministic diffusion by correlated random walks
International Nuclear Information System (INIS)
Klages, R.; Korabel, N.
2002-01-01
Low-dimensional periodic arrays of scatterers with a moving point particle are ideal models for studying deterministic diffusion. For such systems the diffusion coefficient is typically an irregular function under variation of a control parameter. Here we propose a systematic scheme of how to approximate deterministic diffusion coefficients of this kind in terms of correlated random walks. We apply this approach to two simple examples which are a one-dimensional map on the line and the periodic Lorentz gas. Starting from suitable Green-Kubo formulae we evaluate hierarchies of approximations for their parameter-dependent diffusion coefficients. These approximations converge exactly yielding a straightforward interpretation of the structure of these irregular diffusion coefficients in terms of dynamical correlations. (author)
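The Green-Kubo picture above can be illustrated with the simplest correlated random walk: a persistent walk whose velocity autocorrelation decays geometrically. Summing the correlations gives the closed form D = 1/2 + λ/(1 - λ) with λ = 2p - 1; this toy walk is an illustration of the approximation scheme, not the paper's maps:

```python
# Persistent (one-step correlated) random walk: unit steps, direction
# kept with probability p. The Green-Kubo sum of velocity correlations
# <v0 v_k> = lam**k gives D = 1/2 + lam / (1 - lam), lam = 2p - 1.
import random

def persistent_walk_D(p, n_steps, n_walkers, seed=0):
    """Empirical D = <x^2> / (2 n) over an ensemble of walkers."""
    rng = random.Random(seed)
    total_sq = 0.0
    for _ in range(n_walkers):
        v, x = rng.choice((-1, 1)), 0
        for _ in range(n_steps):
            if rng.random() > p:          # reverse with probability 1 - p
                v = -v
            x += v
        total_sq += x * x
    return total_sq / (2.0 * n_steps * n_walkers)

p = 0.7
lam = 2.0 * p - 1.0
d_theory = 0.5 + lam / (1.0 - lam)        # = 7/6 for p = 0.7
d_empirical = persistent_walk_D(p, n_steps=2000, n_walkers=1000)
```

Truncating the correlation sum at order k yields exactly the hierarchy of random-walk approximations the abstract describes for deterministic maps.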
Deterministic analyses of severe accident issues
International Nuclear Information System (INIS)
Dua, S.S.; Moody, F.J.; Muralidharan, R.; Claassen, L.B.
2004-01-01
Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena alongside of probability methods to evaluate risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications, and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core-melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents
Kulasiri, Don
2002-01-01
Most natural and biological phenomena, such as solute transport in porous media, exhibit variability which cannot be modeled using deterministic approaches. There is evidence in natural phenomena to suggest that some of the observations cannot be explained by models which give deterministic solutions. Stochastic processes have a rich repository of objects which can be used to express the randomness inherent in the system and the evolution of the system over time. The attractiveness of stochastic differential equations (SDEs) and stochastic partial differential equations (SPDEs) comes from the fact that we can integrate the variability of the system along with the scientific knowledge pertaining to the system. One of the aims of this book is to explain some useful concepts in stochastic dynamics so that scientists and engineers with a background in undergraduate differential calculus can appreciate the applicability and appropriateness of these developments in mathematics. The ideas ...
Backward Stochastic H2/H∞ Control: Infinite Horizon Case
Directory of Open Access Journals (Sweden)
Zhen Wu
2014-01-01
Full Text Available The mixed H2/H∞ control problem is studied for systems governed by infinite horizon backward stochastic differential equations (BSDEs) with an exogenous disturbance signal. A necessary and sufficient condition for the existence of a unique solution to the H2/H∞ control problem is derived. The equivalent feedback solution is also discussed. Contrary to the deterministic case or the stochastic forward case, the feedback solution is no longer a feedback of the current state; rather, it is a feedback of the entire history of the state.
International Nuclear Information System (INIS)
Lahanas, M; Baltas, D; Zamboglou, N
2003-01-01
Multiple objectives must be considered in anatomy-based dose optimization for high-dose-rate brachytherapy and a large number of parameters must be optimized to satisfy often competing objectives. For objectives expressed solely in terms of dose variances, deterministic gradient-based algorithms can be applied and a weighted sum approach is able to produce a representative set of non-dominated solutions. As the number of objectives increases, or non-convex objectives are used, local minima can be present and deterministic or stochastic algorithms such as simulated annealing either cannot be used or are not efficient. In this case we employ a modified hybrid version of the multi-objective optimization algorithm NSGA-II. This, in combination with the deterministic optimization algorithm, produces a representative sample of the Pareto set. This algorithm can be used with any kind of objectives, including non-convex, and does not require artificial importance factors. A representation of the trade-off surface can be obtained with more than 1000 non-dominated solutions in 2-5 min. An analysis of the solutions provides information on the possibilities available using these objectives. Simple decision making tools allow the selection of a solution that provides a best fit for the clinical goals. We show an example with a prostate implant and compare results obtained by variance and dose-volume histogram (DVH) based objectives
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions such as intersection, subtraction and complement. A method of “overriding” the source NFA (an NFA not defined with subset construction rules) is used. Past work described only the algorithm for the AND-operator (intersection of regular languages); in this paper the construction for the MINUS-operator (and complement) is shown.
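As a point of comparison with the paper's NFA-overriding method, the textbook product construction for the AND-operator (intersection) of two DFAs can be sketched directly; the representation chosen here is illustrative:

```python
# Product construction: the intersection DFA pairs the states of the
# two inputs and runs them in lockstep; a pair accepts iff both
# components accept. A DFA here is the tuple
# (states, alphabet, delta, start, accepting) with delta[state, symbol].
from itertools import product

def dfa_intersection(d1, d2):
    s1, alpha, t1, q1, f1 = d1
    s2, _, t2, q2, f2 = d2
    states = set(product(s1, s2))
    delta = {((a, b), c): (t1[a, c], t2[b, c])
             for (a, b) in states for c in alpha}
    accepting = {(a, b) for (a, b) in states if a in f1 and b in f2}
    return states, alpha, delta, (q1, q2), accepting

def accepts(dfa, word):
    _, _, delta, state, accepting = dfa
    for c in word:
        state = delta[state, c]
    return state in accepting

# "even number of a's" and "ends with b", over the alphabet {a, b}:
even_a = ({0, 1}, {'a', 'b'},
          {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}, 0, {0})
ends_b = ({0, 1}, {'a', 'b'},
          {(0, 'a'): 0, (1, 'a'): 0, (0, 'b'): 1, (1, 'b'): 1}, 0, {1})
both = dfa_intersection(even_a, ends_b)
```

Subtraction follows by intersecting with the complement of the second DFA, which simply swaps its accepting and non-accepting states.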
International Nuclear Information System (INIS)
Wellens, Thomas; Shatokhin, Vyacheslav; Buchleitner, Andreas
2004-01-01
We are taught by conventional wisdom that the transmission and detection of signals is hindered by noise. However, during the last two decades, the paradigm of stochastic resonance (SR) proved this assertion wrong: indeed, addition of the appropriate amount of noise can boost a signal and hence facilitate its detection in a noisy environment. Due to its simplicity and robustness, SR has been implemented by mother nature on almost every scale, thus attracting interdisciplinary interest from physicists, geologists, engineers, biologists and medical doctors, who nowadays use it as an instrument for their specific purposes. At the present time, there exist a lot of diversified models of SR. Taking into account the progress achieved in both theoretical understanding and practical application of this phenomenon, we put the focus of the present review not on discussing in depth technical details of different models and approaches but rather on presenting a general and clear physical picture of SR on a pedagogical level. Particular emphasis will be given to the implementation of SR in generic quantum systems-an issue that has received limited attention in earlier review papers on the topic. The major part of our presentation relies on the two-state model of SR (or on simple variants thereof), which is general enough to exhibit the main features of SR and, in fact, covers many (if not most) of the examples of SR published so far. In order to highlight the diversity of the two-state model, we shall discuss several examples from such different fields as condensed matter, nonlinear and quantum optics and biophysics. Finally, we also discuss some situations that go beyond the generic SR scenario but are still characterized by a constructive role of noise
Hybrid Methods for Muon Accelerator Simulations with Ionization Cooling
Energy Technology Data Exchange (ETDEWEB)
Kunz, Josiah [Anderson U.; Snopok, Pavel [Fermilab; Berz, Martin [Michigan State U.; Makino, Kyoko [Michigan State U.
2018-03-28
Muon ionization cooling involves passing particles through solid or liquid absorbers. Careful simulations are required to design muon cooling channels. New features have been developed for inclusion in the transfer map code COSY Infinity to follow the distribution of charged particles through matter. To study the passage of muons through material, the transfer map approach alone is not sufficient. The interplay of beam optics and atomic processes must be studied by a hybrid transfer map--Monte-Carlo approach in which transfer map methods describe the deterministic behavior of the particles, and Monte-Carlo methods are used to provide corrections accounting for the stochastic nature of scattering and straggling of particles. The advantage of the new approach is that the vast majority of the dynamics are represented by fast application of the high-order transfer map of an entire element and accumulated stochastic effects. The gains in speed are expected to simplify the optimization of cooling channels which is usually computationally demanding. Progress on the development of the required algorithms and their application to modeling muon ionization cooling channels is reported.
Numerical Solution of Stochastic Nonlinear Fractional Differential Equations
El-Beltagy, Mohamed A.; Al-Juhani, Amnah
2015-01-01
Using Wiener-Hermite expansion (WHE) technique in the solution of the stochastic partial differential equations (SPDEs) has the advantage of converting the problem to a system of deterministic equations that can be solved efficiently using the standard deterministic numerical methods [1]. WHE is the only known expansion that handles the white/colored noise exactly. This work introduces a numerical estimation of the stochastic response of the Duffing oscillator with fractional or variable order damping and driven by white noise. The WHE technique is integrated with the Grunwald-Letnikov approximation in case of fractional order and with Coimbra approximation in case of variable-order damping. The numerical solver was tested with the analytic solution and with Monte-Carlo simulations. The developed mixed technique was shown to be efficient in simulating SPDEs.
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
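For orientation, one analysis step of the standard perturbed-observation EnKF (the filter whose mean-field limit is analyzed above) can be sketched for an ensemble observing only its first state component; the dimensions, noise level and prior ensemble are illustrative choices, not the DMFEnKF itself:

```python
# One analysis step of the standard stochastic (perturbed-observation)
# ensemble Kalman filter, observing only the first state component.
import numpy as np

def enkf_update(ensemble, y_obs, obs_std, seed=0):
    """ensemble: (n_members, n_dim) array; returns updated ensemble."""
    rng = np.random.default_rng(seed)
    n, d = ensemble.shape
    H = np.zeros((1, d))
    H[0, 0] = 1.0                                  # observe x[0] only
    X = ensemble - ensemble.mean(axis=0)           # ensemble anomalies
    P = X.T @ X / (n - 1)                          # sample covariance
    S = H @ P @ H.T + obs_std ** 2                 # innovation covariance
    K = P @ H.T / S                                # Kalman gain, (d, 1)
    perturbed = y_obs + obs_std * rng.standard_normal((n, 1))
    innovations = perturbed - ensemble @ H.T
    return ensemble + innovations @ K.T

prior = np.random.default_rng(1).normal(0.0, 2.0, size=(500, 2))
post = enkf_update(prior, y_obs=1.0, obs_std=0.5)
```

The deterministic mean-field approximation replaces this Monte Carlo ensemble with a PDE solve for the density plus a quadrature rule, which is the source of its dimension-dependent advantage.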
Maximal stochastic transport in the Lorenz equations
Energy Technology Data Exchange (ETDEWEB)
Agarwal, Sahil, E-mail: sahil.agarwal@yale.edu [Program in Applied Mathematics, Yale University, New Haven (United States); Wettlaufer, J.S., E-mail: john.wettlaufer@yale.edu [Program in Applied Mathematics, Yale University, New Haven (United States); Departments of Geology & Geophysics, Mathematics and Physics, Yale University, New Haven (United States); Mathematical Institute, University of Oxford, Oxford (United Kingdom); Nordita, Royal Institute of Technology and Stockholm University, Stockholm (Sweden)
2016-01-08
We calculate the stochastic upper bounds for the Lorenz equations using an extension of the background method. In analogy with Rayleigh–Bénard convection the upper bounds are for heat transport versus Rayleigh number. As might be expected, the stochastic upper bounds are larger than the deterministic counterpart of Souza and Doering [1], but their variation with noise amplitude exhibits interesting behavior. Below the transition to chaotic dynamics the upper bounds increase monotonically with noise amplitude. However, in the chaotic regime this monotonicity depends on the number of realizations in the ensemble; at a particular Rayleigh number the bound may increase or decrease with noise amplitude. The origin of this behavior is the coupling between the noise and unstable periodic orbits, the degree of which depends on the degree to which the ensemble represents the ergodic set. This is confirmed by examining the close returns plots of the full solutions to the stochastic equations and the numerical convergence of the noise correlations. The numerical convergence of both the ensemble and time averages of the noise correlations is sufficiently slow that it is the limiting aspect of the realization of these bounds. Finally, we note that the full solutions of the stochastic equations demonstrate that the effect of noise is equivalent to the effect of chaos.
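A minimal Euler-Maruyama sketch of the Lorenz equations with additive white noise, the kind of stochastic realization discussed above; the noise amplitude and the classical parameter values are illustrative choices, not the paper's configuration:

```python
# Euler-Maruyama integration of the Lorenz equations with additive
# white noise of amplitude noise_amp (the Wiener increment scales
# with sqrt(dt)).
import numpy as np

def stochastic_lorenz(x0, t_end, dt, noise_amp, seed=0,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    rng = np.random.default_rng(seed)
    n = int(round(t_end / dt))
    xs = np.empty((n + 1, 3))
    xs[0] = x0
    for k in range(n):
        x, y, z = xs[k]
        drift = np.array([sigma * (y - x),
                          x * (rho - z) - y,
                          x * y - beta * z])
        noise = noise_amp * np.sqrt(dt) * rng.standard_normal(3)
        xs[k + 1] = xs[k] + dt * drift + noise
    return xs

path = stochastic_lorenz((1.0, 1.0, 1.0), t_end=10.0, dt=1e-3,
                         noise_amp=0.5)
```

Averaging functionals of such realizations over an ensemble of noise paths is what the abstract's transport bounds are compared against, and the slow convergence of those averages is exactly the limitation the authors note.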
Studies on the stochastic theory of coupled reactor kinetic-thermohydraulic systems Pt. 2
International Nuclear Information System (INIS)
Mesko, L.
1983-06-01
The description is given of the noise phenomena taking place in a multivariable coupled system by a comprehensive model based on the theory of stochastic fluctuations. A comparison is made with models using transfer function formalism for systems characterized by deterministic open and closed loop signal transmission properties. The advantages of the stochastic model are illustrated by simple reactor dynamical examples having diagnostical importance. (author)
Asymptotic Behavior of a Chemostat Model with Stochastic Perturbation on the Dilution Rate
Directory of Open Access Journals (Sweden)
Chaoqun Xu
2013-01-01
Full Text Available We present a stochastic simple chemostat model in which the dilution rate is influenced by white noise. The long-time behavior of the system is studied. Mainly, we show how the solution spirals around the washout equilibrium and the positive equilibrium of the deterministic system under different conditions. Furthermore, sufficient conditions for persistence in the mean of the stochastic system and for washout of the microorganism are obtained. Numerical simulations are carried out to support our results.
Influence of wind energy forecast in deterministic and probabilistic sizing of reserves
Energy Technology Data Exchange (ETDEWEB)
Gil, A.; Torre, M. de la; Dominguez, T.; Rivas, R. [Red Electrica de Espana (REE), Madrid (Spain). Dept. Centro de Control Electrico
2010-07-01
One of the challenges in large-scale wind energy integration in electrical systems is coping with wind forecast uncertainties when sizing generation reserves. These reserves must be sized large enough that they do not compromise security of supply or the balance of the system, but economic efficiency must also be kept in mind. This paper describes two methods of sizing spinning reserves that take wind forecast uncertainties into account: a deterministic method using a probabilistic wind forecast, and a probabilistic method using stochastic variables. The deterministic method calculates the spinning reserve needed by adding components, each intended to cover a single uncertainty: demand errors, the largest thermal generation loss and wind forecast errors. The probabilistic method assumes that demand forecast errors, short-term thermal group unavailability and wind forecast errors are independent stochastic variables and calculates the probability density function of the three variables combined. These methods are being used in the Spanish peninsular system, in which wind energy accounted for 14% of the total electrical energy produced in 2009 and which has one of the highest wind penetration levels in the world. (orig.)
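The two sizing philosophies can be contrasted in a toy Monte Carlo sketch. The error models and numbers below (normal forecast errors, a single 1000 MW unit with a 2% trip probability) are invented for illustration and are not REE's actual figures:

```python
import numpy as np

def reserve_requirements(n_samples=200_000, coverage=0.999, seed=2):
    """Compare deterministic and probabilistic spinning-reserve sizing
    against the same (illustrative) uncertainty models, in MW."""
    rng = np.random.default_rng(seed)
    sd_demand, sd_wind, unit = 300.0, 500.0, 1000.0
    demand_err = rng.normal(0.0, sd_demand, n_samples)  # demand forecast error
    outage = rng.binomial(1, 0.02, n_samples) * unit    # largest unit trips
    wind_err = rng.normal(0.0, sd_wind, n_samples)      # wind forecast error
    # Deterministic: add a worst-case component per uncertainty source.
    deterministic = 3 * sd_demand + unit + 3 * sd_wind
    # Probabilistic: one quantile of the combined error distribution.
    probabilistic = np.quantile(demand_err + outage + wind_err, coverage)
    return deterministic, probabilistic
```

The probabilistic figure is typically smaller because independent errors rarely peak simultaneously, which is the economic argument for the probabilistic method.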
Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)
Kędra, Mariola
2014-02-01
Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools to daily river flow data gives consistent, reliable and clear-cut answers to the question. The outcomes indicate that the investigated discharge dynamics is not random but deterministic. Moreover, the results fully confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge data from two selected gauging stations on a mountain river in southern Poland, the Raba River.
Time-ordered product expansions for computational stochastic system biology
International Nuclear Information System (INIS)
Mjolsness, Eric
2013-01-01
The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie’s stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems. (paper)
Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process
Turner, Douglas C.; Ladde, Gangaram S.
2018-03-01
Analytical solutions, discretization schemes and simulation results are presented for the time-delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended under stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty of the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al. In fact, some of these results exhibit the oscillatory dynamic behavior generated by the stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models illustrate changes in the behavior of the recently developed stochastic model of Hazra et al.
Modeling and Simulation of Multi-scale Environmental Systems with Generalized Hybrid Petri Nets
Directory of Open Access Journals (Sweden)
Mostafa eHerajy
2015-07-01
Full Text Available Predicting and studying the dynamics and properties of environmental systems necessitates the construction and simulation of mathematical models entailing different levels of complexity. Such computational experiments often require the combination of discrete and continuous variables as well as processes operating at different time scales. Furthermore, the iterative steps of constructing and analyzing environmental models may involve researchers with different backgrounds. Hybrid Petri nets can contribute to overcoming such challenges, as they facilitate the implementation of systems integrating discrete and continuous dynamics. Additionally, the visual depiction of model components helps to bridge the gap between scientists with distinct expertise working on the same problem. Thus, modeling environmental systems with hybrid Petri nets enables the construction of complex processes while keeping the models comprehensible for researchers working on the same project with significantly divergent educational paths. In this paper we propose the utilization of a special class of hybrid Petri nets, Generalized Hybrid Petri Nets (GHPN), to model and simulate environmental systems exhibiting processes that interact at different time scales. GHPN integrate stochastic and deterministic semantics as well as other types of special basic events. Moreover, a case study is presented to illustrate the use of GHPN in constructing and simulating multi-timescale environmental scenarios.
Spatial stochasticity and non-continuum effects in gas flows
Energy Technology Data Exchange (ETDEWEB)
Dadzie, S. Kokou, E-mail: k.dadzie@glyndwr.ac.uk [Mechanical and Aeronautical Engineering, Glyndwr University, Mold Road, Wrexham LL11 2AW (United Kingdom); Reese, Jason M., E-mail: jason.reese@strath.ac.uk [Department of Mechanical and Aerospace Engineering, University of Strathclyde, Glasgow G1 1XJ (United Kingdom)
2012-02-06
We investigate the relationship between spatial stochasticity and non-continuum effects in gas flows. A kinetic model for a dilute gas is developed using strictly stochastic molecular-model reasoning, without primarily referring to either the Liouville or the Boltzmann equations for dilute gases. The kinetic equation, a stochastic version of the well-known deterministic Boltzmann equation for a dilute gas, is then associated with a set of macroscopic equations for the case of a monatomic gas. Tests based on a heat conduction configuration and sound wave dispersion show that spatial stochasticity can explain some non-continuum effects seen in gases. -- Highlights: ► We investigate the effects of molecular spatial stochasticity in the non-continuum regime. ► We present a simplified spatial stochastic kinetic equation. ► We present spatial stochastic macroscopic flow equations. ► We show the effects of the new model on sound wave dispersion prediction. ► We show the effects of the new approach on density profiles in heat conduction.
A Proposed Stochastic Finite Difference Approach Based on Homogenous Chaos Expansion
Directory of Open Access Journals (Sweden)
O. H. Galal
2013-01-01
Full Text Available This paper proposes a stochastic finite difference approach based on homogeneous chaos expansion (SFDHC). The approach can handle time-dependent nonlinear as well as linear systems with deterministic or stochastic initial and boundary conditions. In this approach, the included stochastic parameters are modeled as second-order stochastic processes and are expanded using the Karhunen-Loève expansion, while the response function is approximated using homogeneous chaos expansion. Galerkin projection is used to convert the original stochastic partial differential equation (PDE) into a set of coupled deterministic partial differential equations, which are then solved using the finite difference method. Two well-known equations were used to validate the efficiency of the proposed method: the first being the linear diffusion equation with a stochastic parameter and the second the nonlinear Burgers' equation with a stochastic parameter and stochastic initial and boundary conditions. In both examples, the probability distribution function of the response closely matched the results obtained from Monte Carlo simulation, with optimized computational cost.
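The Karhunen-Loève step can be illustrated with the one process whose expansion is fully analytic, Brownian motion on [0, 1]; this is a generic sketch of a truncated KL expansion, not the paper's diffusion or Burgers problem:

```python
import numpy as np

def brownian_kl(t, n_terms, xi):
    """Truncated Karhunen-Loeve expansion of Brownian motion on [0, 1]:
    W(t) ~ sum_k xi_k * sqrt(2)/((k - 1/2) pi) * sin((k - 1/2) pi t),
    with xi_k independent standard normal coefficients."""
    k = np.arange(1, n_terms + 1)
    lam = np.sqrt(2.0) / ((k - 0.5) * np.pi)        # sqrt(eigenvalues)
    basis = np.sin(np.outer(np.asarray(t), (k - 0.5) * np.pi))
    return basis @ (lam * xi)
```

In the SFDHC setting the random coefficients xi_k become the germ of the homogeneous chaos expansion of the response; here the truncation quality can be checked directly, since the variance of W(1) should approach 1 as n_terms grows.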
Directory of Open Access Journals (Sweden)
Jie Yu
2015-01-01
Full Text Available A virtual power plant (VPP) is an aggregation of multiple distributed generations, energy storage, and controllable loads. Affected by natural conditions, the uncontrollable distributed generations within a VPP, such as wind and photovoltaic generation, are highly random and mutually correlated. Considering this randomness and its correlation, this paper constructs a chance-constrained stochastic optimal dispatch of the VPP that includes stochastic variables and their random correlation. The probability distributions of the individual wind and photovoltaic generations are described by empirical distribution functions, and their joint probability density model is established by a Frank copula function. Sample average approximation (SAA) is then applied to convert the chance-constrained stochastic optimization model into a deterministic optimization model. Simulation cases are calculated in AIMMS. The simulation results of this paper's mathematical model are compared with the results of a deterministic optimization model without stochastic variables and of a stochastic optimization considering stochastic variables but not their random correlation. Furthermore, this paper analyzes how the SAA sampling frequency and the confidence level influence the results of the stochastic optimization. The numerical example results show the effectiveness of the stochastic optimal dispatch of the VPP considering the randomness and correlations of the distributed generations.
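The SAA idea can be sketched for a single chance constraint: replace P(g >= demand - W) >= alpha by its empirical counterpart over sampled scenarios, which for this one-dimensional case reduces to a sample quantile. The wind model and all numbers are illustrative assumptions, not the paper's VPP data:

```python
import numpy as np

def saa_reserve(demand, wind_samples, alpha=0.95):
    """Smallest conventional generation g such that the empirical
    probability of covering (demand - wind) is at least alpha -- the
    sample average approximation of P(g >= demand - W) >= alpha."""
    net_load = demand - wind_samples
    return np.quantile(net_load, alpha)

rng = np.random.default_rng(4)
wind = rng.weibull(2.0, 10_000) * 50.0   # assumed wind output model (MW)
g = saa_reserve(200.0, wind)             # dispatch covering 95% of scenarios
```

In the full model the same substitution turns every chance constraint into a counting constraint over scenarios, so a deterministic solver (e.g. AIMMS) can handle it; correlation between units would enter through jointly sampled (copula-based) scenarios rather than independent draws.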
Stochastic tools in turbulence
Lumley, John L
2012-01-01
Stochastic Tools in Turbulence discusses the available mathematical tools for describing stochastic vector fields in order to solve problems related to these fields. The book deals with the needs of turbulence in relation to stochastic vector fields, particularly three-dimensional aspects, linear problems, and stochastic model building. The text describes probability distributions and densities, including Lebesgue integration, conditional probabilities, conditional expectations, statistical independence, and lack of correlation. The book also explains the significance of the moments, the properties of the
Stochastic Levy Divergence and Maxwell's Equations
Directory of Open Access Journals (Sweden)
B. O. Volkov
2015-01-01
Full Text Available One of the main reasons for interest in the Levy Laplacian and its analogues, such as the Levy d'Alembertian, is the connection of these operators with gauge fields. The theorem proved by Accardi, Gibilisco and Volovich states that a connection in a bundle over a Euclidean space or over a Minkowski space is a solution of the Yang-Mills equations if and only if the parallel transport corresponding to the connection is a solution of the Laplace equation for the Levy Laplacian or of the d'Alembert equation for the Levy d'Alembertian, respectively (see [5, 6]). There are two approaches to defining Levy-type operators, both of which date back to the original works of Levy [7]. The first is that the Levy Laplacian (or Levy d'Alembertian) is defined as an integral functional generated by a special form of the second derivative. This approach is used in the works [5, 6], as well as in the paper [8] of Leandre and Volovich, where the stochastic Levy Laplacian is discussed. The other approach is to define the Levy Laplacian as the Cesaro mean of second-order derivatives along a family of vectors forming an orthonormal basis in the Hilbert space. This definition of the Levy Laplacian is used for the description of solutions of the Yang-Mills equations in the paper [10]. The present work shows that the definitions of the Levy Laplacian and the Levy d'Alembertian based on Cesaro averaging of the second-order directional derivatives can be transferred to the stochastic case. In the article the values of these operators on a stochastic parallel transport associated with a connection (vector potential) are found. In this case, unlike the deterministic case and the stochastic case of the Levy Laplacian from [8], these values are not equal to zero if the vector potential corresponding to the stochastic parallel transport is a solution of Maxwell's equations. As a result, the two approaches to defining the Levy Laplacian in the stochastic case give different operators. This
Deterministic and probabilistic approach to safety analysis
International Nuclear Information System (INIS)
Heuser, F.W.
1980-01-01
The examples discussed in this paper show that reliability analysis methods can be applied fairly well to interpret deterministic safety criteria in quantitative terms. For a further improved extension of applied reliability analysis, it has turned out that the influence of operational and control systems and of component protection devices should be considered in detail with the aid of reliability analysis methods. Of course, an extension of probabilistic analysis must be accompanied by further development of the methods and a broadening of the data base. (orig.)
Diffusion in Deterministic Interacting Lattice Systems
Medenjak, Marko; Klobas, Katja; Prosen, Tomaž
2017-09-01
We study the reversible deterministic dynamics of classical charged particles on a lattice with hard-core interaction. It is rigorously shown that the system exhibits three types of transport phenomena, ranging from ballistic, through diffusive, to insulating. By obtaining exact expressions for the current time-autocorrelation function we are able to calculate the linear response transport coefficients, such as the diffusion constant and the Drude weight. Additionally, we calculate the long-time charge profile after an inhomogeneous quench and obtain a diffusive profile with the Green-Kubo diffusion constant. Exact analytical results are corroborated by Monte Carlo simulations.
A Deterministic Safety Assessment of a Pyro-processed Waste Repository
International Nuclear Information System (INIS)
Lee, Youn Myoung; Jeong, Jong Tae; Choi, Jong Won
2012-01-01
A GoldSim template program has been developed for a safety assessment of a hybrid-type repository system, called 'A-KRS', in which two kinds of pyro-processed radioactive wastes are disposed of: low-level metal wastes and ceramic high-level wastes that arise from the pyro-processing of PWR spent nuclear fuels. This program is ready for both deterministic and probabilistic total system performance assessment and is able to evaluate nuclide release from the repository and further transport into the geosphere and biosphere under various normal and disruptive natural and man-made events and scenarios. The A-KRS has been deterministically assessed with 5 various normal and abnormal scenarios associated with nuclide release and transport in and around the repository. Dose exposure rates to the farming exposure group have been evaluated in accordance with all the scenarios and then compared with one another.
Stochastic Gabor reflectivity and acoustic impedance inversion
Hariri Naghadeh, Diako; Morley, Christopher Keith; Ferguson, Angus John
2018-02-01
, obtaining bias could help the method to estimate reliable AI. To assess the effect of random noise on deterministic and stochastic inversion results, a stationary noisy trace with a signal-to-noise ratio of 2 was used. The results highlight the inability of deterministic inversion to deal with a noisy data set, even when using a high number of regularization parameters. Also, despite the low signal level, stochastic Gabor inversion can not only correctly estimate the wavelet's properties but also, because of the bias from well logs, produce an inversion result very close to the real AI. Comparing deterministic and the introduced inversion results on a real data set shows that low-resolution results, especially in the deeper parts of seismic sections using deterministic inversion, create significant reliability problems for seismic prospects, but this pitfall is solved completely using stochastic Gabor inversion. The estimated AI using Gabor inversion in the time domain is much better and faster than general Gabor inversion in the frequency domain, owing to the extra number of windows required to analyze the time-frequency information and the amount of temporal increment between windows. In contrast, stochastic Gabor inversion can estimate trustworthy physical properties close to the real characteristics. Application to a real data set demonstrated the ability to detect the direction of a volcanic intrusion and to delineate the lithology distribution along the fan. Comparing the inversion results highlights the efficiency of stochastic Gabor inversion in delineating lateral lithology changes, owing to the improved frequency content and zero phasing of the final inversion volume.
Universal resources for approximate and stochastic measurement-based quantum computation
International Nuclear Information System (INIS)
Mora, Caterina E.; Piani, Marco; Miyake, Akimasa; Van den Nest, Maarten; Duer, Wolfgang; Briegel, Hans J.
2010-01-01
We investigate which quantum states can serve as universal resources for approximate and stochastic measurement-based quantum computation in the sense that any quantum state can be generated from a given resource by means of single-qubit (local) operations assisted by classical communication. More precisely, we consider the approximate and stochastic generation of states, resulting, for example, from a restriction to finite measurement settings or from possible imperfections in the resources or local operations. We show that entanglement-based criteria for universality obtained in M. Van den Nest et al. [New J. Phys. 9, 204 (2007)] for the exact, deterministic case can be lifted to the much more general approximate, stochastic case. This allows us to move from the idealized situation (exact, deterministic universality) considered in previous works to the practically relevant context of nonperfect state preparation. We find that any entanglement measure fulfilling some basic requirements needs to reach its maximum value on some element of an approximate, stochastic universal family of resource states, as the resource size grows. This allows us to rule out various families of states as being approximate, stochastic universal. We prove that approximate, stochastic universality is in general a weaker requirement than deterministic, exact universality and provide resources that are efficient approximate universal, but not exact deterministic universal. We also study the robustness of universal resources for measurement-based quantum computation under realistic assumptions about the (imperfect) generation and manipulation of entangled states, giving an explicit expression for the impact that errors made in the preparation of the resource have on the possibility to use it for universal approximate and stochastic state preparation. Finally, we discuss the relation between our entanglement-based criteria and recent results regarding the uselessness of states with a high
Metastable states and quasicycles in a stochastic Wilson-Cowan model of neuronal population dynamics
Bressloff, Paul C.
2010-01-01
We analyze a stochastic model of neuronal population dynamics with intrinsic noise. In the thermodynamic limit N→∞, where N determines the size of each population, the dynamics is described by deterministic Wilson-Cowan equations. On the other hand
Directory of Open Access Journals (Sweden)
John A. D. Appleby
2008-01-01
Full Text Available This paper considers necessary and sufficient conditions for the solution of a stochastically and deterministically perturbed Volterra equation to converge exponentially to a nonequilibrium and nontrivial limit. Convergence in an almost sure and pth mean sense is obtained.
International Nuclear Information System (INIS)
Frank, T D
2005-01-01
Stationary distributions of processes are derived that involve a time delay and are defined by a linear stochastic neutral delay differential equation. The distributions are Gaussian distributions. The variances of the Gaussian distributions are either monotonically increasing or decreasing functions of the time delays. The variances become infinite when fixed points of corresponding deterministic processes become unstable. (letter to the editor)
The Limit Behavior of a Stochastic Logistic Model with Individual Time-Dependent Rates
Directory of Open Access Journals (Sweden)
Yilun Shang
2013-01-01
Full Text Available We investigate a variant of the stochastic logistic model that allows individual variation and time-dependent infection and recovery rates. The model is described as a heterogeneous density-dependent Markov chain. We show that the process can be approximated by a deterministic process defined by an integral equation as the population size grows.
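A minimal sketch of a density-dependent Markov chain of this kind is the homogeneous, constant-rate SIS logistic model (a special case without the paper's individual variation or time-dependent rates):

```python
import numpy as np

def sis_markov(n=1000, x0=10, beta=2.0, gamma=1.0, t_max=15.0, seed=5):
    """Density-dependent SIS logistic Markov chain: with x infected out
    of n, infections occur at rate beta*x*(n-x)/n and recoveries at rate
    gamma*x. For large n the scaled process x/n tracks the deterministic
    logistic limit y' = beta*y*(1 - y) - gamma*y."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    while t < t_max and x > 0:
        up = beta * x * (n - x) / n       # infection propensity
        down = gamma * x                  # recovery propensity
        total = up + down
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() < up / total else -1
    return x / n

y = sis_markov()   # scaled state near the endemic level 1 - gamma/beta
```

With beta = 2 and gamma = 1 the deterministic limit has endemic equilibrium 1 - gamma/beta = 0.5, and for n = 1000 the scaled chain fluctuates around that value, illustrating the law-of-large-numbers approximation the paper generalizes.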
Reduced-Complexity Deterministic Annealing for Vector Quantizer Design
Directory of Open Access Journals (Sweden)
Ortega Antonio
2005-01-01
Full Text Available This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design by using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, where the latter is the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to result in near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms have significantly improved the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade the local minima and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16 483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Other than VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
Ogawa, Shigeyoshi
2017-01-01
This book presents an elementary introduction to the theory of noncausal stochastic calculus that arises as a natural alternative to the standard theory of stochastic calculus founded in 1944 by Professor Kiyoshi Itô. As is generally known, Itô calculus is essentially based on the "hypothesis of causality", requiring random functions to be adapted to a natural filtration generated by Brownian motion or, more generally, by a square-integrable martingale. The intention of this book is to establish a stochastic calculus that is free from this "hypothesis of causality". To be more precise, a noncausal theory of stochastic calculus is developed in this book, based on the noncausal integral introduced by the author in 1979. After studying basic properties of the noncausal stochastic integral, various concrete problems of noncausal nature are considered, mostly concerning stochastic functional equations such as SDE, SIE, SPDE, and others, to show not only the necessity of such a theory of noncausal stochastic calculus but ...
Safety margins in deterministic safety analysis
International Nuclear Information System (INIS)
Viktorov, A.
2011-01-01
The concept of safety margins has acquired a certain prominence in attempts to demonstrate quantitatively the level of nuclear power plant safety by means of deterministic analysis, especially when considering impacts from plant ageing and discovery issues. A number of international and industry publications exist that discuss various applications and interpretations of safety margins. The objective of this presentation is to bring together and examine in some detail, from the regulatory point of view, the safety margins that relate to deterministic safety analysis. In this paper, definitions of various safety margins are presented and discussed along with the regulatory expectations for them. Interrelationships of analysis input and output parameters with corresponding limits are explored. It is shown that the overall safety margin is composed of several components, each having different origins and potential uses; in particular, margins associated with analysis output parameters are contrasted with margins linked to the analysis input. While these are separate, it is possible to influence output margins through the analysis input and analysis method. Preserving safety margins is tantamount to maintaining safety. At the same time, efficiency of operation requires optimization of safety margins taking into account various technical and regulatory considerations. For this, basic definitions and rules for safety margins must first be established. (author)
A mathematical theory for deterministic quantum mechanics
Energy Technology Data Exchange (ETDEWEB)
Hooft, Gerard ' t [Institute for Theoretical Physics, Utrecht University (Netherlands); Spinoza Institute, Postbox 80.195, 3508 TD Utrecht (Netherlands)
2007-05-15
Classical, i.e. deterministic theories underlying quantum mechanics are considered, and it is shown how an apparent quantum mechanical Hamiltonian can be defined in such theories, being the operator that generates evolution in time. It includes various types of interactions. An explanation must be found for the fact that, in the real world, this Hamiltonian is bounded from below. The mechanism that can produce exactly such a constraint is identified in this paper. It is the fact that not all classical data are registered in the quantum description. Large sets of values of these data are assumed to be indistinguishable, forming equivalence classes. It is argued that this should be attributed to information loss, such as what one might suspect to happen during the formation and annihilation of virtual black holes. The nature of the equivalence classes follows from the positivity of the Hamiltonian. Our world is assumed to consist of a very large number of subsystems that may be regarded as approximately independent, or weakly interacting with one another. As long as two (or more) sectors of our world are treated as being independent, they all must be demanded to be restricted to positive energy states only. What follows from these considerations is a unique definition of energy in the quantum system in terms of the periodicity of the limit cycles of the deterministic model.
Design of deterministic OS for SPLC
International Nuclear Information System (INIS)
Son, Choul Woong; Kim, Dong Hoon; Son, Gwang Seop
2012-01-01
Existing safety PLCs for use in nuclear power plants operate on priority-based scheduling, in which the highest-priority task runs first. This type of scheduling scheme determines processing priorities when there are multiple requests for processing or a lack of resources available for processing, guaranteeing execution of higher-priority tasks. Such scheduling is prone to exhaustion of resources and continuous preemption by devices with high priorities, and therefore there is uncertainty every period in terms of smooth running of the overall system. Hence, it is difficult to apply this type of scheme where deterministic operation is required, such as in nuclear power plants. Also, existing PLCs either have no output logic for devices' redundant selection or have it set in a fixed way; as a result they were extremely inefficient for redundant systems such as those of a nuclear power plant, and their use was limited. Therefore, functional modules that can manage and control all devices need to be developed by improving the way priorities are assigned among the devices, making it more flexible. A management module should be able to schedule all devices of the system, manage resources, analyze the states of the devices, give warnings in abnormal situations such as device failure or resource scarcity, and decide how to handle them. Also, the management module should have output logic for device redundancy, as well as deterministic processing capabilities, such as with regard to device interrupt events
Deterministic prediction of surface wind speed variations
Directory of Open Access Journals (Sweden)
G. V. Drisya
2014-11-01
Full Text Available Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management, such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distributions of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within practically tolerable margins of error.
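One simple instance of such deterministic, data-driven forecasting is an analog (nearest-neighbour) predictor in a delay-coordinate embedding; the embedding dimension, lag, and neighbour count below are illustrative choices, not the paper's tuned parameters:

```python
import numpy as np

def analog_forecast(series, horizon, dim=3, lag=1, k=5):
    """Nearest-neighbour (analog) prediction in a delay-embedded phase
    space: find the k past states closest to the current state and
    average their 'horizon'-step futures."""
    n = len(series)
    # Delay vectors [x_t, x_{t-lag}, ..., x_{t-(dim-1)lag}] with a known future
    idx = np.arange((dim - 1) * lag, n - horizon)
    emb = np.stack([series[idx - j * lag] for j in range(dim)], axis=1)
    current = np.array([series[n - 1 - j * lag] for j in range(dim)])
    d = np.linalg.norm(emb - current, axis=1)
    nearest = idx[np.argsort(d)[:k]]        # k closest past analogs
    return series[nearest + horizon].mean()
```

For a low-dimensional deterministic signal the analogs share the current state's future, so the averaged continuation is an accurate short-horizon forecast; on truly stochastic data the analogs disagree and the skill collapses, which is the diagnostic logic behind such methods.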
A hybrid modelling approach to simulating foot-and-mouth disease outbreaks in Australian livestock
Directory of Open Access Journals (Sweden)
Richard A Bradhurst
2015-03-01
Full Text Available Foot-and-mouth disease (FMD) is a highly contagious and economically important viral disease of cloven-hoofed animals. Australia's freedom from FMD underpins a valuable trade in live animals and animal products. An outbreak of FMD would result in the loss of export markets and cause severe disruption to domestic markets. The prevention of, and contingency planning for, FMD are of key importance to government, industry, producers and the community. The spread and control of FMD is complex and dynamic due to a highly contagious multi-host pathogen operating in a heterogeneous environment across multiple jurisdictions. Epidemiological modelling is increasingly being recognized as a valuable tool for investigating the spread of disease under different conditions and the effectiveness of control strategies. Models of infectious disease can be broadly classified as: population-based models that are formulated from the top down and employ population-level relationships to describe individual-level behaviour, individual-based models that are formulated from the bottom up and aggregate individual-level behaviour to reveal population-level relationships, or hybrid models which combine the two approaches into a single model. The Australian Animal Disease Spread (AADIS) hybrid model employs a deterministic equation-based model (EBM) to model within-herd spread of FMD, and a stochastic, spatially-explicit agent-based model (ABM) to model between-herd spread and control. The EBM provides concise and computationally efficient predictions of herd prevalence and clinical signs over time. The ABM captures the complex, stochastic and heterogeneous environment in which an FMD epidemic operates. The AADIS event-driven hybrid EBM/ABM architecture is a flexible, efficient and extensible framework for modelling the spread and control of disease in livestock on a national scale. We present an overview of the AADIS hybrid approach and a description of the model
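The hybrid EBM/ABM split can be caricatured in a few lines: a closed-form logistic prevalence curve standing in for the within-herd EBM, and a binomial contact process standing in for the between-herd ABM. All parameters are invented for illustration and are not AADIS values:

```python
import numpy as np

def within_herd_prevalence(t, beta=1.5, gamma=0.25, i0=1e-3):
    """EBM stand-in: closed-form solution of the SIS-style prevalence
    ODE i' = beta*i*(1-i) - gamma*i, a logistic curve with growth rate
    r = beta - gamma and plateau K = 1 - gamma/beta."""
    r, K = beta - gamma, 1.0 - gamma / beta
    return K / (1.0 + (K / i0 - 1.0) * np.exp(-r * t))

def between_herd_step(infected, n_herds, p_contact, rng):
    """ABM stand-in for one day: each infected herd independently
    infects each susceptible herd with probability p_contact."""
    susceptible = n_herds - infected
    new = rng.binomial(susceptible, 1.0 - (1.0 - p_contact) ** infected)
    return infected + new
```

In a hybrid architecture like the one described, the deterministic curve would drive each infected herd's infectiousness and detection probability, while the stochastic layer decides which herds become infected and when controls fire.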
Susceptibility of optimal train schedules to stochastic disturbances of process times
DEFF Research Database (Denmark)
Larsen, Rune; Pranzo, Marco; D’Ariano, Andrea
2013-01-01
…study, an advanced branch and bound algorithm, on average, outperforms a First In First Out scheduling rule in both deterministic and stochastic traffic scenarios. However, the characteristics of the stochastic processes and the way a stochastic instance is handled turn out to have a serious impact… and dwell times). In fact, the objective of railway traffic management is to reduce delay propagation and to increase the disturbance robustness of train schedules at a network scale. We present a quantitative study of traffic disturbances and their effects on the schedules computed by simple and advanced…
The cardiorespiratory interaction: a nonlinear stochastic model and its synchronization properties
Bahraminasab, A.; Kenwright, D.; Stefanovska, A.; McClintock, P. V. E.
2007-06-01
We address the problem of interactions between the phases of the cardiac and respiratory oscillatory components. The coupling between these two quantities is investigated experimentally using the theory of stochastic Markovian processes. The so-called Markov analysis allows us to derive nonlinear stochastic equations for the reconstruction of the cardiorespiratory signals. The properties of these equations provide interesting new insights into the strength and direction of coupling, which enable us to divide the couplings into two parts: deterministic and stochastic. It is shown that the synchronization behaviour of the reconstructed signals is statistically identical to that of the original ones.
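The Markov analysis referred to here reconstructs drift and diffusion coefficients (the first two Kramers-Moyal coefficients) from a measured time series by conditioning increments on the current state. A generic sketch of that step, using a simulated Ornstein-Uhlenbeck process in place of cardiorespiratory data; the process parameters and bin counts are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Surrogate "data": Ornstein-Uhlenbeck process dx = -a*x dt + sqrt(2D) dW
a, D, dt, n = 2.0, 0.5, 1e-3, 200_000
x = np.empty(n)
x[0] = 0.0
kicks = rng.normal(0.0, np.sqrt(2 * D * dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - a * x[i] * dt + kicks[i]

# Kramers-Moyal estimation: bin the state, condition increments on the bin
dxs = np.diff(x)
bins = 30
edges = np.linspace(x.min(), x.max(), bins + 1)
centers = 0.5 * (edges[:-1] + edges[1:])
idx = np.digitize(x[:-1], edges[1:-1])           # bin index 0..bins-1
cs, d1s, d2s = [], [], []
for k in range(bins):
    m = idx == k
    if m.sum() > 2000:                           # keep only well-populated bins
        cs.append(centers[k])
        d1s.append(dxs[m].mean() / dt)               # drift D1(x), here ~ -a*x
        d2s.append((dxs[m] ** 2).mean() / (2 * dt))  # diffusion D2(x), here ~ D

slope = np.polyfit(cs, d1s, 1)[0]                # should recover roughly -a
d2_mean = float(np.mean(d2s))
```

The same conditional-moment estimates, applied to the cardiac and respiratory phases, are what let the authors separate the deterministic coupling from the stochastic part.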
Directory of Open Access Journals (Sweden)
Mingzhu Song
2016-01-01
Full Text Available We address the problem of globally asymptotic stability for a class of stochastic nonlinear systems with time-varying delays. By the backstepping method and Lyapunov theory, we design a linear output feedback controller recursively based on the observable linearization for a class of stochastic nonlinear systems with time-varying delays to guarantee that the closed-loop system is globally asymptotically stable in probability. In particular, we extend the deterministic nonlinear system to stochastic nonlinear systems with time-varying delays. Finally, an example and its simulations are given to illustrate the theoretical results.
International Nuclear Information System (INIS)
Dreimann, Karsten; Linz, Stefan J.
2010-01-01
Graphical abstract: Deterministic surface pattern (left) and its stochastic counterpart (right) arising in a stochastic damped Kuramoto-Sivashinsky equation that serves as a model equation for ion-beam eroded surfaces and is systematically investigated. - Abstract: Using a recently proposed field equation for the surface evolution of ion-beam eroded semiconductor target materials under normal incidence, we systematically explore the impact of additive stochastic fluctuations that are permanently present during the erosion process. Specifically, we investigate the dependence of the surface roughness, the underlying pattern forming properties and the bifurcation behavior on the strength of the fluctuations.
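A rough illustration of the kind of model studied (not the authors' exact equation, dimensionality, or parameters): a 1D damped Kuramoto-Sivashinsky equation with additive noise, ∂t h = -αh - ∂x²h - ∂x⁴h - ½(∂x h)² + η, integrated with an explicit Euler scheme on a periodic grid. The surface roughness is the spatial standard deviation of h; all numerical values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
N, dx, dt, steps = 128, 1.0, 0.01, 2000
alpha, sigma = 0.1, 0.05              # damping strength and noise amplitude
h = 0.01 * rng.standard_normal(N)     # nearly flat initial surface

def lap(u):
    """Periodic second difference (1D Laplacian)."""
    return (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u) / dx**2

for _ in range(steps):
    hx = (np.roll(h, -1) - np.roll(h, 1)) / (2.0 * dx)
    drift = -alpha * h - lap(h) - lap(lap(h)) - 0.5 * hx**2
    h += dt * drift + sigma * np.sqrt(dt) * rng.standard_normal(N)
    h -= h.mean()                     # track fluctuations about the mean height

roughness = float(h.std())
```

Sweeping `sigma` in such a scheme is the numerical experiment behind the abstract's question: how roughness and pattern formation depend on the strength of the fluctuations.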
Directory of Open Access Journals (Sweden)
Eleni Bekri
2015-11-01
Full Text Available Optimal water allocation within a river basin still remains a great modeling challenge for engineers due to various hydrosystem complexities, parameter uncertainties and their interactions. Conventional deterministic optimization approaches have given way to stochastic, fuzzy and interval-parameter programming approaches and their hybrid combinations for overcoming these difficulties. In many countries, including Mediterranean countries, water resources management is characterized by uncertain, imprecise and limited data because of the absence of permanent measuring systems, inefficient river monitoring and fragmentation of authority responsibilities. A fuzzy-boundary-interval linear programming methodology developed by Li et al. (2010) is selected and applied in the Alfeios river basin (Greece) for optimal water allocation under uncertain system conditions. This methodology combines an ordinary multi-stage stochastic programming with uncertainties expressed as fuzzy-boundary intervals. Upper- and lower-bound solution intervals for optimized water allocation targets and probabilistic water allocations and shortages are estimated under a baseline scenario and four water and agricultural policy future scenarios for an optimistic and a pessimistic attitude of the decision makers. In this work, the uncertainty of the random water inflows is incorporated through the simultaneous generation of stochastic equal-probability hydrologic scenarios at various inflow positions instead of using a scenario-tree approach in the original methodology.
Importance of vesicle release stochasticity in neuro-spike communication.
Ramezani, Hamideh; Akan, Ozgur B
2017-07-01
The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. We study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs), followed by the influx of calcium ions through these channels, we propose a stochastic model for this event, while using a deterministic model for the other sources of variability. To capture the stochasticity of the calcium influx into the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined in terms of the Normal and Logistic distributions.
The critical domain size of stochastic population models.
Reimer, Jody R; Bonsall, Michael B; Maini, Philip K
2017-02-01
Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternate approaches; one is a scaling up to the population level using the Central Limit Theorem, and the other a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
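The branching-process approximation mentioned here can be illustrated with a bare Galton-Watson process (a generic sketch, not the paper's IDE-coupled model): estimate the extinction probability by simulation and compare it with the classical fixed-point equation q = E[q^offspring], which for Poisson(m) offspring reads q = exp(m(q-1)).

```python
import numpy as np

rng = np.random.default_rng(3)
m = 1.5                        # mean (Poisson) offspring number, supercritical

def goes_extinct(max_gen=50, cap=1000):
    """One Galton-Watson realization started from a single individual."""
    n = 1
    for _ in range(max_gen):
        if n == 0:
            return True
        if n >= cap:
            return False       # large populations essentially never die out
        n = rng.poisson(m * n) # sum of n iid Poisson(m) offspring counts
    return n == 0

trials = 2000
q_mc = sum(goes_extinct() for _ in range(trials)) / trials

# Theory: extinction probability is the smallest root of q = exp(m*(q-1))
q = 0.0
for _ in range(200):
    q = np.exp(m * (q - 1.0))
```

In the spatial setting of the paper, the offspring distribution would additionally account for dispersal out of the domain, which is how the critical domain size enters the branching-process analysis.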
Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming
Energy Technology Data Exchange (ETDEWEB)
Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo
2013-05-23
This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule. Given fuel cell outages, lower expected energy bills result, with potential savings exceeding 6%.
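The core idea, scheduling storage against uncertain generation via stochastic linear programming, can be sketched as a tiny two-stage problem. This toy uses hypothetical numbers, not the DER-CAM model, and assumes SciPy is available: a battery discharge level is committed before the fuel-cell output is known, with grid purchases as recourse in each scenario.

```python
from scipy.optimize import linprog

# Scenario data (illustrative): fuel-cell output levels and their probabilities
fc_out = [0.0, 5.0, 8.0]
prob   = [0.2, 0.5, 0.3]
demand, c_batt, c_grid, batt_cap = 10.0, 1.0, 3.0, 6.0

# Variables: x (battery discharge, first stage), y_s (grid purchase per scenario)
c = [c_batt] + [p * c_grid for p in prob]      # expected-cost objective
# Coverage x + fc_s + y_s >= demand, rewritten as -x - y_s <= fc_s - demand
A_ub = [[-1, -1, 0, 0],
        [-1, 0, -1, 0],
        [-1, 0, 0, -1]]
b_ub = [f - demand for f in fc_out]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, batt_cap)] + [(0, None)] * 3, method="highs")
x_stoch, cost_stoch = res.x[0], res.fun

# Deterministic benchmark: plan against the *mean* fuel-cell output,
# then evaluate that plan under the actual scenarios.
mean_fc = sum(p * f for p, f in zip(prob, fc_out))
x_det = min(batt_cap, demand - mean_fc)        # battery is cheaper, use it first
cost_det = c_batt * x_det + c_grid * sum(
    p * max(0.0, demand - f - x_det) for p, f in zip(prob, fc_out))
```

With these numbers the stochastic program discharges slightly less than the deterministic plan but hedges the zero-output scenario, giving a lower expected bill, the same qualitative effect the abstract reports.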
Tubman, Norm; Whaley, Birgitta
The development of exponentially scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground-state and excited-state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
A deterministic alternative to the full configuration interaction quantum Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Tubman, Norm M.; Lee, Joonho; Takeshita, Tyler Y.; Head-Gordon, Martin; Whaley, K. Birgitta [University of California, Berkeley, Berkeley, California 94720 (United States)
2016-07-28
Development of exponentially scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, is a useful algorithm that allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, along with a stochastic projected wave function, to find the important parts of Hilbert space. However, the stochastic representation of the wave function is not required to search Hilbert space efficiently, and here we describe a highly efficient deterministic method that can achieve chemical accuracy for a wide range of systems, including the difficult Cr2 molecule. We demonstrate for systems like Cr2 that such calculations can be performed in just a few CPU hours, which makes it one of the most efficient and accurate methods that can attain chemical accuracy for strongly correlated systems. In addition, our method also allows efficient calculation of excited-state energies, which we illustrate with benchmark results for the excited states of C2.
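The deterministic search these two abstracts describe can be caricatured on a random matrix. The sketch below is a toy in the spirit of selected CI, not quantum chemistry: a diagonally dominant symmetric matrix stands in for the CI Hamiltonian, the subspace is grown along the largest first-order couplings |H_ij c_j| (the heat-bath-style ranking), and each round rediagonalizes in the enlarged subspace. Matrix size, coupling scale, and growth schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
# Toy "Hamiltonian": diagonals play the role of determinant energies,
# small off-diagonals the role of couplings between determinants.
H = rng.normal(0.0, 0.02, (n, n))
H = 0.5 * (H + H.T)
H[np.diag_indices(n)] = np.sort(rng.uniform(0.0, 10.0, n))

def selected_ci(H, grow=10, rounds=6):
    """Deterministic selected-CI sketch: repeatedly diagonalize in the current
    subspace and add the states with the largest couplings |H_ij c_j|."""
    space = [0]                              # start from the lowest-diagonal state
    for _ in range(rounds):
        w, v = np.linalg.eigh(H[np.ix_(space, space)])
        c = v[:, 0]                          # ground-state coefficients in subspace
        score = np.abs(H[:, space] @ c)      # first-order importance of each state
        score[space] = 0.0                   # never re-add current members
        for i in np.argsort(score)[::-1][:grow]:
            space.append(int(i))
    return float(np.linalg.eigh(H[np.ix_(space, space)])[0][0]), len(space)

e_sel, dim = selected_ci(H)
e_full = float(np.linalg.eigh(H)[0][0])      # exact reference, feasible at n=200
```

The subspace energy is variational (never below the exact ground energy) yet approaches it with a space a few percent of the full dimension, which is the efficiency argument made for ASCI/heat-bath CI.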
Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara
2016-01-01
Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother's old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother's old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington's genetic assimilation
Directory of Open Access Journals (Sweden)
Lin Chao
2016-01-01
Full Text Available Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother's old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother's old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington
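The in silico damage-partitioning experiment described above can be mimicked with a deliberately crude toy (hypothetical numbers and viability rule, not the authors' model): each division splits the accumulated damage between the old and new daughter, either deterministically or with a random fraction, and a daughter survives only if its damage stays below a threshold.

```python
import random

def surviving_cells(f_old=0.75, noisy=False, gens=30, pop0=200, cap=1000,
                    delta=1.0, threshold=0.7, seed=11):
    """At each division the accumulated damage d + delta is split between the
    old daughter (fraction f_old, or a uniform random fraction if noisy) and
    the new daughter. A daughter is viable only if its damage < threshold."""
    rng = random.Random(seed)
    pop = [0.0] * pop0
    for _ in range(gens):
        nxt = []
        for d in pop:
            f = rng.random() if noisy else f_old
            for share in (f, 1.0 - f):
                dd = (d + delta) * share
                if dd < threshold:
                    nxt.append(dd)
        if len(nxt) > cap:
            nxt = rng.sample(nxt, cap)   # resource ceiling: random culling
        pop = nxt
        if not pop:
            break
    return len(pop)
```

With these illustrative numbers the symmetric deterministic rule drives every lineage over the damage threshold and the population dies out, while deterministic asymmetry maintains a rejuvenated low-damage lineage and symmetric-but-stochastic splitting also persists, echoing the abstract's claim that both asymmetry and stochasticity can protect against damage.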
International Nuclear Information System (INIS)
Billaud, Y; Kaiss, A; Drissi, M; Pizzo, Y; Porterie, B; Zekri, N; Acem, Z; Collin, A; Boulet, P; Santoni, P-A; Bosseur, F
2012-01-01
This paper presents the latest developments and validation results of a hybrid model which combines a broad-scale stochastic small-world network model with a macroscopic deterministic approach, to simulate the effects of large fires burning in heterogeneous landscapes. In the extended version of the model, vegetation is depicted as an amorphous network of combustible cells, and both radiation and convection from the flaming zone are considered in the preheating process of unburned cells. Examples are given to illustrate small-world effects and fire behavior near the percolation threshold. The model is applied to a Mediterranean fire that occurred in Corsica in 2009 showing a good agreement in terms of rate of spread, and area and shape of the burn. A study, based on a fractional factorial plan, is conducted to evaluate the influence of variations of model parameters on fire propagation.
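The small-world ingredient of such a fire model can be demonstrated with a minimal lattice sketch (a generic percolation toy, not the paper's amorphous network or its radiation/convection submodels): nearest-neighbour spread is bond percolation, and rare long-range "spotting" jumps act as small-world shortcuts. All probabilities below are illustrative assumptions.

```python
import random

def burnt_fraction(p_spread, n=40, p_spot=0.01, seed=4):
    """Fire on an n x n lattice: each burning cell ignites each of its four
    neighbours independently with probability p_spread (bond percolation),
    plus a rare long-range 'spotting' jump to a random cell."""
    rng = random.Random(seed)
    burning = {(n // 2, n // 2)}
    burnt = set(burning)
    while burning:
        nxt = set()
        for i, j in burning:
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                cell = (i + di, j + dj)
                if (0 <= cell[0] < n and 0 <= cell[1] < n
                        and cell not in burnt and rng.random() < p_spread):
                    nxt.add(cell)
            if rng.random() < p_spot:            # ember lofted to a random site
                cell = (rng.randrange(n), rng.randrange(n))
                if cell not in burnt:
                    nxt.add(cell)
        burnt |= nxt
        burning = nxt
    return len(burnt) / (n * n)
```

The 2D bond-percolation threshold is 1/2: well below it the fire stays local regardless of shortcuts, well above it the fire sweeps most of the grid, which is the near-threshold behaviour the abstract highlights.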
Zachar, István; Fedor, Anna; Szathmáry, Eörs
2011-01-01
The simulation of complex biochemical systems, consisting of intertwined subsystems, is a challenging task in computational biology. The complex biochemical organization of the cell is effectively modeled by the minimal cell model called chemoton, proposed by Gánti. Since the chemoton is a system consisting of a large but fixed number of interacting molecular species, it can effectively be implemented in a process algebra-based language such as the BlenX programming language. The stochastic model behaves comparably to previous continuous deterministic models of the chemoton. In addition to the well-known chemoton, we implemented an extended version with two competing template cycles. The new insight from our study is that the coupling of reactions in the chemoton ensures that these templates coexist, providing an alternative solution to Eigen's paradox. Our technical innovation involves the introduction of a two-state switch to control cell growth and division, thus providing an example for hybrid methods in BlenX. Further developments to the BlenX language are suggested in the Appendix. PMID:21818258
Directory of Open Access Journals (Sweden)
István Zachar
Full Text Available The simulation of complex biochemical systems, consisting of intertwined subsystems, is a challenging task in computational biology. The complex biochemical organization of the cell is effectively modeled by the minimal cell model called chemoton, proposed by Gánti. Since the chemoton is a system consisting of a large but fixed number of interacting molecular species, it can effectively be implemented in a process algebra-based language such as the BlenX programming language. The stochastic model behaves comparably to previous continuous deterministic models of the chemoton. In addition to the well-known chemoton, we implemented an extended version with two competing template cycles. The new insight from our study is that the coupling of reactions in the chemoton ensures that these templates coexist, providing an alternative solution to Eigen's paradox. Our technical innovation involves the introduction of a two-state switch to control cell growth and division, thus providing an example for hybrid methods in BlenX. Further developments to the BlenX language are suggested in the Appendix.
Modeling stochasticity and robustness in gene regulatory networks.
Garg, Abhishek; Mohanram, Kartik; Di Cara, Alessandro; De Micheli, Giovanni; Xenarios, Ioannis
2009-06-15
Understanding gene regulation in biological processes and modeling the robustness of underlying regulatory networks is an important problem that is currently being addressed by computational systems biologists. Lately, there has been a renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity in nodes (SIN) model leads to over-representation of noise in GRNs and hence a lack of correspondence with biological observations. In this article, we introduce the stochasticity in functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation behind the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. Algorithms are made available under our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
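The SIN construction the authors criticize is easy to reproduce, and doing so makes the over-representation argument concrete: with a per-gene flip probability p, the chance that *some* gene in a state transition is perturbed is 1-(1-p)^n, so state-level noise grows with network size. The sketch below implements only the SIN model on a hypothetical toy network (the SIF model requires the paper's function-level definition and is not reproduced here).

```python
import random

def sin_step(state, funcs, p, rng):
    """Synchronous Boolean update followed by SIN noise: every gene's new
    state is flipped independently with probability p."""
    updated = [f(state) for f in funcs]
    noisy = [(not v) if rng.random() < p else v for v in updated]
    return noisy, noisy != updated

# Toy 10-gene network (illustrative): each gene copies its left neighbour.
n, p, steps = 10, 0.01, 20000
funcs = [lambda s, j=j: s[(j - 1) % n] for j in range(n)]

rng = random.Random(2)
state = [rng.random() < 0.5 for _ in range(n)]
perturbed_steps = 0
for _ in range(steps):
    state, hit = sin_step(state, funcs, p, rng)
    perturbed_steps += hit

rate = perturbed_steps / steps
expected = 1.0 - (1.0 - p) ** n   # chance at least one gene is flipped per step
```

Even a 1% per-gene flip probability perturbs nearly 10% of all transitions in a 10-gene network, which is the scaling that motivates moving noise from nodes to functions.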
Modeling stochastic frontier based on vine copulas
Constantino, Michel; Candido, Osvaldo; Tabak, Benjamin M.; da Costa, Reginaldo Brito
2017-11-01
This article models a production function and analyzes the technical efficiency of listed companies in the United States, Germany and England between 2005 and 2012 based on the vine copula approach. Traditional estimates of the stochastic frontier assume that data is multivariate normally distributed and there is no source of asymmetry. The proposed method based on vine copulas allows us to explore different types of asymmetry and multivariate distribution. Using data on product, capital and labor, we measure the relative efficiency of the vine production function and estimate the coefficient used in the stochastic frontier literature for comparison purposes. This production vine copula predicts the value added by firms with given capital and labor in a probabilistic way. It thereby stands in sharp contrast to the production function, where the output of firms is completely deterministic. The results show that, on average, S&P500 companies are more efficient than companies listed in England and Germany, which presented similar average efficiency coefficients. For comparative purposes, the traditional stochastic frontier was estimated and the results showed discrepancies between the coefficients obtained by the two methods, traditional and frontier-vine, opening new paths for non-linear research.
Some considerations on stochastic neutron populations (u)
International Nuclear Information System (INIS)
Souto, Francisco J.; Prinja, Anil K.
2010-01-01
The neutron population in a multiplying body containing a weak random source may depart considerably from its average or expected value. The resulting behavior of the system is then unpredictable and a fully stochastic description of the neutron population becomes necessary. Stochastic considerations are especially important when dealing with pulsed reactors or in the case of criticality excursions in the presence of a weak source. Using the theory of discrete-state continuous-time Markov processes, and subject to some physical approximations, Bell (1) obtained approximate solutions for the neutron number probability distributions (pdf), with and without an intrinsic random neutron source, that were valid at late times and/or large neutron populations. In recent work (4), we obtained exact solutions for Bell's model problem, and in this paper we use these exact probability distributions to: (1) assess the accuracy of Bell's asymptotic solutions and show how the latter follow from the exact solutions, (2) rigorously examine the probability of obtaining a divergent chain reaction, and (3) demonstrate the existence of an abrupt transition from a stochastic to a deterministic phase with increasing source strength.
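The stochastic-to-deterministic transition with increasing source strength has a simple qualitative analogue (a generic branching toy, not Bell's model or the authors' exact solutions): in a slightly supercritical discrete-generation system, the relative spread of the neutron population shrinks as the external source strengthens, because many independent fission chains are averaged. All parameters are illustrative assumptions.

```python
import numpy as np

def end_populations(source, trials=400, steps=40, k=1.05, seed=9):
    """Toy multiplying system: each neutron yields 2 offspring with probability
    k/2 (else it is absorbed), so the mean multiplication per generation is k;
    an external source injects Poisson(source) neutrons every generation."""
    rng = np.random.default_rng(seed)
    n = np.zeros(trials, dtype=np.int64)
    for _ in range(steps):
        n = 2 * rng.binomial(n, k / 2) + rng.poisson(source, trials)
    return n

weak, strong = end_populations(1.0), end_populations(100.0)
cv_weak = weak.std() / weak.mean()       # coefficient of variation, weak source
cv_strong = strong.std() / strong.mean() # coefficient of variation, strong source
```

With a weak source a few early chains dominate and the population scatters widely around its mean; with a strong source the population hugs its deterministic expectation, the behaviour behind the abrupt phase transition examined in the paper.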