The interpolation method of stochastic functions and the stochastic variational principle
International Nuclear Information System (INIS)
Liu Xianbin; Chen Qiu
1993-01-01
Uncertainties have been attracting increasing attention in modern engineering structural design. Viewed on an appropriate scale, the inherent physical attributes (material properties) of many structural systems always exhibit some pattern of random variation in space and time; generally the random variation shows a small parameter fluctuation. For a linear mechanical system, the random variation is modeled as a random variation of a linear partial differential operator and, in the stochastic finite element method, as a random variation of the stiffness matrix. Besides the stochasticity of the structural physical properties, the influence of random loads, which always represent themselves as random boundary conditions, brings about much more complexity in structural analysis. The stochastic finite element method, or the probabilistic finite element method, is now used to study structural systems with random physical parameters, whether or not the loads are random. Differing from general finite element theory, the main difficulty the stochastic finite element method faces is the inversion of stochastic operators and stochastic matrices, since the inverse operators and inverse matrices are statistically correlated to the random parameters and random loads. So far, many efforts have been made to obtain reasonable approximate expressions for the inverse operators and inverse matrices, such as the Perturbation Method, the Neumann Expansion Method, the Galerkin Method (in appropriate Hilbert spaces defined for random functions), and the Orthogonal Expansion Method. Among these methods, the Perturbation Method appears to be the most widely applicable. The advantage of these methods is that fairly accurate response statistics can be obtained given only finite information about the input. However, the second-order statistics obtained by use of the Perturbation Method and the Neumann Expansion Method are not always appropriate, because the relevant second
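The Neumann expansion mentioned above can be sketched in a few lines: for a stiffness matrix K0 + ΔK with a small random fluctuation ΔK, the inverse is expanded as a geometric series in K0⁻¹ΔK. The sketch below uses a hypothetical 4-DOF system with synthetic matrices, not an actual stochastic finite element discretization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deterministic mean stiffness K0 and load f (hypothetical 4-DOF example).
n = 4
A = rng.standard_normal((n, n))
K0 = A @ A.T + n * np.eye(n)      # symmetric positive definite
f = rng.standard_normal(n)

# Small random fluctuation dK (the "small parameter" assumption).
B = rng.standard_normal((n, n))
dK = 0.05 * (B + B.T)

# Neumann expansion: (K0 + dK)^{-1} f = sum_k (-K0^{-1} dK)^k K0^{-1} f
K0_inv = np.linalg.inv(K0)
u = K0_inv @ f
term = u.copy()
for _ in range(10):
    term = -K0_inv @ (dK @ term)   # next term of the series
    u = u + term

u_exact = np.linalg.solve(K0 + dK, f)
print(np.max(np.abs(u - u_exact)))  # small if the series converges
```

In a stochastic setting, dK would be sampled repeatedly and the response statistics of u accumulated over the samples.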
Microscopic description of nuclear few-body systems with the stochastic variational method
International Nuclear Information System (INIS)
Suzuki, Yasuyuki
2000-01-01
A simple gambling procedure called the stochastic variational method can be applied, together with appropriate variational trial functions, to solve a few-body system where the correlation between the constituents plays an important role in determining its structure. The usefulness of the method is tested by comparison with other accurate solutions for Coulombic systems. Examples of application shown here include few-nucleon systems interacting with realistic forces and few-cluster systems with the Pauli principle taken into account properly. These examples confirm the power of the stochastic variational method. There still remain many problems in extending the method to systems consisting of more particles. (author)
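As a toy illustration of the "gambling" procedure, the sketch below applies a stochastic variational search to the 1D harmonic oscillator (ħ = m = ω = 1) with Gaussian trial functions exp(-a x²), whose matrix elements are analytic; random widths are admitted to the basis only if they lower the variational ground-state energy. The model problem and all parameter ranges are illustrative assumptions, not the few-nucleon setting of the paper.

```python
import numpy as np
from scipy.linalg import eigh

def ground_energy(widths):
    """Lowest generalized eigenvalue in a Gaussian basis exp(-a x^2)."""
    a = np.asarray(widths, dtype=float)
    s = a[:, None] + a[None, :]
    S = np.sqrt(np.pi / s)                                 # overlap matrix
    T = a[:, None] * a[None, :] * np.sqrt(np.pi) / s**1.5  # kinetic energy
    V = np.sqrt(np.pi) / (4.0 * s**1.5)                    # potential x^2 / 2
    if np.linalg.cond(S) > 1e10:                           # guard against ill-conditioning
        return np.inf
    return eigh(T + V, S, eigvals_only=True)[0]

rng = np.random.default_rng(1)
basis = [1.0]
best = ground_energy(basis)
for _ in range(200):                       # the "gambling" step: try random widths
    trial = rng.uniform(0.05, 5.0)
    cand = ground_energy(basis + [trial])
    if cand < best:                        # admit only if the energy drops
        basis.append(trial)
        best = cand

print(best)   # decreases toward the exact ground-state energy 0.5
```

The same admit-if-it-improves loop is what scales to correlated Gaussians in genuine few-body calculations.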
Application of Stochastic variational method with correlated Ground States to coulombic systems
Energy Technology Data Exchange (ETDEWEB)
Usukura, Junko; Suzuki, Yasuyuki [Niigata Univ. (Japan); Varga, K.
1998-07-01
The positronium molecule, Ps{sub 2}, has not been found experimentally yet, and it has been believed theoretically that Ps{sub 2} has only one bound state, with L = 0. Using the stochastic variational method, we predicted the existence of a new bound state of Ps{sub 2}: an excited state with L = 1 which arises from the Pauli principle. There are two decay modes of Ps{sub 2}(P); one is pair annihilation and the other is an electric dipole (E1) transition to the ground state. While it is difficult to distinguish the {gamma}-ray caused by annihilation of Ps{sub 2} from that of Ps, since both of them have the same energy, the energy (4.94 eV) of the photon emitted in the E1 transition is specific enough to distinguish it from other spectra. The excited state is therefore one of the clues to observing Ps{sub 2}. (author)
Stochastic optimization methods
Marti, Kurt
2005-01-01
Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.
Cruzeiro, Ana; Holm, Darryl
2017-01-01
Collecting together contributed lectures and mini-courses, this book details the research presented in a special semester titled “Geometric mechanics – variational and stochastic methods” run in the first half of 2015 at the Centre Interfacultaire Bernoulli (CIB) of the Ecole Polytechnique Fédérale de Lausanne. The aim of the semester was to develop a common language needed to handle the wide variety of problems and phenomena occurring in stochastic geometric mechanics. It gathered mathematicians and scientists from several different areas of mathematics (from analysis, probability, numerical analysis and statistics, to algebra, geometry, topology, representation theory, and dynamical systems theory) and also areas of mathematical physics, control theory, robotics, and the life sciences, with the aim of developing the new research area in a concentrated joint effort, both from the theoretical and applied points of view. The lectures were given by leading specialists in different areas of mathematics a...
Application of the stochastic variational method to the calculation of 3α- and 4α-systems
International Nuclear Information System (INIS)
Kukulin, V.I.; Krasnopol'skii, V.M.
1975-01-01
The results of calculations of the properties of 3α- and 4α-systems carried out within the framework of the recently suggested stochastic variational method are presented. As the α-α potentials, two different types of potentials are used: the Ali-Bodmer repulsive-core potential and the deep attractive α-α potential with forbidden states. In the latter case the pseudopotential approach we have earlier suggested is used. The energies of the levels, ⟨r²⟩, and the form factors of the ground state F(q²) are calculated
Stochastic quantisation: theme and variation
International Nuclear Information System (INIS)
Klauder, J.R.; Kyoto Univ.
1987-01-01
The paper on stochastic quantisation is a contribution to the book commemorating the sixtieth birthday of E.S. Fradkin. Stochastic quantisation reformulates Euclidean quantum field theory in the language of Langevin equations. The generalised free field is discussed from the viewpoint of stochastic quantisation. An artificial family of highly singular model theories wherein the space-time derivatives are dropped altogether is also examined. Finally a modified form of stochastic quantisation is considered. (U.K.)
Schillinger, Dominik
2013-07-01
The method of separation can be used as a non-parametric estimation technique, especially suitable for evolutionary spectral density functions of uniformly modulated and strongly narrow-band stochastic processes. The paper at hand provides a consistent derivation of method of separation based spectrum estimation for the general multi-variate and multi-dimensional case. The validity of the method is demonstrated by benchmark tests with uniformly modulated spectra, for which convergence to the analytical solution is demonstrated. The key advantage of the method of separation is the minimization of spectral dispersion due to optimum time- or space-frequency localization. This is illustrated by the calibration of multi-dimensional and multi-variate geometric imperfection models from strongly narrow-band measurements in I-beams and cylindrical shells. Finally, the application of the method of separation based estimates for the stochastic buckling analysis of the example structures is briefly discussed. © 2013 Elsevier Ltd.
Kallianpur, Gopinath; Hida, Takeyuki
1987-01-01
The use of probabilistic methods in the biological sciences has been so well established by now that mathematical biology is regarded by many as a distinct discipline with its own repertoire of techniques. The purpose of the Workshop on stochastic methods in biology held at Nagoya University during the week of July 8-12, 1985, was to enable biologists and probabilists from Japan and the U.S. to discuss the latest developments in their respective fields and to exchange ideas on the applicability of the more recent developments in stochastic process theory to problems in biology. Eighteen papers were presented at the Workshop and have been grouped under the following headings: I. Population genetics (five papers) II. Measure valued diffusion processes related to population genetics (three papers) III. Neurophysiology (two papers) IV. Fluctuation in living cells (two papers) V. Mathematical methods related to other problems in biology, epidemiology, population dynamics, etc. (six papers) An important f...
STOCHASTIC METHODS IN RISK ANALYSIS
Directory of Open Access Journals (Sweden)
Vladimíra OSADSKÁ
2017-06-01
In this paper, we review basic stochastic methods which can be used to extend state-of-the-art deterministic analytical methods for risk analysis. We conclude that the standard deterministic analytical methods depend highly on the practical experience and knowledge of the evaluator, and therefore stochastic methods should be introduced. New risk analysis methods should consider the uncertainties in input values. We show how large the impact on the results of the analysis can be by solving a practical example of FMECA with uncertainties modelled using Monte Carlo sampling.
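A minimal sketch of the kind of FMECA-with-uncertainty analysis described above, assuming hypothetical triangular distributions for the severity, occurrence, and detection scores of a single failure mode (the specific numbers are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Instead of fixed severity/occurrence/detection scores, model each as an
# uncertain quantity with a triangular distribution (min, mode, max).
severity   = rng.triangular(6, 7, 9, N)
occurrence = rng.triangular(3, 4, 6, N)
detection  = rng.triangular(2, 3, 5, N)

rpn = severity * occurrence * detection   # risk priority number samples

print(f"mean RPN  = {rpn.mean():.1f}")
print(f"95% range = [{np.percentile(rpn, 2.5):.0f}, {np.percentile(rpn, 97.5):.0f}]")
print(f"P(RPN > 125) = {(rpn > 125).mean():.2f}")
```

The deterministic analysis would report a single RPN; the Monte Carlo version reports a distribution, so a threshold exceedance probability can be quoted instead of a point value.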
Stochastic methods in quantum mechanics
Gudder, Stanley P
2005-01-01
Practical developments in such fields as optical coherence, communication engineering, and laser technology have developed from the applications of stochastic methods. This introductory survey offers a broad view of some of the most useful stochastic methods and techniques in quantum physics, functional analysis, probability theory, communications, and electrical engineering. Starting with a history of quantum mechanics, it examines both the quantum logic approach and the operational approach, with explorations of random fields and quantum field theory.The text assumes a basic knowledge of fun
Stochastic Generalized Method of Moments
Yin, Guosheng; Ma, Yanyuan; Liang, Faming; Yuan, Ying
2011-01-01
The generalized method of moments (GMM) is a very popular estimation and inference procedure based on moment conditions. When likelihood-based methods are difficult to implement, one can often derive various moment conditions and construct the GMM objective function. However, minimization of the objective function in the GMM may be challenging, especially over a large parameter space. Due to the special structure of the GMM, we propose a new sampling-based algorithm, the stochastic GMM sampler, which replaces the multivariate minimization problem by a series of conditional sampling procedures. We develop the theoretical properties of the proposed iterative Monte Carlo method, and demonstrate its superior performance over other GMM estimation procedures in simulation studies. As an illustration, we apply the stochastic GMM sampler to a Medfly life longevity study. Supplemental materials for the article are available online. © 2011 American Statistical Association.
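The idea of replacing GMM minimization by conditional sampling can be sketched on a toy problem: estimating the mean and variance of normal data from two moment conditions, with a Metropolis-within-Gibbs sampler targeting a density proportional to exp(-n Q(θ)/2). The target form, proposal scale, and iteration counts here are illustrative assumptions, not the paper's exact sampler.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(2.0, 1.5, size=500)    # synthetic data with unknown mean/variance
n = len(x)

def Q(theta):
    """GMM objective with identity weighting: Q = g_bar' g_bar."""
    mu, sig2 = theta
    if sig2 <= 0:
        return np.inf
    # moment conditions: E[x - mu] = 0 and E[(x - mu)^2 - sig2] = 0
    g = np.array([np.mean(x - mu), np.mean((x - mu) ** 2 - sig2)])
    return g @ g

# Metropolis-within-Gibbs targeting exp(-n Q / 2): sample one coordinate
# of theta at a time instead of minimizing Q over both jointly.
theta = np.array([0.0, 1.0])
draws = []
for _ in range(4000):
    for j in range(2):
        prop = theta.copy()
        prop[j] += rng.normal(0.0, 0.1)
        if np.log(rng.uniform()) < -0.5 * n * (Q(prop) - Q(theta)):
            theta = prop
    draws.append(theta.copy())

est = np.array(draws[1000:]).mean(axis=0)
print(est)   # near (sample mean, sample variance) of x
```

The posterior-like draws concentrate where the moment conditions are nearly satisfied, so their average plays the role of the GMM point estimate.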
Stochastic variational approach to minimum uncertainty states
Energy Technology Data Exchange (ETDEWEB)
Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)
1995-05-21
We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)
STOCHASTIC GRADIENT METHODS FOR UNCONSTRAINED OPTIMIZATION
Directory of Open Access Journals (Sweden)
Nataša Krejić
2014-12-01
This paper presents an overview of gradient-based methods for minimization of noisy functions. It is assumed that the objective function is either given with error terms of a stochastic nature or given as a mathematical expectation. Such problems arise in the context of simulation-based optimization. The focus of this presentation is on the gradient-based Stochastic Approximation and Sample Average Approximation methods. The concept of stochastic gradient approximation of the true gradient can be successfully extended to deterministic problems. Methods of this kind are presented for data fitting and machine learning problems.
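A minimal stochastic approximation sketch in the Robbins-Monro spirit described above: fitting a line to noisy data with single-sample gradients and a diminishing step size. The data model and step schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data model: y = 3*x + 1 + noise; fit w = (slope, intercept)
# by minimizing the expected squared error with stochastic gradients.
X = rng.uniform(-1, 1, size=2000)
Y = 3.0 * X + 1.0 + rng.normal(0, 0.2, size=2000)

w = np.zeros(2)
for k, (xi, yi) in enumerate(zip(X, Y), start=1):
    pred = w[0] * xi + w[1]
    grad = 2 * (pred - yi) * np.array([xi, 1.0])   # single-sample gradient
    w -= (0.5 / k**0.7) * grad                     # diminishing step (Robbins-Monro)

print(w)   # approaches (3, 1)
```

A Sample Average Approximation variant would instead fix the 2000 samples, form the empirical mean objective, and minimize it with a deterministic solver.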
Statistical Methods for Stochastic Differential Equations
Kessler, Mathieu; Sorensen, Michael
2012-01-01
The seventh volume in the SemStat series, Statistical Methods for Stochastic Differential Equations presents current research trends and recent developments in statistical methods for stochastic differential equations. Written to be accessible to both new students and seasoned researchers, each self-contained chapter starts with introductions to the topic at hand and builds gradually towards discussing recent research. The book covers Wiener-driven equations as well as stochastic differential equations with jumps, including continuous-time ARMA processes and COGARCH processes. It presents a sp
Fastest Rates for Stochastic Mirror Descent Methods
Hanzely, Filip
2018-03-20
Relative smoothness - a notion introduced by Birnbaum et al. (2011) and rediscovered by Bauschke et al. (2016) and Lu et al. (2016) - generalizes the standard notion of smoothness typically used in the analysis of gradient-type methods. In this work we take ideas from the well-studied field of stochastic convex optimization and use them to obtain faster algorithms for minimizing relatively smooth functions. We propose and analyze two new algorithms: Relative Randomized Coordinate Descent (relRCD) and Relative Stochastic Gradient Descent (relSGD), both generalizing famous algorithms in the standard smooth setting. The methods we propose can in fact be seen as particular instances of stochastic mirror descent algorithms. One of them, relRCD, corresponds to the first stochastic variant of the mirror descent algorithm with a linear convergence rate.
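A classical instance of the stochastic mirror descent family the paper works in is entropic mirror descent on the probability simplex, i.e., multiplicative updates; the sketch below minimizes an expected linear cost from noisy gradient observations. This illustrates the general algorithm class, not relRCD or relSGD themselves, and all problem data is made up.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimize E[<c_hat, x>] over the simplex, where c_hat is a noisy
# observation of a fixed cost vector c (hypothetical values).
c = np.array([0.9, 0.3, 0.7, 0.5])
x = np.full(4, 0.25)                          # uniform start, simplex interior
avg = np.zeros(4)

T = 2000
for t in range(1, T + 1):
    c_hat = c + rng.normal(0, 0.5, size=4)    # stochastic gradient of <c, x>
    eta = 0.5 / np.sqrt(t)
    x = x * np.exp(-eta * c_hat)              # mirror (multiplicative) step
    x /= x.sum()                              # normalization = Bregman projection
    avg += x
avg /= T

print(avg)   # mass concentrates on coordinate 1, the smallest cost
```

The multiplicative step is exactly gradient descent in the dual (log) coordinates induced by the negative-entropy mirror map, which is what makes the geometry of the simplex, rather than the Euclidean ball, drive the rate.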
Stochastic development regression using method of moments
DEFF Research Database (Denmark)
Kühnel, Line; Sommer, Stefan Horst
2017-01-01
This paper considers the estimation problem arising when inferring parameters in the stochastic development regression model for manifold-valued non-linear data. Stochastic development regression captures the relation between manifold-valued response and Euclidean covariate variables using the stochastic development construction. It is thereby able to incorporate several covariate variables and random effects. The model is intrinsically defined using the connection of the manifold, and the use of stochastic development avoids linearizing the geometry. We propose to infer parameters using the Method of Moments procedure that matches known constraints on moments of the observations conditional on the latent variables. The performance of the model is investigated in a simulation example using data on finite-dimensional landmark manifolds.
Doubly stochastic radial basis function methods
Yang, Fenglian; Yan, Liang; Ling, Leevan
2018-06-01
We propose a doubly stochastic radial basis function (DSRBF) method for function recovery. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distributions are determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our methods. The overhead cost for setting up the proposed DSRBF method is O(n²) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method not only outperforms the constant shape parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
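The deterministic core of shape-parameter selection by LOOCV can be sketched with one standard rule, Rippa's identity e_i = c_i / (A⁻¹)_ii for the leave-one-out residuals; the grid search over a single fixed shape parameter below is a simplification of the paper's stochastic treatment, and the test function is illustrative.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 15)
ys = np.sin(2 * np.pi * xs)

def loocv_cost(eps):
    """Norm of leave-one-out residuals via Rippa's identity e_i = c_i / (A^-1)_ii."""
    A = np.exp(-(eps * (xs[:, None] - xs[None, :])) ** 2)   # Gaussian RBF matrix
    if np.linalg.cond(A) > 1e12:          # skip numerically unstable candidates
        return np.inf
    Ainv = np.linalg.inv(A)
    coef = Ainv @ ys
    return np.linalg.norm(coef / np.diag(Ainv))

eps_grid = np.linspace(0.5, 20.0, 80)
best_eps = min(eps_grid, key=loocv_cost)

# Build the interpolant with the selected shape parameter and check it.
A = np.exp(-(best_eps * (xs[:, None] - xs[None, :])) ** 2)
coef = np.linalg.solve(A, ys)
xt = np.linspace(0.0, 1.0, 200)
Phi = np.exp(-(best_eps * (xt[:, None] - xs[None, :])) ** 2)
err = np.max(np.abs(Phi @ coef - np.sin(2 * np.pi * xt)))
print(best_eps, err)
```

The DSRBF idea, roughly, is to draw the shape parameters from a distribution informed by this LOOCV estimate rather than committing to the single grid minimizer.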
Stochastic Variational Learning in Recurrent Spiking Networks
Directory of Open Access Journals (Sweden)
Danilo eJimenez Rezende
2014-04-01
The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step towards understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.
Compressible cavitation with stochastic field method
Class, Andreas; Dumond, Julien
2012-11-01
Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrange particles or prescribed-pdf assumptions, including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method, which solves pdf transport using Euler fields, has been proposed; it eliminates the necessity of mixing Euler and Lagrange techniques or prescribing pdf assumptions. In the present work, part of the PhD project "Design and analysis of a Passive Outflow Reducer relying on cavitation", a first application of the stochastic field method to multi-phase flow, and in particular to cavitating flow, is presented. The application considered is a nozzle subjected to high-velocity flow, so that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations. The method is compatible with finite-volume codes, where all existing physical models available for Lagrange techniques, presumed-pdf or binning methods can easily be extended to the stochastic field formulation.
Loizou, Nicolas
2017-12-27
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in the L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., the stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesàro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
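The baseline method analyzed above, stochastic gradient descent with heavy ball momentum, can be sketched on a consistent linear least-squares problem (a setting of the kind in which the listed methods coincide); the step size and momentum values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Consistent linear system: b lies in the range of A, so an exact
# solution exists and every row's gradient vanishes at it.
A = rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
b = A @ x_true

x = np.zeros(10)
v = np.zeros(10)                 # momentum buffer
gamma, beta = 0.003, 0.9         # step size and heavy ball momentum parameter

for _ in range(5000):
    i = rng.integers(200)                    # one random row: stochastic gradient
    g = (A[i] @ x - b[i]) * A[i]
    v = beta * v - gamma * g                 # heavy ball update
    x = x + v

print(np.linalg.norm(A @ x - b))             # residual driven toward zero
```

Because the system is consistent, the stochastic gradient noise vanishes at the solution, which is what makes a linear rate (rather than convergence to a noise floor) possible in this regime.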
Ranking Decision Making Units with Stochastic Data by Using Coefficient of Variation
Lotfi, F.; Nematollahi, N.; Behzadi, M.H.; Mirbolouki, M.
2010-01-01
Data Envelopment Analysis (DEA) is a non-parametric technique based on mathematical programming for evaluating the efficiency of a set of Decision Making Units (DMUs). In applications, managers encounter stochastic data, and the necessity of having a method that is able to evaluate efficiency and rank efficient units has been under consideration. In this paper, considering the coefficient of variation among efficient DMUs, two ranking methods have been proposed. ...
Computational Methods in Stochastic Dynamics Volume 2
Stefanou, George; Papadopoulos, Vissarion
2013-01-01
The considerable influence of inherent uncertainties on structural behavior has led the engineering community to recognize the importance of a stochastic approach to structural problems. Issues related to uncertainty quantification and its influence on the reliability of the computational models are continuously gaining in significance. In particular, the problems of dynamic response analysis and reliability assessment of structures with uncertain system and excitation parameters have been the subject of continuous research over the last two decades as a result of the increasing availability of powerful computing resources and technology. This book is a follow-up to a previous book on the same subject (ISBN 978-90-481-9986-0) and focuses on advanced computational methods and software tools which can highly assist in tackling complex problems in stochastic dynamic/seismic analysis and design of structures. The selected chapters are authored by some of the most active scholars in their respective areas and...
Joseph, Bindu; Corwin, Jason A.; Kliebenstein, Daniel J.
2015-01-01
Recent studies are starting to show that genetic control over stochastic variation is a key evolutionary solution of single-celled organisms in the face of unpredictable environments. This has been expanded to show that genetic variation can alter stochastic variation in transcriptional processes within multi-cellular eukaryotes. However, little is known about how genetic diversity can control stochastic variation within more non-cell-autonomous phenotypes. Using an Arabidopsis reciprocal RIL population, we showed that there is significant genetic diversity influencing stochastic variation in the plant metabolome, defense chemistry, and growth. This genetic diversity included loci specific for the stochastic variation of each phenotypic class that did not affect the other phenotypic classes or the average phenotype. This suggests that the organism's networks are established so that noise can exist at one phenotypic level, such as metabolism, without permeating up or down to other phenotypic levels. Further, the genomic variation within the plastid and mitochondria also had significant effects on the stochastic variation of all phenotypic classes. The genetic influence over stochastic variation within the metabolome was highly metabolite specific, with neighboring metabolites in the same metabolic pathway frequently showing different levels of noise. As expected from bet-hedging theory, there was more genetic diversity and a wider range of stochastic variation for defense chemistry than found for primary metabolism. Thus, it is possible to begin dissecting the stochastic variation of whole organismal phenotypes in multi-cellular organisms. Further, there are loci that modulate stochastic variation at different phenotypic levels. Finding the identity of these genes will be key to developing complete models linking genotype to phenotype. PMID:25569687
Essays on variational approximation techniques for stochastic optimization problems
Deride Silva, Julio A.
This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research as we were interested in relevant equilibrium applications that contain an uncertain component, and the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular, their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence, that was implemented computationally, obtaining numerical results for relevant examples. The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence
Learning-based stochastic object models for characterizing anatomical variations
Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua
2018-03-01
It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data corresponding to a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.
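A toy version of the sampling step described above: learn a mean shape plus principal modes from synthetic 2D landmark "contours", then draw a new shape by sampling mode coefficients within ±2 standard deviations. All data and dimensions here are synthetic stand-ins for the learned geometric attribute distributions, not the paper's pelvis model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic training set: noisy circular "organ contours" as landmark vectors.
n_train, n_pts = 40, 30
angles = np.linspace(0.0, 2 * np.pi, n_pts, endpoint=False)
base = np.c_[np.cos(angles), np.sin(angles)]
shapes = np.array([
    ((1 + 0.2 * rng.standard_normal()) * base
     + 0.05 * rng.standard_normal((n_pts, 2))).ravel()
    for _ in range(n_train)
])

# "Learn" the shape distribution: mean shape plus principal modes (PCA via SVD).
mean = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
n_modes = 3
sigma = s[:n_modes] / np.sqrt(n_train - 1)    # std dev of each mode coefficient

# Sample one stochastic phantom: mode coefficients clipped to +/- 2 std devs.
coeff = np.clip(rng.standard_normal(n_modes), -2.0, 2.0) * sigma
new_shape = (mean + coeff @ Vt[:n_modes]).reshape(n_pts, 2)
print(new_shape.shape)   # (30, 2)
```

Repeating the last two lines yields an ensemble of phantoms whose shape variability mirrors the training population rather than any single patient.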
The stochastic energy-Casimir method
Arnaudon, Alexis; Ganaba, Nader; Holm, Darryl D.
2018-04-01
In this paper, we extend the energy-Casimir stability method for deterministic Lie-Poisson Hamiltonian systems to provide sufficient conditions for stability in probability of stochastic dynamical systems with symmetries. We illustrate this theory with classical examples of coadjoint motion, including the rigid body, the heavy top, and the compressible Euler equation in two dimensions. The main result is that stable deterministic equilibria remain stable in probability up to a certain stopping time that depends on the amplitude of the noise for finite-dimensional systems and on the amplitude of the spatial derivative of the noise for infinite-dimensional systems.
Conformable variational iteration method
Directory of Open Access Journals (Sweden)
Omer Acan
2017-02-01
In this study, we introduce the conformable variational iteration method, based on the newly defined fractional derivative called the conformable fractional derivative. The new method is applied to two fractional-order ordinary differential equations. To examine the solutions it produces, linear homogeneous and non-linear non-homogeneous fractional ordinary differential equations are selected. The obtained results are compared with the exact solutions, and their graphs are plotted to demonstrate the efficiency and accuracy of the method.
Splines and variational methods
Prenter, P M
2008-01-01
One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions.
A stochastic method for computing hadronic matrix elements
Energy Technology Data Exchange (ETDEWEB)
Alexandrou, Constantia [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus). Computational-based Science and Technology Research Center; Dinter, Simon; Drach, Vincent [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Jansen, Karl [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hadjiyiannakou, Kyriakos [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Collaboration: European Twisted Mass Collaboration
2013-02-15
We present a stochastic method for the calculation of baryon three-point functions that is more versatile compared to the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.
Numerical methods for stochastic partial differential equations with white noise
Zhang, Zhongqiang
2017-01-01
This book covers numerical methods for stochastic partial differential equations with white noise using the framework of Wong-Zakai approximation. The book begins with some motivational and background material in the introductory chapters and is divided into three parts. Part I covers numerical stochastic ordinary differential equations. Here the authors start with numerical methods for SDEs with delay using the Wong-Zakai approximation and finite difference in time. Part II covers temporal white noise. Here the authors consider SPDEs as PDEs driven by white noise, where discretization of white noise (Brownian motion) leads to PDEs with smooth noise, which can then be treated by numerical methods for PDEs. In this part, recursive algorithms based on Wiener chaos expansion and stochastic collocation methods are presented for linear stochastic advection-diffusion-reaction equations. In addition, stochastic Euler equations are exploited as an application of stochastic collocation methods, where a numerical compa...
Insights into pre-reversal paleosecular variation from stochastic models
Directory of Open Access Journals (Sweden)
Klaudio ePeqini
2015-09-01
To provide insights on the paleosecular variation of the geomagnetic field and the mechanism of reversals, long time series of the dipolar magnetic moment are generated by two different stochastic models, known as the domino model and the inhomogeneous Lebovitz disk dynamo model, with initial values taken from paleomagnetic data. The former considers mutual interactions of N macrospins embedded in a uniformly rotating medium, where random forcing and dissipation act on each macrospin. With an appropriate set of parameter values, the series generated by this model show statistical behaviour similar to the time series of the SHA.DIF.14K model. The latter is an extension of the classical two-disk Rikitake model, considering N dynamo elements with appropriate interactions between them. We varied the parameter sets of both models, aiming to generate time series with behaviour similar to the long time series of recent secular variation (SV). Such series are then extended into the near future, obtaining reversals in both models. Analysis of the simulated time series shows that reversals appear after a persistent period of low geomagnetic field intensity, as is occurring at present.
Problems of Mathematical Finance by Stochastic Control Methods
Stettner, Łukasz
The purpose of this paper is to present main ideas of mathematics of finance using the stochastic control methods. There is an interplay between stochastic control and mathematics of finance. On the one hand stochastic control is a powerful tool to study financial problems. On the other hand financial applications have stimulated development in several research subareas of stochastic control in the last two decades. We start with pricing of financial derivatives and modeling of asset prices, studying the conditions for the absence of arbitrage. Then we consider pricing of defaultable contingent claims. Investments in bonds lead us to the term structure modeling problems. Special attention is devoted to historical static portfolio analysis called Markowitz theory. We also briefly sketch dynamic portfolio problems using viscosity solutions to Hamilton-Jacobi-Bellman equation, martingale-convex analysis method or stochastic maximum principle together with backward stochastic differential equation. Finally, long time portfolio analysis for both risk neutral and risk sensitive functionals is introduced.
Variational and potential formulation for stochastic partial differential equations
International Nuclear Information System (INIS)
Munoz S, A G; Ojeda, J; Sierra D, P; Soldovieri, T
2006-01-01
Recently there has been interest in finding a potential formulation for stochastic partial differential equations (SPDEs). The rationale behind this idea lies in obtaining all the dynamical information of the system under study from one single expression. In this letter we formally provide a general Lagrangian formalism for SPDEs using the Hojman et al method. We show that it is possible to write the corresponding effective potential starting from an s-equivalent Lagrangian, and that this potential is able to reproduce all the dynamics of the system once a special differential operator has been applied. This procedure can be used to study the complete time evolution and spatial inhomogeneities of the system under consideration, and is also suitable for the statistical mechanics description of the problem. (letter to the editor)
Numerical Methods for Stochastic Computations A Spectral Method Approach
Xiu, Dongbin
2010-01-01
The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods of high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth
Directory of Open Access Journals (Sweden)
WANG Yupu
2016-06-01
In order to better express the characteristics of satellite clock bias (SCB) and further improve its prediction precision, a new SCB prediction model is proposed that takes the physical features, cyclic variation, and stochastic variation of the space-borne atomic clock into consideration by using a robust least-squares collocation (LSC) method. The proposed model first fits and extracts the trend and cyclic terms of SCB with a quadratic polynomial model with periodic terms. Then, for the residual stochastic variation and possible gross errors hidden in the SCB data, the model employs a robust LSC method. The covariance function of the LSC is determined by selecting an empirical function and combining SCB prediction tests. Prediction tests using the final precise IGS SCB products show that the proposed model achieves better prediction performance: prediction accuracy improves by 0.457 ns and 0.948 ns, and the corresponding prediction stability by 0.445 ns and 1.233 ns, compared with the quadratic polynomial model and the grey model, respectively. The results also show that the proposed covariance function corresponding to the new model is reasonable.
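The deterministic part of such a model, a quadratic trend plus periodic terms fit by least squares, can be sketched as follows. This is a hedged illustration on synthetic data; the epochs, period, and coefficients are hypothetical, and the robust LSC treatment of the residuals is not reproduced.

```python
import numpy as np

# Sketch of the trend + cyclic part of an SCB model: quadratic polynomial
# plus one periodic term, fit by ordinary least squares on synthetic data.
rng = np.random.default_rng(1)

t = np.linspace(0.0, 2.0, 97)                  # epochs (days), hypothetical
P = 0.5                                        # assumed period (days)
true = 3.0 + 0.4 * t + 0.02 * t**2 + 0.1 * np.sin(2 * np.pi * t / P)
scb = true + 0.01 * rng.normal(size=t.size)    # synthetic clock bias (ns)

# Design matrix: columns [1, t, t^2, sin(2*pi*t/P), cos(2*pi*t/P)].
A = np.column_stack([np.ones_like(t), t, t**2,
                     np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)])
coef, *_ = np.linalg.lstsq(A, scb, rcond=None)

# Prediction: evaluate the fitted trend + cyclic model beyond the fit interval.
t_pred = 2.25
a_pred = np.array([1.0, t_pred, t_pred**2,
                   np.sin(2 * np.pi * t_pred / P), np.cos(2 * np.pi * t_pred / P)])
prediction = a_pred @ coef
```

In the paper's scheme, the residuals `scb - A @ coef` would then be passed to the robust LSC step with an empirically chosen covariance function.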
Energy Technology Data Exchange (ETDEWEB)
Chorošajev, Vladimir [Department of Theoretical Physics, Faculty of Physics, Vilnius University, Sauletekio 9-III, 10222 Vilnius (Lithuania); Gelzinis, Andrius; Valkunas, Leonas [Department of Theoretical Physics, Faculty of Physics, Vilnius University, Sauletekio 9-III, 10222 Vilnius (Lithuania); Department of Molecular Compound Physics, Center for Physical Sciences and Technology, Sauletekio 3, 10222 Vilnius (Lithuania); Abramavicius, Darius, E-mail: darius.abramavicius@ff.vu.lt [Department of Theoretical Physics, Faculty of Physics, Vilnius University, Sauletekio 9-III, 10222 Vilnius (Lithuania)
2016-12-20
Highlights: • The Davydov ansatze can be used for finite temperature simulations with an extension. • The accuracy is high if the system is strongly coupled to the environmental phonons. • The approach can simulate time-resolved fluorescence spectra. - Abstract: The time-dependent variational approach is a convenient method to characterize the excitation dynamics in molecular aggregates for different strengths of system-bath interaction, and it does not require any additional perturbative schemes. Until recently, however, this method was only applicable in the zero-temperature case. It has become possible to extend it to finite temperatures with the introduction of the stochastic time-dependent variational approach. Here we present a comparison between this approach and the exact hierarchical equations of motion approach for describing excitation dynamics over a broad range of temperatures. We calculate electronic population evolution, absorption and auxiliary time-resolved fluorescence spectra in different regimes and find that the stochastic approach shows excellent agreement with the exact approach when the system-bath coupling is sufficiently large and temperatures are high. The differences between the two methods are larger when temperatures are lower or the system-bath coupling is small.
Stochastic process variation in deep-submicron CMOS circuits and algorithms
Zjajo, Amir
2014-01-01
One of the most notable features of nanometer scale CMOS technology is the increasing magnitude of variability of the key device parameters affecting performance of integrated circuits. The growth of variability can be attributed to multiple factors, including the difficulty of manufacturing control, the emergence of new systematic variation-generating mechanisms, and most importantly, the increase in atomic-scale randomness, where device operation must be described as a stochastic process. In addition to wide-sense stationary stochastic device variability and temperature variation, existence of non-stationary stochastic electrical noise associated with fundamental processes in integrated-circuit devices represents an elementary limit on the performance of electronic circuits. In an attempt to address these issues, Stochastic Process Variation in Deep-Submicron CMOS: Circuits and Algorithms offers unique combination of mathematical treatment of random process variation, electrical noise and temperature and ne...
DEFF Research Database (Denmark)
Finlay, Chris; Olsen, Nils; Gillet, Nicolas
We present a new ensemble of time-dependent magnetic field models constructed from satellite and observatory data spanning 1997-2013 that are compatible with prior information concerning the temporal spectrum of core field variations. These models allow sharper field changes compared to traditional regularization methods based on minimizing the square of the second or third time derivative. We invert satellite and observatory data directly by adopting the external field and crustal field modelling framework of the CHAOS model, but apply the stochastic process method of Gillet et al. (2013) to the core field. Physical hypotheses can be tested by asking questions of the entire ensemble of core field models, rather than by interpreting any single model.
SYSTEMATIC AND STOCHASTIC VARIATIONS IN PULSAR DISPERSION MEASURES
International Nuclear Information System (INIS)
Lam, M. T.; Cordes, J. M.; Chatterjee, S.; Jones, M. L.; McLaughlin, M. A.; Armstrong, J. W.
2016-01-01
We analyze deterministic and random temporal variations in the dispersion measure (DM) from the full three-dimensional velocities of pulsars with respect to the solar system, combined with electron-density variations over a wide range of length scales. Previous treatments have largely ignored pulsars’ changing distances while favoring interpretations involving changes in sky position from transverse motion. Linear trends in pulsar DMs observed over 5–10 year timescales may signify sizable DM gradients in the interstellar medium (ISM) sampled by the changing direction of the line of sight to the pulsar. We show that motions parallel to the line of sight can also account for linear trends, for the apparent excess of DM variance over that extrapolated from scintillation measurements, and for the apparent non-Kolmogorov scalings of DM structure functions inferred in some cases. Pulsar motions through atomic gas may produce bow-shock ionized gas that also contributes to DM variations. We discuss the possible causes of periodic or quasi-periodic changes in DM, including seasonal changes in the ionosphere, annual variations of the solar elongation angle, structure in the heliosphere and ISM boundary, and substructure in the ISM. We assess the solar cycle’s role on the amplitude of ionospheric and solar wind variations. Interstellar refraction can produce cyclic timing variations from the error in transforming arrival times to the solar system barycenter. We apply our methods to DM time series and DM gradient measurements in the literature and assess their consistency with a Kolmogorov medium. Finally, we discuss the implications of DM modeling in precision pulsar timing experiments
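The DM structure functions mentioned above, used to test consistency with a Kolmogorov medium, are straightforward to compute from a DM time series. The sketch below uses a synthetic random-walk series, not real pulsar data; epochs and amplitudes are hypothetical.

```python
import numpy as np

# Illustrative computation of a structure function
#   D(tau) = <[DM(t + tau) - DM(t)]^2>,
# whose scaling with tau can be compared against the Kolmogorov
# expectation (a tau^{5/3} power law) as discussed in the abstract.
rng = np.random.default_rng(2)

n = 512
t = np.arange(n) * 30.0                      # observing epochs (days), hypothetical
dm = 1e-3 * np.cumsum(rng.normal(size=n))    # toy random-walk DM series (pc cm^-3)

def structure_function(x, lags):
    """Mean squared increment of x at each integer lag."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

lags = np.array([1, 2, 4, 8, 16, 32])
D = structure_function(dm, lags)
# For this random-walk toy series D(tau) grows roughly linearly with tau;
# a Kolmogorov medium would instead produce a tau^{5/3} scaling.
```

Fitting a power law to `D` versus lag (on log-log axes) gives the structure-function index whose departure from 5/3 is the diagnostic used in the paper's analysis.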
Schroedinger's variational method of quantization revisited
International Nuclear Information System (INIS)
Yasue, K.
1980-01-01
Schroedinger's original quantization procedure is revisited in the light of Nelson's stochastic framework of quantum mechanics. It is clarified why Schroedinger's proposal of a variational problem led us to a true description of quantum mechanics. (orig.)
Directory of Open Access Journals (Sweden)
Shaolin Ji
2013-01-01
This paper is devoted to a stochastic differential game (SDG) of a decoupled functional forward-backward stochastic differential equation (FBSDE). For our SDG, the associated upper and lower value functions are defined through the solutions of controlled functional backward stochastic differential equations (BSDEs). Applying the Girsanov transformation method introduced by Buckdahn and Li (2008), the upper and lower value functions are shown to be deterministic. We also generalize the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations to path-dependent ones. By establishing the dynamic programming principle (DPP), we derive that the upper and lower value functions are viscosity solutions of the corresponding upper and lower path-dependent HJBI equations, respectively.
Application of Stochastic Sensitivity Analysis to Integrated Force Method
Directory of Open Access Journals (Sweden)
X. F. Wei
2012-01-01
As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many structures in civil, mechanical, and aerospace engineering owing to its accurate estimation of forces. It is now being extended to the probabilistic domain. To assess the effect of uncertainty in system optimization and identification, the probabilistic sensitivity analysis of IFM was investigated in this study. A stochastic sensitivity analysis formulation of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to existing programs, since the models of stochastic finite element analysis and stochastic design sensitivity are almost identical.
Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance
Happola, Juho
2017-09-19
Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.
Linearly convergent stochastic heavy ball method for minimizing generalization error
Loizou, Nicolas; Richtarik, Peter
2017-01-01
In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
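The iteration analyzed here, an SGD step with fixed stepsize plus a heavy ball momentum term beta * (x_k - x_{k-1}), can be sketched on a quadratic loss as follows. The stepsize, momentum, and problem data below are illustrative choices, not values from the paper.

```python
import numpy as np

# Minimal sketch of the stochastic heavy ball method on the quadratic loss
# f(x) = (1/n) * sum_i (a_i^T x - b_i)^2, with a consistent (zero-noise)
# linear system so the iterates can converge to the exact solution.
rng = np.random.default_rng(3)

n, d = 200, 5
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star                   # consistent system: interpolation holds

def shb(stepsize=0.01, beta=0.5, iters=5000):
    x_prev = x = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(n)                        # sample one data point
        grad = 2.0 * (A[i] @ x - b[i]) * A[i]      # stochastic gradient
        # SGD step with fixed stepsize + heavy ball momentum term.
        x, x_prev = x - stepsize * grad + beta * (x - x_prev), x
    return x

x_hat = shb()
```

With `beta = 0` this reduces to plain SGD; the paper's result is that adding the momentum term preserves linear convergence of the expected loss in this regime.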
Barbagallo, Annamaria; Di Meglio, Guglielmo; Mauro, Paolo
2017-07-01
The aim of the paper is to study, in a Hilbert space setting, a general random oligopolistic market equilibrium problem in the presence of both production and demand excesses, and to characterize the random Cournot-Nash equilibrium principle by means of a stochastic variational inequality. Some existence results are presented.
Methods and models in mathematical biology deterministic and stochastic approaches
Müller, Johannes
2015-01-01
This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.
Energy Technology Data Exchange (ETDEWEB)
Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Mathematical Center for Interdiscipline Research, Soochow University, 1 Shizi Street, Jiangsu, Suzhou 215006 (China); Sun, Hui; Cheng, Li-Tien [Department of Mathematics, University of California, San Diego, La Jolla, California 92093-0112 (United States); Dzubiella, Joachim [Soft Matter and Functional Materials, Helmholtz-Zentrum Berlin, 14109 Berlin, Germany and Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin (Germany); Li, Bo, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Quantitative Biology Graduate Program, University of California, San Diego, La Jolla, California 92093-0112 (United States); McCammon, J. Andrew [Department of Chemistry and Biochemistry, Department of Pharmacology, Howard Hughes Medical Institute, University of California, San Diego, La Jolla, California 92093-0365 (United States)
2016-08-07
Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the
Stochastic seismic floor response analysis method for various damping systems
International Nuclear Information System (INIS)
Kitada, Y.; Hattori, K.; Ogata, M.; Kanda, J.
1991-01-01
A study using the stochastic seismic response analysis method, which is applicable to the estimation of floor response spectra, is carried out. A shortcoming of this stochastic method is that it tends to overestimate floor response spectra for low-damping systems, e.g. at 1% of the critical damping ratio. The cause of this shortcoming is investigated, and a number of improvements are made to the original method by taking the correlation of successive peaks in a response time history into account. The improved method is applied to a typical BWR reactor building, and the resulting floor response spectra are compared with those obtained by deterministic time history analysis. Floor response spectra estimated by the improved method consistently cover the response spectra obtained by the time history analysis for various damping ratios. (orig.)
Molecular dynamics with deterministic and stochastic numerical methods
Leimkuhler, Ben
2015-01-01
This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications. Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...
Energy Technology Data Exchange (ETDEWEB)
Webster, Clayton G [ORNL; Zhang, Guannan [ORNL; Gunzburger, Max D [ORNL
2012-10-01
Accurate predictive simulations of complex real-world applications require numerical approximations that, first, oppose the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus optimal adaptation is achieved. Error estimates and numerical examples will be used to compare the efficiency of the method with several other techniques.
Variational linear algebraic equations method
International Nuclear Information System (INIS)
Moiseiwitsch, B.L.
1982-01-01
A modification of the linear algebraic equations method is described which ensures a variational bound on the phaseshifts for potentials having a definite sign at all points. The method is illustrated by the elastic scattering of s-wave electrons by the static field of atomic hydrogen. (author)
Stochastic Spectral and Conjugate Descent Methods
Kovalev, Dmitry
2018-02-11
The state-of-the-art methods for solving optimization problems in big dimensions are variants of randomized coordinate descent (RCD). In this paper we introduce a fundamentally new type of acceleration strategy for RCD based on the augmentation of the set of coordinate directions by a few spectral or conjugate directions. As we increase the number of extra directions to be sampled from, the rate of the method improves, interpolating between the linear rate of RCD and a linear rate independent of the condition number. We also develop and analyze inexact variants of these methods in which the spectral and conjugate directions are allowed to be only approximate. We motivate the above development by proving several negative results which highlight the limitations of RCD with importance sampling.
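The RCD baseline that this acceleration strategy builds on can be sketched as follows for a convex quadratic, with importance sampling proportional to the coordinate-wise Lipschitz constants. This is a minimal illustration of plain RCD only; the paper's augmentation with spectral or conjugate directions is not reproduced, and the problem data are synthetic.

```python
import numpy as np

# Baseline randomized coordinate descent (RCD) with importance sampling on
# f(x) = 0.5 * x^T M x - c^T x, M symmetric positive definite.
rng = np.random.default_rng(4)

d = 10
B = rng.normal(size=(d, d))
M = B @ B.T + np.eye(d)          # symmetric positive definite matrix
c = rng.normal(size=d)
x_star = np.linalg.solve(M, c)   # exact minimizer, for reference only

L = np.diag(M).copy()            # coordinate-wise Lipschitz constants M_ii
p = L / L.sum()                  # importance sampling: p_i proportional to L_i

x = np.zeros(d)
for _ in range(5000):
    i = rng.choice(d, p=p)       # sample a coordinate direction
    grad_i = M[i] @ x - c[i]     # i-th partial derivative of f
    x[i] -= grad_i / L[i]        # exact minimization along coordinate i
```

The paper's accelerated variants would, with some probability, replace the sampled coordinate direction `e_i` by a spectral or conjugate direction of `M`, which is what improves the rate beyond plain RCD.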
Bä ck, Joakim; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul
2010-01-01
Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods
Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes
Helbing, Dirk
2010-01-01
This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...
Stochastic numerical methods an introduction for students and scientists
Toral, Raul
2014-01-01
Stochastic Numerical Methods introduces at Master level the numerical methods that use probability or stochastic concepts to analyze random processes. The book aims at being rather general and is addressed at students of natural sciences (Physics, Chemistry, Mathematics, Biology, etc.) and Engineering, but also social sciences (Economy, Sociology, etc.) where some of the techniques have been used recently to numerically simulate different agent-based models. Examples included in the book range from phase-transitions and critical phenomena, including details of data analysis (extraction of critical exponents, finite-size effects, etc.), to population dynamics, interfacial growth, chemical reactions, etc. Program listings are integrated in the discussion of numerical algorithms to facilitate their understanding. From the contents: Review of Probability Concepts; Monte Carlo Integration; Generation of Uniform and Non-uniform Random Numbers: Non-correlated Values; Dynamical Methods; Applications to Statistical Mechanics; In...
Improved stochastic approximation methods for discretized parabolic partial differential equations
Guiaş, Flavius
2016-12-01
We present improvements of the stochastic direct simulation method, a known numerical scheme based on Markov jump processes that is used for approximating solutions of ordinary differential equations. This scheme is especially suited to spatial discretizations of evolution partial differential equations (PDEs). By exploiting the full path simulation of the stochastic method, we use this first approximation as a predictor and construct improved approximations by Picard iterations, Runge-Kutta steps, or a combination of the two. As a consequence, the order of convergence is increased. We illustrate the features of the improved method on a standard benchmark problem, a reaction-diffusion equation modeling a combustion process in one space dimension (1D) and two space dimensions (2D).
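The predictor-corrector structure can be illustrated on a scalar ODE: a crude first-order predictor path is refined by a trapezoidal (Runge-Kutta-style) correction, raising the order of convergence. This sketch uses explicit Euler in place of the paper's stochastic jump predictor, so it mirrors only the structure of the method, not its stochastic ingredient; the test problem is illustrative:

```python
import math

def euler_path(f, y0, h, n):
    """Crude predictor: explicit Euler, first-order accurate."""
    y = [y0]
    for _ in range(n):
        y.append(y[-1] + h * f(y[-1]))
    return y

def corrected_path(f, y0, h, n):
    """Refine each step with the trapezoidal rule evaluated on the
    predictor value (Heun's method): second-order accurate."""
    y = [y0]
    for _ in range(n):
        pred = y[-1] + h * f(y[-1])                    # predictor step
        y.append(y[-1] + 0.5 * h * (f(y[-1]) + f(pred)))  # corrector step
    return y

# test problem y' = -y, y(0) = 1, exact solution exp(-t)
f, h, n = (lambda y: -y), 0.02, 50
err_pred = abs(euler_path(f, 1.0, h, n)[-1] - math.exp(-1.0))
err_corr = abs(corrected_path(f, 1.0, h, n)[-1] - math.exp(-1.0))
```

Halving h cuts err_pred roughly in half but err_corr roughly by four, which is the increase in convergence order the abstract refers to.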
Quantitative sociodynamics stochastic methods and models of social interaction processes
Helbing, Dirk
1995-01-01
Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...
Variational methods in molecular modeling
2017-01-01
This book presents tutorial overviews of many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems, including inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes, microemulsions, magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical unders...
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
Structural designs generated by the traditional method, the optimization method and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, constraints are imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest, while some variation was noticed in the designs themselves; this variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph; the center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, while weight can be reduced to a small value for a most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
Efficient decomposition and linearization methods for the stochastic transportation problem
International Nuclear Information System (INIS)
Holmberg, K.
1993-01-01
The stochastic transportation problem can be formulated as a convex transportation problem with a nonlinear objective function and linear constraints. We compare several different methods based on decomposition and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition and the better-known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large-scale problems. (authors) (27 refs.)
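The Frank-Wolfe method mentioned above replaces the nonlinear objective at each iteration by its linearization, which over transportation-type polytopes is a linear program with a cheap solution, and then moves part-way toward that solution. A minimal sketch on the probability simplex, used here as a stand-in for the transportation polytope; the quadratic objective and step rule are illustrative, not from the paper:

```python
def frank_wolfe_simplex(grad, x0, iters=2000):
    """Frank-Wolfe on the probability simplex: the linearized subproblem
    is solved by moving toward the vertex with the most negative gradient
    component, with the classic 2/(k+2) step size."""
    x = list(x0)
    for k in range(iters):
        g = grad(x)
        j = min(range(len(x)), key=lambda i: g[i])  # best vertex e_j
        gamma = 2.0 / (k + 2.0)
        x = [(1.0 - gamma) * xi for xi in x]        # convex combination
        x[j] += gamma
    return x

# illustrative objective: project c onto the simplex, min ||x - c||**2;
# the optimum is c shifted by (1 - sum(c)) / 3 in each coordinate
c = [0.2, 0.5, 0.1]
grad = lambda x: [2.0 * (xi - ci) for xi, ci in zip(x, c)]
x_star = frank_wolfe_simplex(grad, [1 / 3, 1 / 3, 1 / 3])
```

Every iterate stays feasible by construction, which is why the method suits problems where projection is expensive but linear minimization is easy.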
Linking numbers and variational method
International Nuclear Information System (INIS)
Oda, I.; Yahikozawa, S.
1989-09-01
The ordinary and generalized linking numbers for two surfaces of dimensions p and n-p-1 in an n-dimensional manifold are derived. We use a variational method based on the properties of topological quantum field theory in order to derive them. (author). 13 refs, 2 figs
Stochastic Recursive Algorithms for Optimization Simultaneous Perturbation Methods
Bhatnagar, S; Prashanth, L A
2013-01-01
Stochastic Recursive Algorithms for Optimization presents algorithms for constrained and unconstrained optimization and for reinforcement learning. Efficient perturbation approaches form a thread unifying all the algorithms considered. Simultaneous perturbation stochastic approximation and smooth fractional estimators for gradient- and Hessian-based methods are presented. These algorithms: • are easily implemented; • do not require an explicit system model; and • work with real or simulated data. Chapters on their application in service systems, vehicular traffic control and communications networks illustrate this point. The book is self-contained, with the necessary mathematical results placed in an appendix. The text provides easy-to-use, off-the-shelf algorithms that are given detailed mathematical treatment, so the material presented will be of significant interest to practitioners, academic researchers and graduate students alike. The breadth of applications makes the book appropriate for readers from sim...
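Simultaneous perturbation stochastic approximation (SPSA), the unifying thread named in the title, estimates all components of the gradient from just two noisy loss evaluations per iteration by perturbing every coordinate at once with random signs. A minimal sketch; the gain constants, decay exponents and the noisy test loss are illustrative choices, not taken from the book:

```python
import random

def spsa(loss, theta, iters=2000, a=0.1, c=0.1, seed=1):
    """SPSA: a full gradient estimate from two loss calls per step."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                 # commonly used gain decays
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]  # Rademacher signs
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        ghat = (loss(plus) - loss(minus)) / (2.0 * ck)
        # each coordinate gets the same scalar divided by its own sign
        theta = [t - ak * ghat / d for t, d in zip(theta, delta)]
    return theta

# noisy quadratic with minimum at (1, -2); the noise mimics simulated data
noise = random.Random(7)
loss = lambda th: (th[0] - 1) ** 2 + (th[1] + 2) ** 2 + noise.gauss(0.0, 0.01)
est = spsa(loss, [0.0, 0.0])
```

The two-evaluation cost is independent of the problem dimension, which is the property that makes SPSA attractive when each loss evaluation is a full simulation run.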
Stochastic methods for the fermion determinant in lattice quantum chromodynamics
Energy Technology Data Exchange (ETDEWEB)
Finkenrath, Jacob Friedrich
2015-02-17
In this thesis, algorithms in lattice quantum chromodynamics are presented by developing and using stochastic methods for fermion determinant ratios. For that purpose an integral representation is proved which can also be used for non-Hermitian matrices. The stochastic estimation, or Monte Carlo integration, of this integral representation introduces stochastic fluctuations, which are controlled by using Domain Decomposition of the Dirac operator and by introducing interpolation techniques. Determinant ratios of the lattice fermion operator, here the Wilson Dirac operator, are needed for corrections of the Boltzmann weight. These corrections have interesting applications, e.g. in the quark mass, by using mass reweighting. It will be shown that mass reweighting can be used, e.g., to improve the extrapolation in the light quark mass towards the chiral or physical point, or to introduce an isospin breaking by splitting up the mass of the light quark. Furthermore the extraction of the light quark masses will be shown by using dynamical two-flavor CLS ensembles. Stochastic estimation of determinant ratios can be used in Monte Carlo algorithms, e.g. in the Partial Stochastic Multi Step algorithm, which can sample two mass-degenerate quarks. The idea is to propose a new configuration weighted by the pure gauge weight and to include the fermion weight afterwards by using Metropolis accept-reject steps. It is shown, by using an adequate interpolation with relative gauge fixing and a hierarchical filter structure, that it is possible to simulate moderate lattices up to (2.1 fm)^4. Furthermore the iteration of the pure gauge update can be increased, which can decouple long autocorrelation times from the weighting with the fermions. Moreover a novel Hybrid Monte Carlo algorithm based on Domain Decomposition and combined with mass reweighting is presented. By using Domain Decomposition it is possible to split up the mass term in the Schur complement and the block operators. By introducing a higher mass
Modeling bias and variation in the stochastic processes of small RNA sequencing.
Argyropoulos, Christos; Etheridge, Alton; Sakhanenko, Nikita; Galas, David
2017-06-20
The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear-quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input, and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
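A linear-quadratic mean-variance relation, var ≈ μ + φμ², is exactly what a Poisson-gamma (negative binomial) count model produces, which is one family GAMLSS can fit. A minimal pure-Python simulation check of that relation; the parameter values are illustrative and this is not the paper's fitted model:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's multiplication method; adequate for moderate lambda."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def neg_binomial(rng, mu, phi):
    """Poisson-gamma mixture with mean mu and variance mu + phi * mu**2."""
    lam = rng.gammavariate(1.0 / phi, phi * mu)  # gamma mean mu, var phi*mu^2
    return poisson(rng, lam)

rng = random.Random(2)
draws = [neg_binomial(rng, mu=20.0, phi=0.1) for _ in range(50_000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / (len(draws) - 1)
# linear-quadratic relation predicts var near 20 + 0.1 * 20**2 = 60
```

Overdispersion of this kind is why a plain Poisson model (variance equal to the mean) understates the variability of sequence counts.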
An h-adaptive stochastic collocation method for stochastic EMC/EMI analysis
Yücel, Abdulkadir C.
2010-07-01
The analysis of electromagnetic compatibility and interference (EMC/EMI) phenomena is often fraught with randomness in a system's excitation (e.g., the amplitude, phase, and location of internal noise sources) or configuration (e.g., the routing of cables, the placement of electronic systems, component specifications, etc.). To bound the probability of system malfunction, fast and accurate techniques to quantify the uncertainty in system observables (e.g., voltages across mission-critical circuit elements) are called for. Recently proposed stochastic frameworks [1-2] combine deterministic electromagnetic (EM) simulators with stochastic collocation (SC) methods that approximate system observables using generalized polynomial chaos (gPC) expansions [3] (viz. orthogonal polynomials spanning the entire random domain) to estimate their statistical moments and probability density functions (pdfs). When constructing gPC expansions, the EM simulator is used solely to evaluate system observables at collocation points prescribed by the SC-gPC scheme. The frameworks in [1-2] therefore are non-intrusive and straightforward to implement. That said, they become inefficient and inaccurate for system observables that vary rapidly or are discontinuous in the random variables (as their representations may require very high-order polynomials). © 2010 IEEE.
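A non-intrusive SC-gPC scheme of the kind described runs the deterministic solver only at quadrature nodes and assembles a polynomial surrogate from those runs. A minimal one-dimensional sketch with a uniform random input and Legendre polynomials; the "simulator" is replaced by exp(ξ), an illustrative observable, not an EM solver:

```python
import math

# 5-point Gauss-Legendre rule on [-1, 1]: (node, weight) pairs
GL5 = [(-0.9061798459, 0.2369268851), (-0.5384693101, 0.4786286705),
       (0.0, 0.5688888889), (0.5384693101, 0.4786286705),
       (0.9061798459, 0.2369268851)]

def legendre(n, x):
    """Legendre polynomial P_n(x) via the three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

observable = math.exp  # stands in for one run of the deterministic simulator

# gPC coefficients c_n = (2n+1)/2 * integral of u(xi) P_n(xi), by quadrature;
# the simulator is evaluated only at the 5 collocation nodes
coeffs = [(2 * n + 1) / 2.0 * sum(w * observable(x) * legendre(n, x)
                                  for x, w in GL5)
          for n in range(4)]

surrogate = lambda xi: sum(c * legendre(n, xi) for n, c in enumerate(coeffs))
mean = coeffs[0]                      # E[u] for xi ~ U(-1, 1)
variance = sum(c * c / (2 * n + 1)    # uses E[P_n**2] = 1 / (2n+1)
               for n, c in enumerate(coeffs) if n > 0)
```

For this smooth observable four terms already capture the statistics well; the abstract's caveat is that rapidly varying or discontinuous observables would need far higher polynomial orders.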
Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.
2018-05-01
As renewable energies are increasingly integrated into power systems, there is growing interest in the stochastic analysis of power systems. Better techniques should be developed to account for the uncertainty caused by the penetration of renewables and consequently to analyse its impacts on the stochastic stability of power systems. In this paper, Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subjected to such random excitation, the Joint Probability Density Function (JPDF) of the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. To solve this equation, a numerical method is adopted; special measures are taken such that the generalized FPK equation is satisfied in the average sense of integration with the assumed PDF. Both weak and strong intensities of the stochastic excitations are considered in a single machine infinite bus power system. The numerical analysis gives the same result as the Monte Carlo simulation. Potential studies on the stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.
Solution verification, goal-oriented adaptive methods for stochastic advection–diffusion problems
Almeida, Regina C.; Oden, J. Tinsley
2010-08-01
A goal-oriented analysis of linear, stochastic advection-diffusion models is presented which provides both a method for solution verification as well as a basis for improving results through adaptation of both the mesh and the way random variables are approximated. A class of model problems with random coefficients and source terms is cast in a variational setting. Specific quantities of interest are specified which are also random variables. A stochastic adjoint problem associated with the quantities of interest is formulated and a posteriori error estimates are derived. These are used to guide an adaptive algorithm which adjusts the sparse probabilistic grid so as to control the approximation error. Numerical examples are given to demonstrate the methodology for a specific model problem. © 2010 Elsevier B.V.
Symplectic Integrators to Stochastic Hamiltonian Dynamical Systems Derived from Composition Methods
Directory of Open Access Journals (Sweden)
Tetsuya Misawa
2010-01-01
“Symplectic” schemes for stochastic Hamiltonian dynamical systems are formulated through the “composition methods” (or operator splitting methods) proposed by Misawa (2001). In the proposed methods, a symplectic map, which is given by the solution of a stochastic Hamiltonian system, is approximated by composition of the stochastic flows derived from simpler Hamiltonian vector fields. The global error orders of the numerical schemes derived from the stochastic composition methods are provided. To examine the superiority of the new schemes, some illustrative numerical simulations based on the proposed schemes are carried out for a stochastic harmonic oscillator system.
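For a stochastic harmonic oscillator with additive noise in the momentum, a composition scheme splits each step into flows of simpler Hamiltonian pieces plus the noise increment; with the noise switched off it reduces to the symplectic Störmer-Verlet map. A minimal sketch, not Misawa's scheme itself; the step size and placement of the noise are illustrative:

```python
import math
import random

def stochastic_leapfrog(q, p, sigma, h, n, seed=3):
    """Composition scheme for dq = p dt, dp = -q dt + sigma dW:
    half kick + half noise, full drift, half kick + half noise."""
    rng = random.Random(seed)
    path = [(q, p)]
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(h))        # Brownian increment
        p += -0.5 * h * q + 0.5 * sigma * dw     # half kick (potential flow)
        q += h * p                               # full drift (kinetic flow)
        p += -0.5 * h * q + 0.5 * sigma * dw     # half kick (potential flow)
        path.append((q, p))
    return path

# with sigma = 0 the map is the symplectic Stormer-Verlet scheme, so the
# energy H = (q**2 + p**2) / 2 stays bounded over long integration times
det_path = stochastic_leapfrog(1.0, 0.0, sigma=0.0, h=0.05, n=2000)
energy_drift = max(abs(0.5 * (q * q + p * p) - 0.5) for q, p in det_path)
```

The bounded energy error of the deterministic limit is the structural property that motivates symplectic treatment of the stochastic system as well.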
A comparative study of two stochastic mode reduction methods
Energy Technology Data Exchange (ETDEWEB)
Stinis, Panagiotis
2005-09-01
We present a comparative study of two methods for the reduction of the dimensionality of a system of ordinary differential equations that exhibits time-scale separation. Both methods lead to a reduced system of stochastic differential equations. The novel feature of these methods is that they allow the use, in the reduced system, of higher order terms in the resolved variables. The first method, proposed by Majda, Timofeyev and Vanden-Eijnden, is based on an asymptotic strategy developed by Kurtz. The second method is a short-memory approximation of the Mori-Zwanzig projection formalism of irreversible statistical mechanics, as proposed by Chorin, Hald and Kupferman. We present conditions under which the reduced models arising from the two methods should have similar predictive ability. We apply the two methods to test cases that satisfy these conditions. The form of the reduced models and the numerical simulations show that the two methods have similar predictive ability as expected.
Natural tracer test simulation by stochastic particle tracking method
International Nuclear Information System (INIS)
Ackerer, P.; Mose, R.; Semra, K.
1990-01-01
Stochastic particle tracking methods are well adapted to 3D transport simulations where the discretization requirements of other methods usually cannot be satisfied. They do, however, need a very accurate approximation of the velocity field. The described code is based on the mixed hybrid finite element method (MHFEM) to calculate the piezometric head and velocity fields. The random-walk method is used to simulate mass transport. The main advantages of the MHFEM over finite difference (FD) or finite element (FE) methods are the simultaneous calculation of pressure and velocity, which are both treated as unknowns; the possibility of interpolating velocities everywhere; and the continuity of the normal component of the velocity vector from one element to another. For these reasons, the MHFEM is well adapted to particle tracking methods. After a general description of the numerical methods, the model is used to simulate the observations made during the Twin Lake Tracer Test in 1983. A good match is found between observed and simulated heads and concentrations. (Author) (12 refs., 4 figs.)
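In a random-walk transport code, each particle carries a fraction of the solute mass and moves by a deterministic advective step plus a Gaussian dispersive jump with variance 2·D·Δt; concentrations are then recovered from particle densities. A minimal one-dimensional sketch with a uniform velocity standing in for the MHFEM velocity field; all parameter values are illustrative:

```python
import math
import random

def random_walk_transport(n_particles, v, D, dt, steps, seed=7):
    """Random-walk method: advective step v*dt plus dispersive Gaussian
    jump with standard deviation sqrt(2 * D * dt) per time step."""
    rng = random.Random(seed)
    x = [0.0] * n_particles
    for _ in range(steps):
        x = [xi + v * dt + rng.gauss(0.0, math.sqrt(2.0 * D * dt))
             for xi in x]
    return x

# plume after t = steps * dt = 5: center of mass near v*t, spread near 2*D*t
cloud = random_walk_transport(20_000, v=1.0, D=0.5, dt=0.1, steps=50)
mean = sum(cloud) / len(cloud)
spread = sum((xi - mean) ** 2 for xi in cloud) / len(cloud)
```

The moments of the particle cloud reproduce the analytical advection-dispersion solution without any spatial grid, which is the appeal noted in the abstract for 3D problems.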
Different seeds to solve the equations of stochastic point kinetics using the Euler-Maruyama method
International Nuclear Information System (INIS)
Suescun D, D.; Oviedo T, M.
2017-09-01
In this paper, a numerical study of stochastic differential equations that describe the kinetics in a nuclear reactor is presented. These equations, known as the stochastic point kinetics equations, model temporal variations in the neutron population density and in the concentrations of delayed neutron precursors. Because these equations are probabilistic in nature (the random oscillations in the neutron and precursor populations are considered to be approximately normally distributed) and also possess strong coupling and stiffness properties, the proposed method for the numerical simulations is the Euler-Maruyama scheme, which provides very good approximations for calculating the neutron population and the concentrations of delayed neutron precursors. The method was computationally tested for different seeds, initial conditions, experimental data and forms of reactivity, first for one group of precursors and then for six groups of delayed neutron precursors, at each time step with 5000 Brownian motions per seed. In a paper reported in the literature the Euler-Maruyama method was proposed, but there are many doubts about the reported values, and the seed used was not reported, so this work is expected to rectify those values. After taking the average over the different seeds used to generate the pseudo-random numbers, the results provided by the Euler-Maruyama scheme are compared, in mean and standard deviation, with other methods reported in the literature and with the results of the deterministic model of the point kinetics equations. This comparison confirms in particular that the Euler-Maruyama scheme is an efficient method to solve the stochastic point kinetics equations, though with values different from those found and reported by the other author. The Euler-Maruyama method is simple and easy to implement, and provides acceptable results for the neutron population density and the concentrations of delayed neutron precursors and
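The Euler-Maruyama scheme discretizes an SDE dX = a(X) dt + b(X) dW by freezing the coefficients over each step and drawing the Brownian increment as a N(0, h) variate. A minimal sketch on a scalar SDE with a known mean (geometric Brownian motion), standing in for the coupled point kinetics system; all parameters and the path count are illustrative:

```python
import math
import random

def euler_maruyama(x0, drift, diffusion, h, n_steps, rng):
    """One Euler-Maruyama path: x += a(x) * h + b(x) * dW per step."""
    x = x0
    for _ in range(n_steps):
        x += drift(x) * h + diffusion(x) * rng.gauss(0.0, math.sqrt(h))
    return x

# dX = mu X dt + sigma X dW (geometric Brownian motion): E[X_T] = e^(mu T)
mu, sigma, T, n = 0.5, 0.2, 1.0, 100
rng = random.Random(11)
paths = [euler_maruyama(1.0, lambda x: mu * x, lambda x: sigma * x,
                        T / n, n, rng)
         for _ in range(5000)]
mean_T = sum(paths) / len(paths)
```

As in the paper, statistics are taken over an ensemble of paths (here 5000), and the seed fixes the ensemble so that reported values are reproducible.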
Methods for solving the stochastic point reactor kinetic equations
International Nuclear Information System (INIS)
Quabili, E.R.; Karasulu, M.
1979-01-01
Two new methods are presented for the analysis of the statistical properties of nonlinear outputs of a point reactor subject to stochastic non-white reactivity inputs: Bourret's approximation and logarithmic linearization. The results have been compared with the exact results previously obtained in the case of Gaussian white reactivity input. It was found that when the reactivity noise has a short correlation time, Bourret's approximation should be recommended because it yields results superior to those yielded by logarithmic linearization. When the correlation time is long, Bourret's approximation is not valid, but in that case, if one can assume the reactivity noise to be Gaussian, one may use logarithmic linearization. (author)
Analytic continuation of quantum Monte Carlo data. Stochastic sampling method
Energy Technology Data Exchange (ETDEWEB)
Ghanem, Khaldoon; Koch, Erik [Institute for Advanced Simulation, Forschungszentrum Juelich, 52425 Juelich (Germany)
2016-07-01
We apply Bayesian inference to the analytic continuation of quantum Monte Carlo (QMC) data from the imaginary axis to the real axis. Demanding a proper functional Bayesian formulation of any analytic continuation method leads naturally to the stochastic sampling method (StochS) as the Bayesian method with the simplest prior, while it excludes the maximum entropy method and Tikhonov regularization. We present a new efficient algorithm for performing StochS that reduces computational times by orders of magnitude in comparison to earlier StochS methods. We apply the new algorithm to a wide variety of typical test cases: spectral functions and susceptibilities from DMFT and lattice QMC calculations. Results show that StochS performs well and is able to resolve sharp features in the spectrum.
Experiences using DAKOTA stochastic expansion methods in computational simulations.
Energy Technology Data Exchange (ETDEWEB)
Templeton, Jeremy Alan; Ruthruff, Joseph R.
2012-01-01
Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels that may be needed to achieve convergence.
Dimension Reduction and Discretization in Stochastic Problems by Regression Method
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager
1996-01-01
The chapter mainly deals with dimension reduction and field discretizations based directly on the concept of linear regression. Several examples of interesting applications in stochastic mechanics are also given.Keywords: Random fields discretization, Linear regression, Stochastic interpolation, ...
International Nuclear Information System (INIS)
Ge, Gen; Li, ZePeng
2016-01-01
A modified stochastic averaging method for single-degree-of-freedom (SDOF) oscillators with strong nonlinearity under white noise excitations is proposed. Since the existing approach for strongly nonlinear SDOF systems derived by Zhu and Huang [14, 15] is quite time-consuming in calculating the drift and diffusion coefficients, and their expressions are considerably long, the so-called He's energy balance method is applied to overcome this minor defect. By prescribing an averaged frequency beforehand, the modified method offers more concise approximate expressions for the drift and diffusion coefficients without weakening the accuracy of the predicted responses too much. Three examples are given to illustrate the proposed approach: an oscillator with coexisting cubic and quadratic nonlinearities, a quadratic nonlinear oscillator under external white noise excitation, and an externally excited Duffing-Rayleigh oscillator. The three examples are excited by Gaussian white noise and Gaussian colored noise separately. The stationary probability densities of amplitude and energy, together with the joint probability density of displacement and velocity, are studied to verify the presented approach. The reliability of the systems was also investigated to offer further support. Digital simulations were carried out, and their output coincides well with the theoretical approximations.
Simple method to generate and fabricate stochastic porous scaffolds
Energy Technology Data Exchange (ETDEWEB)
Yang, Nan, E-mail: y79nzw@163.com; Gao, Lilan; Zhou, Kuntao
2015-11-01
Considerable effort has been made to generate regular porous structures (RPSs) using function-based methods, although little effort has been made to construct stochastic porous structures (SPSs) using the same methods. In this short communication, we propose a straightforward method for SPS construction that is simple in terms of both methodology and the operations used. Using our method, we can obtain an SPS with functionally graded, heterogeneous and interconnected pores and target pore size and porosity distributions, which are useful for applications in tissue engineering. The resulting SPS models can be directly fabricated using additive manufacturing (AM) techniques. - Highlights: • Random porous structures are constructed based on their regular counterparts. • Functionally graded random pores can be constructed easily. • The scaffolds can be directly fabricated using additive manufacturing techniques.
Stochastic rainfall synthesis for urban applications using different regionalization methods
Callau Poduje, A. C.; Leimbach, S.; Haberlandt, U.
2017-12-01
The proper design and efficient operation of urban drainage systems require long and continuous rainfall series at a high temporal resolution. Unfortunately, such time series are usually available at only a few locations, and it is therefore desirable to develop a stochastic precipitation model to generate rainfall at locations without observations. The model presented is based on an alternating renewal process and involves an external and an internal structure. The members of these structures are described by probability distributions which are site-specific. Different regionalization methods based on site descriptors are presented, which are used for estimating the distributions at locations without observations. Regional frequency analysis, multiple linear regression and a vine-copula method are applied for this purpose. An area located in the north-west of Germany, involving a total of 81 stations with 5 min rainfall records, is used to compare the different methods. The site descriptors include information available for the whole region: position, topography and hydrometeorological characteristics estimated from long-term observations. The methods are compared directly by cross validation of different rainfall statistics. Given that the model is stochastic, the evaluation is performed on ensembles of many long synthetic time series which are compared with observed ones. The performance is also indirectly evaluated by setting up a fictional urban hydrological system to test the capability of the different methods regarding flooding and overflow characteristics. The results show a good representation of the seasonal variability and good performance in reproducing the sample statistics of the rainfall characteristics. The copula-based method proves to be the most robust of the three. Advantages and disadvantages of the different methods are presented and discussed.
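The external structure of such an alternating renewal model is just a sequence of dry and wet spell durations drawn in turn, with each wet spell assigned a rainfall amount; the fitted, site-specific distributions are what the regionalization methods must transfer to ungauged locations. A minimal sketch with exponential spells and one intensity per event; the distribution families and parameter values are illustrative, not those fitted in the study:

```python
import random

def alternating_renewal_rain(mean_dry, mean_wet, mean_intensity,
                             t_end, seed=5):
    """Alternating renewal process: dry and wet spell durations drawn in
    turn; each wet spell gets a rainfall intensity (external structure)."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while t < t_end:
        t += rng.expovariate(1.0 / mean_dry)        # dry spell duration
        wet = rng.expovariate(1.0 / mean_wet)       # wet spell duration
        intensity = rng.expovariate(1.0 / mean_intensity)
        events.append((t, wet, intensity))          # (start, duration, rate)
        t += wet
    return events

# long synthetic record (hours); expected wet fraction is 5 / (30 + 5)
events = alternating_renewal_rain(30.0, 5.0, 2.0, t_end=10_000.0)
wet_fraction = sum(wet for _, wet, _ in events) / 10_000.0
```

The internal structure of real models then disaggregates each event into a within-event hyetograph; this sketch stops at the event scale.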
Approaching complexity by stochastic methods: From biological systems to turbulence
Energy Technology Data Exchange (ETDEWEB)
Friedrich, Rudolf [Institute for Theoretical Physics, University of Muenster, D-48149 Muenster (Germany); Peinke, Joachim [Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Sahimi, Muhammad [Mork Family Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA 90089-1211 (United States); Reza Rahimi Tabar, M., E-mail: mohammed.r.rahimi.tabar@uni-oldenburg.de [Department of Physics, Sharif University of Technology, Tehran 11155-9161 (Iran, Islamic Republic of); Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Fachbereich Physik, Universitaet Osnabrueck, Barbarastrasse 7, 49076 Osnabrueck (Germany)
2011-09-15
This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
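Issue (i) above, reconstructing a Langevin equation from measured data, amounts to estimating the first two conditional moments of the increments, the Kramers-Moyal coefficients D1 (drift) and D2 (diffusion), inside a bin around each state value. A minimal sketch on synthetic Ornstein-Uhlenbeck data; the process parameters and bin width are illustrative:

```python
import math
import random

def simulate_ou(theta, sigma, h, n, seed=13):
    """Euler path of the Ornstein-Uhlenbeck SDE dx = -theta x dt + sigma dW."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += -theta * x * h + sigma * rng.gauss(0.0, math.sqrt(h))
        path.append(x)
    return path

def kramers_moyal(path, h, x_center, width):
    """Estimate D1 and D2 at x_center from conditional increment moments:
    D1 = <dx>/h and D2 = <dx**2>/(2h) over samples starting in the bin."""
    inc = [b - a for a, b in zip(path, path[1:])
           if abs(a - x_center) < width]
    d1 = sum(inc) / (len(inc) * h)
    d2 = sum(i * i for i in inc) / (len(inc) * 2.0 * h)
    return d1, d2

path = simulate_ou(theta=1.0, sigma=0.5, h=0.01, n=200_000)
d1, d2 = kramers_moyal(path, 0.01, x_center=0.3, width=0.1)
# expect d1 near -theta * 0.3 = -0.3 and d2 near sigma**2 / 2 = 0.125
```

In real data the limit h → 0 is replaced by extrapolation from finite sampling intervals, and the Markov-Einstein scale discussed above sets the smallest interval at which this estimation is meaningful.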
Variational methods for field theories
Energy Technology Data Exchange (ETDEWEB)
Ben-Menahem, S.
1986-09-01
Four field theory models are studied: Periodic Quantum Electrodynamics (PQED) in (2 + 1) dimensions, free scalar field theory in (1 + 1) dimensions, the Quantum XY model in (1 + 1) dimensions, and the (1 + 1) dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path-integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening and confinement through tunneling between degenerate vacua. This includes a better quantitative agreement for the string tensions in the two approaches. Free field theory is used as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). This "adiabatic truncation" method gives very accurate results for ground-state energy density and correlation functions. Various adiabatic schemes, with one variable kept per site and then two variables per site, are used. For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. A connection is established with the vortex Coulomb gas of the Euclidean path integral approach. The approximations used are taken from the realms of statistical mechanics (mean field approximation, transfer-matrix methods) and of quantum mechanics (iterative blocking schemes). In developing blocking schemes based on continuous variables, problems due to the periodicity of the model were solved. Our results exhibit an order-disorder phase transition. The transfer-matrix method is used to find a good (non-blocking) trial ground state for the Ising model in a transverse magnetic field in (1 + 1) dimensions.
Local Approximation and Hierarchical Methods for Stochastic Optimization
Cheng, Bolong
In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state spaces. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the
Stochastic Galerkin methods for the steady-state Navier–Stokes equations
Energy Technology Data Exchange (ETDEWEB)
Sousedík, Bedřich, E-mail: sousedik@umbc.edu [Department of Mathematics and Statistics, University of Maryland, Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250 (United States); Elman, Howard C., E-mail: elman@cs.umd.edu [Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742 (United States)
2016-07-01
We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.
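The generalized polynomial chaos input model the paper assumes can be illustrated for a lognormal random viscosity, which has known closed-form coefficients in probabilists' Hermite polynomials. The parameters and truncation order below are illustrative choices, not the paper's benchmark values.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermeval

# gPC expansion of a lognormal random viscosity nu = exp(mu + s*xi),
# xi ~ N(0,1), in probabilists' Hermite polynomials He_k.
# Closed-form coefficients: nu_k = exp(mu + s^2/2) * s^k / k!
mu, s, order = -2.0, 0.3, 6
coef = np.array([np.exp(mu + s**2 / 2) * s**k / math.factorial(k)
                 for k in range(order + 1)])

# Check the truncated expansion pointwise at a few samples of xi.
xi = np.array([-2.0, 0.0, 1.5])
exact = np.exp(mu + s * xi)
approx = hermeval(xi, coef)   # evaluates sum_k coef[k] * He_k(xi)
err = np.max(np.abs(approx - exact))
print(err)
```

A stochastic Galerkin discretization would then expand the velocity and pressure fields in the same He_k basis and project the Navier-Stokes residual onto it, which is where the block-structured linear systems the paper preconditions come from.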
Stochastic weighted particle methods for population balance equations
International Nuclear Information System (INIS)
Patterson, Robert I.A.; Wagner, Wolfgang; Kraft, Markus
2011-01-01
Highlights: → Weight transfer functions for Monte Carlo simulation of coagulation. → Efficient support for single-particle growth processes. → Comparisons to analytic solutions and soot formation problems. → Better numerical accuracy for less common particles. - Abstract: A class of coagulation weight transfer functions is constructed, each member of which leads to a stochastic particle algorithm for the numerical treatment of population balance equations. These algorithms are based on systems of weighted computational particles and the weight transfer functions are constructed such that the number of computational particles does not change during coagulation events. The algorithms also facilitate the simulation of physical processes that change single particles, such as growth, or other surface reactions. Four members of the algorithm family have been numerically validated by comparison to analytic solutions to simple problems. Numerical experiments have been performed for complex laminar premixed flame systems in which members of the class of stochastic weighted particle methods were compared to each other and to a direct simulation algorithm. Two of the weighted algorithms have been shown to offer performance advantages over the direct simulation algorithm in situations where interest is focused on the larger particles in a system. The extent of this advantage depends on the particular system and on the quantities of interest.
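As a point of reference for the weighted algorithms above, a direct (unweighted) stochastic simulation of a constant-kernel coagulation problem can be written in a few lines; for the constant kernel the mean particle count has the mean-field form n(t) = n0 / (1 + n0*K*t/2), which makes a convenient check. The kernel value, particle count, and horizon below are illustrative, not from the paper.

```python
import random

# Direct stochastic simulation of constant-kernel coagulation: any pair
# of the n particles coagulates at rate K, so the total event rate is
# K * n * (n - 1) / 2 and each event removes one particle.
def coagulate(n0=1000, kernel=1e-3, t_final=2.0, seed=0):
    rng = random.Random(seed)
    t, n = 0.0, n0
    while n > 1:
        rate = kernel * n * (n - 1) / 2.0
        t += rng.expovariate(rate)      # waiting time to next coagulation
        if t > t_final:
            break
        n -= 1
    return n

runs = [coagulate(seed=s) for s in range(20)]
mean_n = sum(runs) / len(runs)
# Mean-field prediction: 1000 / (1 + 1000 * 1e-3 * 2.0 / 2) = 500.
print(mean_n)
```

The weight transfer functions of the paper modify the event step so that computational particles change weight instead of being removed, which is what keeps the particle count constant and improves the statistics of rare, large particles.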
A multi-stage stochastic transmission expansion planning method
International Nuclear Information System (INIS)
Akbari, Tohid; Rahimikian, Ashkan; Kazemi, Ahad
2011-01-01
Highlights: → We model a multi-stage stochastic transmission expansion planning problem. → We include available transfer capability (ATC) in our model. → Involving this criterion will increase the ATC between source and sink points. → Power system reliability will be increased and more money can be saved. - Abstract: This paper presents a multi-stage stochastic model for short-term transmission expansion planning considering the available transfer capability (ATC). The ATC can have a huge impact on the power market outcomes and the power system reliability. The transmission expansion planning (TEP) studies deal with many uncertainties, such as system load uncertainties that are considered in this paper. The Monte Carlo simulation method has been applied for generating different scenarios. A scenario reduction technique is used for reducing the number of scenarios. The objective is to minimize the sum of investment costs (IC) and the expected operation costs (OC). The solution technique is based on the benders decomposition algorithm. The N-1 contingency analysis is also done for the TEP problem. The proposed model is applied to the IEEE 24 bus reliability test system and the results are efficient and promising.
Fields Institute International Symposium on Asymptotic Methods in Stochastics
Kulik, Rafal; Haye, Mohamedou; Szyszkowicz, Barbara; Zhao, Yiqiang
2015-01-01
This book contains articles arising from a conference in honour of mathematician-statistician Miklós Csörgő on the occasion of his 80th birthday, held in Ottawa in July 2012. It comprises research papers and overview articles, which provide a substantial glimpse of the history and state-of-the-art of the field of asymptotic methods in probability and statistics, written by leading experts. The volume consists of twenty articles on topics including limit theorems for self-normalized processes, planar processes, the central limit theorem and laws of large numbers, change-point problems, short and long range dependent time series, applied probability and stochastic processes, and the theory and methods of statistics. It also includes Csörgő's list of publications during more than 50 years, since 1962.
A Modern Theory of Random Variation
Muldowney, Patrick
2012-01-01
A Modern Theory of Random Variation is a new and radical re-formulation of the mathematical underpinnings of subjects as diverse as investment, communication engineering, and quantum mechanics. Setting aside the classical theory of probability measure spaces, the book utilizes a mathematically rigorous version of the theory of random variation that bases itself exclusively on finitely additive probability distribution functions. In place of twentieth century Lebesgue integration and measure theory, the author uses the simpler concept of Riemann sums, and the non-absolute Riemann-type integration of Henstock. Readers are supplied with an accessible approach to standard elements of probability theory such as the central limit theorem and Brownian motion as well as remarkable, new results on Feynman diagrams and stochastic integrals. Throughout the book, detailed numerical demonstrations accompany the discussions of abstract mathematical theory, from the simplest elements of the subject to the most complex. I...
A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks
Moraes, Alvaro; Tempone, Raul; Vilanova, Pedro
2016-01-01
In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable for coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
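The exact method used above for low-activity channels is the standard stochastic simulation algorithm (SSA, Gillespie's method). A minimal sketch on a two-channel birth-death network, with illustrative rate constants, shows the basic event loop that the reaction-splitting and tau-leap machinery accelerates:

```python
import random

# Exact SSA for a birth-death network: channel 1 is X -> X+1 with
# propensity k1, channel 2 is X -> X-1 with propensity k2*X.
def ssa_birth_death(k1=10.0, k2=1.0, x0=0, t_final=50.0, seed=1):
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        a1, a2 = k1, k2 * x          # channel propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)     # exponential time to next reaction
        if t > t_final:
            return x
        if rng.random() * a0 < a1:   # pick a channel proportionally
            x += 1
        else:
            x -= 1

# The stationary distribution is Poisson with mean k1/k2 = 10;
# averaging a few independent paths should land near that value.
samples = [ssa_birth_death(seed=s) for s in range(200)]
mean = sum(samples) / len(samples)
print(mean)
```

When k2*x makes the death channel fire many times per unit time, the paper's heuristic would reclassify it as high-activity and advance it with tau-leap increments instead of simulating every event exactly.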
Reliability-Based Shape Optimization using Stochastic Finite Element Methods
DEFF Research Database (Denmark)
Enevoldsen, Ib; Sørensen, John Dalsgaard; Sigurdsson, G.
1991-01-01
stochastic fields (e.g. loads and material parameters such as Young's modulus and the Poisson ratio). In this case stochastic finite element techniques combined with FORM analysis can be used to obtain measures of the reliability of the structural systems, see Der Kiureghian & Ke (6) and Liu & Der Kiureghian...
Optimal management strategies in variable environments: Stochastic optimal control methods
Williams, B.K.
1985-01-01
Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both
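The finite state, finite action Markov decision process at the core of the study is solved by dynamic programming; a minimal value-iteration sketch on a made-up two-state, two-action "rest/harvest" problem (states, transitions, rewards and discount factor are all invented for illustration) looks like this:

```python
import numpy as np

# Toy finite-state, finite-action discounted MDP solved by value iteration.
P = np.array([  # P[a, s, s']: transition probabilities under each action
    [[0.9, 0.1], [0.4, 0.6]],   # action 0: "rest" (no defoliation)
    [[0.5, 0.5], [0.1, 0.9]],   # action 1: "harvest"
])
R = np.array([  # R[a, s]: expected immediate yield
    [0.0, 0.0],
    [1.0, 2.0],
])
gamma = 0.95    # discount factor on future yields

V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * P @ V        # Q[a, s] = R[a, s] + gamma * sum_s' P*V
    V_new = Q.max(axis=0)        # Bellman optimality update
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
policy = Q.argmax(axis=0)        # greedy action in each state
print(policy)
```

An infinite-horizon time-averaged objective, as used in part of the study, replaces the discounted Bellman update with a relative value iteration, but the overall structure is the same.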
Directory of Open Access Journals (Sweden)
Xiaolin Zhu
2014-01-01
This paper studies the T-stability of the Heun method and the balanced method for solving stochastic differential delay equations (SDDEs). Two T-stable conditions of the Heun method are obtained for two kinds of linear SDDEs. Moreover, two conditions under which the balanced method is T-stable are obtained for two kinds of linear SDDEs. Some numerical examples verify the proposed theoretical results.
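The simplest scheme in this family is Euler-Maruyama, sketched below for a linear SDDE with additive noise, dx(t) = [a x(t) + b x(t - tau)] dt + sigma dW(t); the Heun and balanced methods the paper analyzes add a predictor-corrector step and implicit damping terms, respectively. All coefficients here are illustrative.

```python
import numpy as np

# Euler-Maruyama for a linear SDDE with constant history x(t) = 1, t <= 0.
def em_sdde(a, b, sigma, tau, t_final, dt, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(round(t_final / dt))
    lag = int(round(tau / dt))           # delay measured in time steps
    x = np.ones((n_paths, steps + 1))    # column 0 is the initial value
    hist = np.ones(n_paths)              # constant pre-history
    for i in range(steps):
        delayed = x[:, i - lag] if i >= lag else hist
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x[:, i + 1] = x[:, i] + (a * x[:, i] + b * delayed) * dt + sigma * dw
    return x

paths = em_sdde(a=-2.0, b=0.5, sigma=0.3, tau=0.5, t_final=2.0,
                dt=0.01, n_paths=5000)
# With additive noise the path average should track the noise-free delay
# ODE, recovered here by rerunning the same scheme with sigma = 0.
det = em_sdde(a=-2.0, b=0.5, sigma=0.0, tau=0.5, t_final=2.0,
              dt=0.01, n_paths=1)
print(abs(paths[:, -1].mean() - det[0, -1]))
```

T-stability concerns whether such a scheme, driven by two-point distributed increments, preserves the asymptotic stability of the test equation; the mean-tracking check above is a weaker but easily verified sanity property.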
Chernyak, Vladimir Y.; Chertkov, Michael; Bierkens, Joris; Kappen, Hilbert J.
2014-01-01
In stochastic optimal control (SOC) one minimizes the average cost-to-go, that consists of the cost-of-control (amount of efforts), cost-of-space (where one wants the system to be) and the target cost (where one wants the system to arrive), for a system participating in forced and controlled Langevin dynamics. We extend the SOC problem by introducing an additional cost-of-dynamics, characterized by a vector potential. We propose derivation of the generalized gauge-invariant Hamilton-Jacobi-Bellman equation as a variation over density and current, suggest hydrodynamic interpretation and discuss examples, e.g., ergodic control of a particle-within-a-circle, illustrating non-equilibrium space-time complexity.
Stochastic Industrial Source Detection Using Lower Cost Methods
Thoma, E.; George, I. J.; Brantley, H.; Deshmukh, P.; Cansler, J.; Tang, W.
2017-12-01
Hazardous air pollutants (HAPs) can be emitted from a variety of sources in industrial facilities, energy production, and commercial operations. Stochastic industrial sources (SISs) represent a subcategory of emissions from fugitive leaks, variable area sources, malfunctioning processes, and improperly controlled operations. From the shared perspective of industries and communities, cost-effective detection of mitigable SIS emissions can yield benefits such as safer working environments, cost savings through reduced product loss, lower air shed pollutant impacts, and improved transparency and community relations. Methods for SIS detection can be categorized by their spatial regime of operation, ranging from component-level inspection to high-sensitivity kilometer-scale surveys. Methods can be temporally intensive (providing snap-shot measures) or sustained in both time-integrated and continuous forms. Each method category has demonstrated utility; however, broad adoption (or routine use) has thus far been limited by cost and implementation viability. Described here is a subset of SIS methods explored by the U.S. EPA's next generation emission measurement (NGEM) program that focuses on lower cost methods and models. An emerging systems approach that combines multiple forms to help compensate for reduced performance factors of lower cost systems is discussed. A case study of a multi-day HAP emission event observed by a combination of low cost sensors, open-path spectroscopy, and passive samplers is detailed. Early field results of a novel field gas chromatograph coupled with a fast HAP concentration sensor are described. Progress toward near real-time inverse source triangulation assisted by pre-modeled facility profiles using the Los Alamos Quick Urban & Industrial Complex (QUIC) model is discussed.
Vibrations And Deformations Of Moderately Thick Plates In Stochastic Finite Element Method
Directory of Open Access Journals (Sweden)
Grzywiński Maksym
2015-12-01
The paper deals with some chosen aspects of stochastic dynamical analysis of moderately thick plates. The discretization of the governing equations is described by the finite element method. The main aim of the study is to provide the generalized stochastic perturbation technique based on a classical Taylor expansion with a single random variable.
Stochastic Unit Commitment via Progressive Hedging - Extensive Analysis of Solution Methods
DEFF Research Database (Denmark)
Ordoudis, Christos; Pinson, Pierre; Zugno, Marco
2015-01-01
Owing to the massive deployment of renewable power production units over the last couple of decades, the use of stochastic optimization methods to solve the unit commitment problem has gained increasing attention. Solving stochastic unit commitment problems in large-scale power systems requires h...
Nonperturbative stochastic method for driven spin-boson model
Orth, Peter P.; Imambekov, Adilet; Le Hur, Karyn
2013-01-01
We introduce and apply a numerically exact method for investigating the real-time dissipative dynamics of quantum impurities embedded in a macroscopic environment beyond the weak-coupling limit. We focus on the spin-boson Hamiltonian that describes a two-level system interacting with a bosonic bath of harmonic oscillators. This model is archetypal for investigating dissipation in quantum systems, and tunable experimental realizations exist in mesoscopic and cold-atom systems. It finds abundant applications in physics ranging from the study of decoherence in quantum computing and quantum optics to extended dynamical mean-field theory. Starting from the real-time Feynman-Vernon path integral, we derive an exact stochastic Schrödinger equation that allows us to compute the full spin density matrix and spin-spin correlation functions beyond weak coupling. We greatly extend our earlier work [P. P. Orth, A. Imambekov, and K. Le Hur, Phys. Rev. A 82, 032118 (2010)] by fleshing out the core concepts of the method and by presenting a number of interesting applications. Methodologically, we present an analogy between the dissipative dynamics of a quantum spin and that of a classical spin in a random magnetic field. This analogy is used to recover the well-known noninteracting-blip approximation in the weak-coupling limit. We explain in detail how to compute spin-spin autocorrelation functions. As interesting applications of our method, we explore the non-Markovian effects of the initial spin-bath preparation on the dynamics of the coherence σx(t) and of σz(t) under a Landau-Zener sweep of the bias field. We also compute to a high precision the asymptotic long-time dynamics of σz(t) without bias and demonstrate the wide applicability of our approach by calculating the spin dynamics at nonzero bias and different temperatures.
A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data
Liang, Faming; Cheng, Yichen; Song, Qifan; Park, Jincheol; Yang, Ping
2013-01-01
large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate
Stochastic fractional differential equations: Modeling, method and analysis
International Nuclear Information System (INIS)
Pedjeu, Jean-C.; Ladde, Gangaram S.
2012-01-01
By introducing a concept of dynamic process operating under multi-time scales in sciences and engineering, a mathematical model described by a system of multi-time scale stochastic differential equations is formulated. The classical Picard–Lindelöf successive approximations scheme is applied to the model validation problem, namely, existence and uniqueness of the solution process. Naturally, this leads to the problem of finding closed form solutions of both linear and nonlinear multi-time scale stochastic differential equations of Itô–Doob type. Finally, to illustrate the scope of ideas and presented results, multi-time scale stochastic models for ecological and epidemiological processes in population dynamics are outlined.
CISM course on stochastic methods in fluid mechanics
Chibbaro, Sergio
2013-01-01
Since their first introduction in natural sciences through the work of Einstein on Brownian motion in 1905 and further works, in particular by Langevin, Smoluchowski and others, stochastic processes have been used in several areas of science and technology. For example, they have been applied in chemical studies, or in fluid turbulence and for combustion and reactive flows. The articles in this book provide a general and unified framework in which stochastic processes are presented as modeling tools for various issues in engineering, physics and chemistry, with particular focus on fluid mechan
International Nuclear Information System (INIS)
Song Lina; Zhang Hongqing
2007-01-01
In this work, by means of a generalized method and symbolic computation, we extend the Jacobi elliptic function rational expansion method to uniformly construct a series of stochastic wave solutions for stochastic evolution equations. To illustrate the effectiveness of our method, we take the (2+1)-dimensional stochastic dispersive long wave system as an example. We not only obtain some known solutions, but also construct some new rational formal stochastic Jacobi elliptic function solutions.
Research on stochastic power-flow study methods. Final report
Energy Technology Data Exchange (ETDEWEB)
Heydt, G. T. [ed.
1981-01-01
A general algorithm to determine the effects of uncertainty in bus load and generation on the output of conventional power flow analysis is presented. The use of statistical moments is presented and developed as a means for representing the stochastic process. Statistical moments are used to describe the uncertainties, and facilitate the calculations of single and multivariate probability density functions of input and output variables. The transformation of the uncertainty through the power flow equations is made by the expansion of the node equations in a multivariate Taylor series about an expected operating point. The series is truncated after the second order terms. Since the power flow equations are nonlinear, the expected values of output quantities are in general not the solution to the conventional load flow problem using expected values of input quantities. The second order transformation offers a correction vector and allows the consideration of larger uncertainties which have caused significant error in the current linear transformation algorithms. Voltage-controlled buses are included with consideration of upper and lower limits. The finite reactive power available at generation sites, and fixed ranges of transformer tap movement, may have a significant effect on voltage and line power flow statistics. A method is given which considers limitation constraints in the evaluation of all output quantities. The bus voltages, line power flows, transformer taps, and generator reactive power requirements are described by their statistical moments. Their values are expressed in terms of the probability that they are above or below specified limits, and their expected values given that they do fall outside the limits. Thus the algorithm supplies information about severity of overload as well as probability of occurrence. An example is given for an eleven-bus system, evaluating each quantity separately. The results are compared with Monte Carlo simulation.
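The report's central observation, that for nonlinear equations the expected output is not the output at the expected input, and that a second-order Taylor term supplies the correction, can be illustrated on a scalar stand-in f(x) = x² (not an actual power-flow equation; the input distribution is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, var = 1.0, 0.04
x = rng.normal(mu, np.sqrt(var), 200_000)   # uncertain "input"

f = lambda v: v ** 2                 # nonlinear stand-in for a node equation
mc_mean = f(x).mean()                # Monte Carlo reference for E[f(X)]
linear = f(mu)                       # first-order (linearized) estimate
second = f(mu) + 0.5 * 2.0 * var     # add 0.5 * f''(mu) * var correction

# For f(x) = x^2 the exact answer is mu^2 + var = 1.04, which the
# second-order estimate matches while the linear one misses by var.
print(mc_mean, linear, second)
```

In the report the same correction vector is built from the second derivatives of the full multivariate node equations, and the resulting moments are validated, as here, against Monte Carlo simulation.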
Ground state of the electron gas by a stochastic method
International Nuclear Information System (INIS)
Ceperley, D.M.; Alder, B.J.
1980-05-01
An exact stochastic simulation of the Schroedinger equation for charged bosons and fermions was used to calculate the correlation energies, to locate the transitions to their respective crystal phases at zero temperature within 10%, and to establish the stability at intermediate densities of a ferromagnetic fluid of electrons.
The adjoint variational nodal method
International Nuclear Information System (INIS)
Laurin-Kovitz, K.; Lewis, E.E.
1993-01-01
The widespread use of nodal methods for reactor core calculations in both diffusion and transport approximations has created a demand for the corresponding adjoint solutions as a prerequisite for performing perturbation calculations. With some computational methods, however, the solution of the adjoint problem presents a difficulty; the physical adjoint obtained by discretizing the adjoint equation is not the same as the mathematical adjoint obtained by taking the transpose of the coefficient matrix, which results from the discretization of the forward equation. This difficulty arises, in particular, when interface current nodal methods based on quasi-one-dimensional solution of the diffusion or transport equation are employed. The mathematical adjoint is needed to perform perturbation calculations. The utilization of existing nodal computational algorithms, however, requires the physical adjoint. As a result, similarity transforms or related techniques must be utilized to relate physical and mathematical adjoints. Thus far, such techniques have been developed only for diffusion theory
Martin-Ruiz, Carmen; Saretzki, Gabriele; Petrie, Joanne; Ladhoff, Juliane; Jeyapalan, Jessie; Wei, Wenyi; Sedivy, John; von Zglinicki, Thomas
2004-04-23
The replicative life span of human fibroblasts is heterogeneous, with a fraction of cells senescing at every population doubling. To find out whether this heterogeneity is due to premature senescence, i.e. driven by a nontelomeric mechanism, fibroblasts with a senescent phenotype were isolated from growing cultures and clones by flow cytometry. These senescent cells had shorter telomeres than their cycling counterparts at all population doubling levels and both in mass cultures and in individual subclones, indicating heterogeneity in the rate of telomere shortening. Ectopic expression of telomerase stabilized telomere length in the majority of cells and rescued them from early senescence, suggesting a causal role of telomere shortening. Under standard cell culture conditions, there was a minor fraction of cells that showed a senescent phenotype and short telomeres despite active telomerase. This fraction increased under chronic mild oxidative stress, which is known to accelerate telomere shortening. It is possible that even high telomerase activity cannot fully compensate for telomere shortening in all cells. The data show that heterogeneity of the human fibroblast replicative life span can be caused by significant stochastic cell-to-cell variation in telomere shortening.
A stochastic collocation method for the second order wave equation with a discontinuous random speed
Motamed, Mohammad; Nobile, Fabio; Tempone, Raul
2012-01-01
In this paper we propose and analyze a stochastic collocation method for solving the second order wave equation with a random wave speed and subjected to deterministic boundary and initial conditions. The speed is piecewise smooth in the physical
Variational method for integrating radial gradient field
Legarda-Saenz, Ricardo; Brito-Loeza, Carlos; Rivera, Mariano; Espinosa-Romero, Arturo
2014-12-01
We propose a variational method for integrating information obtained from a circular fringe pattern. The proposed method is a suitable choice for objects with radial symmetry. First, we analyze the information contained in the fringe pattern captured by the experimental setup and then formulate the problem of recovering the wavefront using techniques from the calculus of variations. The performance of the method is demonstrated by numerical experiments with both synthetic and real data.
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
Stochastic methods for the description of multiparticle production
International Nuclear Information System (INIS)
Carruthers, P.
1984-01-01
Dynamical questions in the evolution of excited hadronic matter are reviewed, with emphasis on KNO scaling and its possible violation. It is suggested that the KNO distribution is described by a stochastic evolution of the Fokker-Planck type, related to the underlying field theory by coupled rate equations approximated by Langevin equations with noise. Refined correlation analysis of data, especially the use of intensity interferometry techniques, is recommended for data analysis. 26 references
High Weak Order Methods for Stochastic Differential Equations Based on Modified Equations
Abdulle, Assyr
2012-01-01
© 2012 Society for Industrial and Applied Mathematics. Inspired by recent advances in the theory of modified differential equations, we propose a new methodology for constructing numerical integrators with high weak order for the time integration of stochastic differential equations. This approach is illustrated with the construction of new methods of weak order two, in particular, semi-implicit integrators well suited for stiff (mean-square stable) stochastic problems, and implicit integrators that exactly conserve all quadratic first integrals of a stochastic dynamical system. Numerical examples confirm the theoretical results and show the versatility of our methodology.
International Nuclear Information System (INIS)
Yokose, Yoshio; Noguchi, So; Yamashita, Hideo
2002-01-01
Stochastic methods and deterministic methods are both used for the optimization of electromagnetic devices. Genetic Algorithms (GAs) are used as a stochastic method in multivariable designs, while the deterministic method is the gradient method, which exploits the sensitivity of the objective function. These two techniques have complementary benefits and drawbacks. In this paper, the characteristics of these techniques are described, and a hybrid technique in which the two methods are used together is evaluated. Finally, the results of applying each method to electromagnetic devices are compared. (Author)
Analysis of future nuclear power plants competitiveness with stochastic methods
International Nuclear Information System (INIS)
Feretic, D.; Tomsic, Z.
2004-01-01
To satisfy increased demand it is necessary to build new electrical power plants, which should meet the imposed acceptability criteria in an optimal way. The main criteria are the potential to supply the required energy, to supply this energy at minimal (or at least acceptable) cost, to satisfy licensing requirements, and to be acceptable to the public. The main competitors for unrestricted electricity production in the next few decades are fossil power plants (coal and gas) and nuclear power plants. New renewable power plants (solar, wind, biomass) are also important but, due to their limited energy supply potential and high costs, can only supplement the main generating units. Large hydropower plants would be competitive provided that suitable sites exist for their construction. The paper describes the application of a stochastic method for comparing economic parameters of future electrical power generating systems, including conventional and nuclear power plants. The method is applied to establish the competitive specific investment costs of future nuclear power plants when compared with combined-cycle gas-fired units combined with wind electricity generators, using best-estimate and optimistic input data. The basis for the economic comparison of the potential options is the plant-lifetime levelized electricity generating cost. The purpose is to assess the uncertainty of several key performance parameters and of the cost of electricity produced in coal-fired, gas-fired and nuclear power plants, developing the probability distribution of the levelized price of electricity from the different power plants, the cumulative probability of the levelized price of electricity for each technology, and the probability distribution of the cost difference between the technologies. The key parameters evaluated include the levelized electrical energy cost (USD/kWh), discount rate, interest rate for credit repayment, expected rate of fuel cost increase, plant investment cost, fuel cost, constant annual
Stability of numerical method for semi-linear stochastic pantograph differential equations
Directory of Open Access Journals (Sweden)
Yu Zhang
2016-01-01
As a particular class of stochastic delay differential equations, stochastic pantograph differential equations have been widely used in nonlinear dynamics, quantum mechanics, and electrodynamics. In this paper, we mainly study the stability of analytical and numerical solutions of semi-linear stochastic pantograph differential equations. Some suitable conditions for the mean-square stability of the analytical solution are obtained. We then prove the general mean-square stability of the exponential Euler method for the numerical solution of semi-linear stochastic pantograph differential equations; that is, if the analytical solution is stable, then the exponential Euler method applied to the system is mean-square stable for arbitrary step size h > 0. Numerical examples further illustrate the obtained theoretical results.
Weak Second Order Explicit Stabilized Methods for Stiff Stochastic Differential Equations
Abdulle, Assyr
2013-01-01
We introduce a new family of explicit integrators for stiff Itô stochastic differential equations (SDEs) of weak order two. These numerical methods belong to the class of one-step stabilized methods with extended stability domains and do not suffer from the step-size reduction faced by standard explicit methods. The family is based on the standard second-order orthogonal Runge-Kutta-Chebyshev (ROCK2) methods for deterministic problems. The convergence, mean-square, and asymptotic stability properties of the methods are analyzed. Numerical experiments, including applications to nonlinear SDEs and parabolic stochastic partial differential equations, are presented and confirm the theoretical results. © 2013 Society for Industrial and Applied Mathematics.
Population control methods in stochastic extinction and outbreak scenarios.
Segura, Juan; Hilker, Frank M; Franco, Daniel
2017-01-01
Adaptive limiter control (ALC) and adaptive threshold harvesting (ATH) are two related control methods that have been shown to stabilize fluctuating populations. Large variations in population abundance can threaten the constancy and the persistence stability of ecological populations, which may impede the success and efficiency of managing natural resources. Here, we consider population models that include biological mechanisms characteristic for causing extinctions on the one hand and pest outbreaks on the other hand. These models include Allee effects and the impact of natural enemies (as is typical of forest defoliating insects). We study the impacts of noise and different levels of biological parameters in three extinction and two outbreak scenarios. Our results show that ALC and ATH have an effect on extinction and outbreak risks only for sufficiently large control intensities. Moreover, there is a clear disparity between the two control methods: in the extinction scenarios, ALC can be effective and ATH can be counterproductive, whereas in the outbreak scenarios the situation is reversed, with ATH being effective and ALC being potentially counterproductive.
Stochastic Eulerian Lagrangian methods for fluid-structure interactions with thermal fluctuations
International Nuclear Information System (INIS)
Atzberger, Paul J.
2011-01-01
We present approaches for the study of fluid-structure interactions subject to thermal fluctuations. A mixed mechanical description is utilized combining Eulerian and Lagrangian reference frames. We establish general conditions for operators coupling these descriptions. Stochastic driving fields for the formalism are derived using principles from statistical mechanics. The stochastic differential equations of the formalism are found to exhibit significant stiffness in some physical regimes. To cope with this issue, we derive reduced stochastic differential equations for several physical regimes. We also present stochastic numerical methods for each regime to approximate the fluid-structure dynamics and to generate efficiently the required stochastic driving fields. To validate the methodology in each regime, we perform analysis of the invariant probability distribution of the stochastic dynamics of the fluid-structure formalism. We compare this analysis with results from statistical mechanics. To further demonstrate the applicability of the methodology, we perform computational studies for spherical particles having translational and rotational degrees of freedom. We compare these studies with results from fluid mechanics. The presented approach provides for fluid-structure systems a set of rather general computational methods for treating consistently structure mechanics, hydrodynamic coupling, and thermal fluctuations.
Directory of Open Access Journals (Sweden)
Chen Shi
2014-01-01
Subsynchronous oscillation (SSO) is usually caused by series compensation, power system stabilizers (PSS), high-voltage direct-current transmission (HVDC) and other power-electronic equipment, and can affect the safe operation of the generator shaft and even of the whole system. Identifying the modal parameters of SSO is therefore very important for adopting effective control strategies. Since the identification accuracy of traditional methods is not high enough, the stochastic subspace identification (SSI) method is proposed to improve the accuracy of subsynchronous oscillation modal identification. The stochastic subspace identification method was compared with two other methods on the IEEE subsynchronous oscillation benchmark model and the Xiang-Shang HVDC system model; the simulation results show that the stochastic subspace identification method has the advantages of high identification precision, high computational efficiency and strong noise immunity.
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2017-04-01
Machine learning (ML) is considered to be a promising approach to forecasting hydrological processes. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, whereas the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods and models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step-ahead forecasting properties of the methods. A total of 20 methods are used, 9 of which are ML methods. Twelve simulation experiments are performed, each of which uses 2 000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that no method is uniformly better or worse than the others. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts.
Heterogeneous treatment in the variational nodal method
International Nuclear Information System (INIS)
Fanning, T.H.
1995-01-01
The variational nodal transport method is reduced to its diffusion form and generalized for the treatment of heterogeneous nodes while maintaining nodal balances. Adapting variational methods to heterogeneous nodes requires the ability to integrate over a node with discontinuous cross sections. In this work, integrals are evaluated using composite Gaussian quadrature rules, which permit accurate integration while minimizing computing time. Allowing structure within a nodal solution scheme avoids some of the need for cross-section homogenization and more accurately defines the intra-nodal flux shape. Ideally, any desired heterogeneity can be constructed within the node; in practice, however, the finite set of basis functions limits the resolution to which fine detail can be defined within the node. Preliminary comparison tests show that the heterogeneous variational nodal method provides satisfactory results, even if some improvements are needed for very difficult configurations.
A variational synthesis nodal discrete ordinates method
International Nuclear Information System (INIS)
Favorite, J.A.; Stacey, W.M.
1999-01-01
A self-consistent nodal approximation method for computing discrete ordinates neutron flux distributions has been developed from a variational functional for neutron transport theory. The advantage of the new nodal method formulation is that it is self-consistent in its definition of the homogenized nodal parameters, the construction of the global nodal equations, and the reconstruction of the detailed flux distribution. The efficacy of the method is demonstrated by two-dimensional test problems
Approximation and inference methods for stochastic biochemical kinetics—a tutorial review
International Nuclear Information System (INIS)
Schnoerr, David; Grima, Ramon; Sanguinetti, Guido
2017-01-01
Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state-of-the-art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics. (topical review)
Fluctuating dynamics of nematic liquid crystals using the stochastic method of lines
Bhattacharjee, A. K.; Menon, Gautam I.; Adhikari, R.
2010-07-01
We construct Langevin equations describing the fluctuations of the tensor order parameter Qαβ in nematic liquid crystals by adding noise terms to time-dependent variational equations that follow from the Ginzburg-Landau-de Gennes free energy. The noise is required to preserve the symmetry and tracelessness of the tensor order parameter and must satisfy a fluctuation-dissipation relation at thermal equilibrium. We construct a noise with these properties in a basis of symmetric traceless matrices and show that the Langevin equations can be solved numerically in this basis using a stochastic version of the method of lines. The numerical method is validated by comparing equilibrium probability distributions, structure factors, and dynamic correlations obtained from these numerical solutions with analytic predictions. We demonstrate excellent agreement between numerics and theory. This methodology can be applied to the study of phenomena where fluctuations in both the magnitude and direction of nematic order are important, as for instance, in the nematic swarms which produce enhanced opalescence near the isotropic-nematic transition or the problem of nucleation of the nematic from the isotropic phase.
The response analysis of fractional-order stochastic system via generalized cell mapping method.
Wang, Liang; Xue, Lili; Sun, Chunyan; Yue, Xiaole; Xu, Wei
2018-01-01
This paper is concerned with the response of a fractional-order stochastic system. The short memory principle is introduced to ensure that the response of the system is a Markov process. The generalized cell mapping method is applied to display the global dynamics of the noise-free system, such as attractors, basins of attraction, basin boundary, saddle, and invariant manifolds. The stochastic generalized cell mapping method is employed to obtain the evolutionary process of probability density functions of the response. The fractional-order ϕ^6 oscillator and the fractional-order smooth and discontinuous oscillator are taken as examples to give the implementations of our strategies. Studies have shown that the evolutionary direction of the probability density function of the fractional-order stochastic system is consistent with the unstable manifold. The effectiveness of the method is confirmed using Monte Carlo results.
A stochastic Galerkin method for the Euler equations with Roe variable transformation
Pettersson, Per; Iaccarino, Gianluca; Nordström, Jan
2014-01-01
The Euler equations subject to uncertainty in the initial and boundary conditions are investigated via the stochastic Galerkin approach. We present a new fully intrusive method based on a variable transformation of the continuous equations. Roe variables are employed to get quadratic dependence in the flux function and a well-defined Roe average matrix that can be determined without matrix inversion. In previous formulations based on generalized polynomial chaos expansion of the physical variables, the need to introduce stochastic expansions of inverse quantities, or square roots of stochastic quantities of interest, adds to the number of possible different ways to approximate the original stochastic problem. We present a method where the square roots occur in the choice of variables, resulting in an unambiguous problem formulation. The Roe formulation saves computational cost compared to the formulation based on expansion of conservative variables. Moreover, the Roe formulation is more robust and can handle cases of supersonic flow, for which the conservative variable formulation fails to produce a bounded solution. For certain stochastic basis functions, the proposed method can be made more effective and well-conditioned. This leads to increased robustness for both choices of variables. We use a multi-wavelet basis that can be chosen to include a large number of resolution levels to handle more extreme cases (e.g. strong discontinuities) in a robust way. For smooth cases, the order of the polynomial representation can be increased for increased accuracy. © 2013 Elsevier Inc.
The variational cellular method - the code implementation
International Nuclear Information System (INIS)
Rosato, A.; Lima, M.A.P.
1980-12-01
The process to determine the potential energy curve for diatomic molecules by the Variational Cellular Method is discussed. An analysis of the determination of the electronic eigenenergies and the electrostatic energy of these molecules is made. An explanation of the input data and their meaning is also presented. (Author) [pt
Variational method for lattice spectroscopy with ghosts
International Nuclear Information System (INIS)
Burch, Tommy; Hagen, Christian; Gattringer, Christof; Glozman, Leonid Ya.; Lang, C.B.
2006-01-01
We discuss the variational method used in lattice spectroscopy calculations. In particular we address the role of ghost contributions which appear in quenched or partially quenched simulations and have a nonstandard euclidean time dependence. We show that the ghosts can be separated from the physical states. Our result is illustrated with numerical data for the scalar meson
Stochastic Perron's method and elementary strategies for zero-sum differential games
Sîrbu, Mihai
2013-01-01
We develop here the Stochastic Perron Method in the framework of two-player zero-sum differential games. We consider the formulation of the game where both players play, symmetrically, feed-back strategies (as in [CR09] or [PZ12]) as opposed to the Elliott-Kalton formulation prevalent in the literature. The class of feed-back strategies we use is carefully chosen so that the state equation admits strong solutions and the technicalities involved in the Stochastic Perron Method carry through in...
Kernel methods and flexible inference for complex stochastic dynamics
Capobianco, Enrico
2008-07-01
Approximation theory suggests that series expansions and projections represent standard tools for random process applications from both numerical and statistical standpoints. Such instruments emphasize the role of both sparsity and smoothness for compression purposes, the decorrelation power achieved in the expansion coefficients space compared to the signal space, and the reproducing kernel property when some special conditions are met. We consider these three aspects central to the discussion in this paper, and attempt to analyze the characteristics of some known approximation instruments employed in a complex application domain such as financial market time series. Volatility models are often built ad hoc, parametrically and through very sophisticated methodologies. But they can hardly deal with stochastic processes with regard to non-Gaussianity, covariance non-stationarity or complex dependence without paying a big price in terms of either model mis-specification or computational efficiency. It is thus a good idea to look at other more flexible inference tools; hence the strategy of combining greedy approximation and space dimensionality reduction techniques, which are less dependent on distributional assumptions and more targeted to achieve computationally efficient performances. Advantages and limitations of their use will be evaluated by looking at algorithmic and model building strategies, and by reporting statistical diagnostics.
Bayesian inference method for stochastic damage accumulation modeling
International Nuclear Information System (INIS)
Jiang, Xiaomo; Yuan, Yong; Liu, Xian
2013-01-01
Damage-accumulation-based reliability models play an increasingly important role in the successful realization of condition-based maintenance for complicated engineering systems. This paper develops a Bayesian framework to establish a stochastic damage accumulation model from historical inspection data, considering data uncertainty. A proportional hazards modeling technique is developed to model the nonlinear effect of multiple influencing factors on system reliability. Unlike other hazard modeling techniques, such as the normal linear regression model, the approach does not require any distribution assumption for the hazard model and can be applied to a wide variety of distribution models. A Bayesian network is created to represent the nonlinear proportional hazards model and to estimate model parameters by Bayesian inference with Markov chain Monte Carlo simulation. Both qualitative and quantitative approaches are developed to assess the validity of the established damage accumulation model. The Anderson–Darling goodness-of-fit test is employed to perform the normality test, and the Box–Cox transformation approach is utilized to convert non-normal data into a normal distribution for hypothesis testing in quantitative model validation. The methodology is illustrated with seepage data collected from real-world subway tunnels.
International Nuclear Information System (INIS)
Safwat, Akmal; Bentzen, Soeren M.; Turesson, Ingela; Hendry, Jolyon H.
2002-01-01
Background: The large patient-to-patient variability in the grade of normal tissue injury after a standard course of radiotherapy is well established clinically. A better understanding of this individual variation may provide valuable insights into the pathogenesis of radiation damage and the prospects of predicting the outcome. Purpose: To estimate the relative importance of the stochastic vs. patient-related components of variability in the expression of radiation-induced normal tissue damage. Methods and Materials: The study data were selected from the dose fractionation studies of Turesson in Gothenburg. Patients treated with bilateral internal mammary fields, who completed at least 10 years of follow-up, were included. The material included 22 different fractionation schedules (11 on each side). Telangiectasia was graded on an arbitrary 6-point scale using clinical photographs of the irradiated fields. For each field, in each patient, a curve showing the grade of telangiectasia as a function of time was constructed. A measure of radioresponsiveness was obtained from the difference between the area under the curve (AUC) for a specific field in an individual patient minus the mean AUC of fields receiving the same dose fractionation schedule. As a confirmatory procedure, the same analysis was repeated with a weighted area under the curve (WAUC) approach, in which the time spent at or above each of the 5 nonzero grades was calculated for each field in each patient. These times were used as explanatory variables in a linear regression analysis of biological equivalent dose to establish statistically the weight of each grade providing the optimal relationship between dose and effect. Using these regression coefficients, the weighted area under the grade-time curve (WAUC) was estimated. Results: The AUC was significantly correlated with the isoeffective dose in 2-Gy fractions (ID2). An analysis of variance components, using the maximum likelihood method, showed that
International Nuclear Information System (INIS)
Sankaran, Sethuraman; Audet, Charles; Marsden, Alison L.
2010-01-01
Recent advances in coupling novel optimization methods to large-scale computing problems have opened the door to tackling a diverse set of physically realistic engineering design problems. A large computational overhead is associated with computing the cost function for most practical problems involving complex physical phenomena. Such problems are also plagued with uncertainties in a diverse set of parameters. We present a novel stochastic derivative-free optimization approach for tackling such problems. Our method extends the previously developed surrogate management framework (SMF) to allow for uncertainties in both simulation parameters and design variables. The stochastic collocation scheme is employed for stochastic variables whereas Kriging based surrogate functions are employed for the cost function. This approach is tested on four numerical optimization problems and is shown to have significant improvement in efficiency over traditional Monte-Carlo schemes. Problems with multiple probabilistic constraints are also discussed.
Han, Qun; Xu, Wei; Sun, Jian-Qiao
2016-09-01
The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.
Adaptive Finite Element Method Assisted by Stochastic Simulation of Chemical Systems
Cotter, Simon L.; Vejchodský, Tomáš; Erban, Radek
2013-01-01
Stochastic models of chemical systems are often analyzed by solving the corresponding Fokker-Planck equation, which is a drift-diffusion partial differential equation for the probability distribution function. Efficient numerical solution of the Fokker-Planck equation requires adaptive mesh refinements. In this paper, we present a mesh refinement approach which makes use of a stochastic simulation of the underlying chemical system. By observing the stochastic trajectory for a relatively short amount of time, the areas of the state space with nonnegligible probability density are identified. By refining the finite element mesh in these areas, and coarsening elsewhere, a suitable mesh is constructed and used for the computation of the stationary probability density. Numerical examples demonstrate that the presented method is competitive with existing a posteriori methods. © 2013 Society for Industrial and Applied Mathematics.
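The trajectory-guided refinement idea, run a short stochastic simulation, see where the probability mass lives, and refine there, can be illustrated with a minimal Gillespie SSA for a birth-death process; the rate constants and percentile thresholds below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def ssa_birth_death(k_birth=10.0, k_death=1.0, x0=0, t_end=200.0):
    """Gillespie SSA for 0 -> X (rate k_birth) and X -> 0 (rate k_death * x)."""
    t, x, visited = 0.0, x0, []
    while t < t_end:
        rates = np.array([k_birth, k_death * x])
        total = rates.sum()
        t += rng.exponential(1.0 / total)          # time to next reaction
        x += 1 if rng.random() < rates[0] / total else -1
        visited.append(x)
    return np.array(visited)

# Mark the region of state space with non-negligible probability:
# refine the finite element mesh there, coarsen elsewhere.
states = ssa_birth_death()
lo, hi = np.percentile(states, [0.5, 99.5])
refine_region = (int(lo), int(hi))   # roughly centred on k_birth / k_death = 10
```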
Data-driven remaining useful life prognosis techniques stochastic models, methods and applications
Si, Xiao-Sheng; Hu, Chang-Hua
2017-01-01
This book introduces data-driven remaining useful life prognosis techniques, and shows how to utilize condition monitoring data to predict the remaining useful life of stochastic degrading systems and to schedule maintenance and logistics plans. It is also the first book that describes the basic data-driven remaining useful life prognosis theory systematically and in detail. The emphasis of the book is on the stochastic models, methods and applications employed in remaining useful life prognosis. It includes a wealth of degradation monitoring experiment data, practical prognosis methods for remaining useful life in various cases, and a series of applications incorporating prognostic information into decision-making, such as maintenance-related decisions and ordering spare parts. It also highlights the latest advances in data-driven remaining useful life prognosis techniques, especially in the contexts of adaptive prognosis for linear stochastic degrading systems, nonlinear degradation modeling based pro...
Kemper, A; Nishino, T; Schadschneider, A; Zittartz, J
2003-01-01
We develop a new variant of the recently introduced stochastic transfer matrix DMRG which we call stochastic light-cone corner-transfer-matrix DMRG (LCTMRG). It is a numerical method to compute dynamic properties of one-dimensional stochastic processes. As suggested by its name, the LCTMRG is a modification of the corner-transfer-matrix DMRG, adjusted by an additional causality argument. As an example, two reaction-diffusion models, the diffusion-annihilation process and the branch-fusion process, are studied and compared with exact data and Monte Carlo simulations to estimate the capability and accuracy of the new method. The number of possible Trotter steps of more than 10^5 shows a considerable improvement over the old stochastic TMRG algorithm.
Stochastic processes, multiscale modeling, and numerical methods for computational cellular biology
2017-01-01
This book focuses on the modeling and mathematical analysis of stochastic dynamical systems along with their simulations. The collected chapters will review fundamental and current topics and approaches to dynamical systems in cellular biology. This text aims to develop improved mathematical and computational methods with which to study biological processes. At the scale of a single cell, stochasticity becomes important due to low copy numbers of biological molecules, such as mRNA and proteins that take part in biochemical reactions driving cellular processes. When trying to describe such biological processes, the traditional deterministic models are often inadequate, precisely because of these low copy numbers. This book presents stochastic models, which are necessary to account for small particle numbers and extrinsic noise sources. The complexity of these models depends upon whether the biochemical reactions are diffusion-limited or reaction-limited. In the former case, one needs to adopt the framework of s...
Treatment of constraints in the stochastic quantization method and covariantized Langevin equation
International Nuclear Information System (INIS)
Ikegami, Kenji; Kimura, Tadahiko; Mochizuki, Riuji
1993-01-01
We study the treatment of the constraints in the stochastic quantization method. We improve the treatment of the stochastic consistency condition proposed by Namiki et al. by suitably taking into account the Ito calculus. Then we obtain an improved Langevin equation and the Fokker-Planck equation which naturally leads to the correct path integral quantization of the constrained system as the stochastic equilibrium state. This treatment is applied to an O(N) non-linear σ model and it is shown that singular terms appearing in the improved Langevin equation cancel out the δ^n(0) divergences in one loop order. We also ascertain that the above Langevin equation, rewritten in terms of independent variables, is actually equivalent to the one in the general-coordinate transformation covariant and vielbein-rotation invariant formalism. (orig.)
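The core mechanism of stochastic quantization, a Langevin equation whose equilibrium distribution is the path-integral weight exp(-S), can be sketched in zero dimensions; the toy action and step sizes below are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def langevin_samples(dS, n_steps=200_000, dt=0.01, burn_in=20_000):
    """Euler-Maruyama integration of the Langevin equation
    dphi = -S'(phi) dtau + sqrt(2) dW; its equilibrium density is exp(-S(phi))."""
    noise = np.sqrt(2.0 * dt) * rng.standard_normal(n_steps)
    phi = 0.0
    out = np.empty(n_steps)
    for i in range(n_steps):
        phi += -dS(phi) * dt + noise[i]
        out[i] = phi
    return out[burn_in:]

# Toy action S(phi) = phi^2 / 2, so the equilibrium is the standard normal
samples = langevin_samples(dS=lambda p: p)
# the sample variance approaches <phi^2> = 1 under the path-integral measure
```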
Research on neutron noise analysis stochastic simulation method for α calculation
International Nuclear Information System (INIS)
Zhong Bin; Shen Huayun; She Ruogu; Zhu Shengdong; Xiao Gang
2014-01-01
The prompt decay constant α has significant application in the physical design and safety analysis of nuclear facilities. To overcome the difficulty of α calculation with the Monte-Carlo method, and to improve the precision, a new method based on neutron noise analysis technology is presented. This method employs stochastic simulation and the theory of neutron noise analysis. Firstly, the evolution of stochastic neutrons was simulated by a discrete-event Monte-Carlo method based on the theory of generalized semi-Markov processes; then the neutron noise in detectors was extracted from the neutron signal. Secondly, neutron noise analysis methods such as the Rossi-α method, the Feynman-α method, the zero-probability method, and the cross-correlation method were used to calculate the α value. All of the parameters used in the neutron noise analysis methods were calculated with an auto-adaptive algorithm. The α values from these methods accord with each other, the largest relative deviation being 7.9%, which proves the feasibility of the α calculation method based on neutron noise analysis and stochastic simulation. (authors)
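As a minimal illustration of one of the listed techniques, the Feynman-α (variance-to-mean) statistic can be computed from gated counts; the Poisson test data below stand in for a real detector signal:

```python
import numpy as np

rng = np.random.default_rng(2)

def feynman_y(counts):
    """Feynman-alpha statistic Y = Var(C)/Mean(C) - 1 for counts in equal time gates.
    A purely Poisson (uncorrelated) source gives Y -> 0; correlated fission chains give Y > 0."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean() - 1.0

# Uncorrelated detector: gate counts are Poisson, so Y should be close to zero
poisson_gates = rng.poisson(lam=20.0, size=50_000)
y_uncorrelated = feynman_y(poisson_gates)
```

In an actual α measurement, Y is evaluated as a function of gate width and fitted to its theoretical shape to extract the prompt decay constant.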
Awazu, Akinori; Tanabe, Takahiro; Kamitani, Mari; Tezuka, Ayumi; Nagano, Atsushi J
2018-05-29
Gene expression levels exhibit stochastic variations among genetically identical organisms under the same environmental conditions. In many recent transcriptome analyses based on RNA sequencing (RNA-seq), variations in gene expression levels among replicates were assumed to follow a negative binomial distribution, although the physiological basis of this assumption remains unclear. In this study, RNA-seq data were obtained from Arabidopsis thaliana under eight conditions (21-27 replicates), and the characteristics of gene-dependent empirical probability density function (ePDF) profiles of gene expression levels were analyzed. For A. thaliana and Saccharomyces cerevisiae, various types of ePDF of gene expression levels were obtained that were classified as Gaussian, power law-like containing a long tail, or intermediate. These ePDF profiles were well fitted with a Gauss-power mixing distribution function derived from a simple model of a stochastic transcriptional network containing a feedback loop. The fitting function suggested that gene expression levels with long-tailed ePDFs would be strongly influenced by feedback regulation. Furthermore, the features of gene expression levels are correlated with their functions, with the levels of essential genes tending to follow a Gaussian-like ePDF while those of genes encoding nucleic acid-binding proteins and transcription factors exhibit long-tailed ePDF.
DEFF Research Database (Denmark)
Debrabant, Kristian; Samaey, Giovanni; Zieliński, Przemysław
2017-01-01
We present and analyse a micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations with separation between the (fast) time-scale of individual trajectories and the (slow) time-scale of the macroscopic function of interest. The algorithm combines short...
The Stochastic Galerkin Method for Darcy Flow Problem with Log-Normal Random
Czech Academy of Sciences Publication Activity Database
Beres, Michal; Domesová, Simona
2017-01-01
Roč. 15, č. 2 (2017), s. 267-279 ISSN 1336-1376 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : Darcy flow * Gaussian random field * Karhunen-Loeve decomposition * polynomial chaos * Stochastic Galerkin method Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://advances.utc.sk/index.php/AEEE/article/view/2280
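The keywords above (Karhunen-Loève decomposition, log-normal Gaussian random field) can be illustrated with a discrete KL sampler; the grid size, correlation length and truncation order below are arbitrary illustrative values:

```python
import numpy as np

def kl_lognormal_field(n=100, corr_len=0.3, sigma=0.5, n_modes=10, seed=3):
    """Truncated Karhunen-Loeve expansion of a log-normal random field on [0, 1]:
    k(x) = exp(sum_m sqrt(lam_m) * phi_m(x) * xi_m), with xi_m ~ N(0, 1)."""
    x = np.linspace(0.0, 1.0, n)
    # exponential covariance C(x, y) = sigma^2 * exp(-|x - y| / corr_len)
    cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    lam, phi = np.linalg.eigh(cov)                       # ascending eigenvalues
    lam, phi = lam[::-1][:n_modes], phi[:, ::-1][:, :n_modes]
    xi = np.random.default_rng(seed).standard_normal(n_modes)
    gauss_field = phi @ (np.sqrt(np.maximum(lam, 0.0)) * xi)
    return x, np.exp(gauss_field)                        # log-normal sample

x, k = kl_lognormal_field()
# k is a strictly positive realisation, usable as a Darcy permeability coefficient
```

In a stochastic Galerkin treatment, the ξ_m would become the germ of a polynomial chaos expansion of the pressure solution.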
Quantum Monte Carlo diagonalization method as a variational calculation
International Nuclear Information System (INIS)
Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio.
1997-01-01
A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and diagonalization method. This method overcomes the limitation of the conventional shell model diagonalization and can extremely widen the feasibility of shell model calculations with realistic interactions for spectroscopic study of nuclear structure. (author)
Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner
International Nuclear Information System (INIS)
Subber, Waad; Sarkar, Abhijit
2012-01-01
For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates the information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustrations, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.
International Nuclear Information System (INIS)
Wu, Yuhu; Kumar, Madan; Shen, Tielong
2016-01-01
Highlights: • An in-cylinder pressure based measuring method for the RGF is derived. • A stochastic logical dynamical model is proposed to represent the transient behavior of the RGF. • The receding horizon controller is designed to reduce the variance of the RGF. • The effectiveness of the proposed model and control approach is validated by the experimental evidence. - Abstract: In four-stroke internal combustion engines, residual gas from the previous cycle is an important factor influencing the combustion quality of the current cycle, and the residual gas fraction (RGF) is a popular index to monitor the influence of residual gas. This paper investigates the cycle-to-cycle transient behavior of the RGF in the view of systems theory and proposes a multi-valued logic-based control strategy for attenuation of RGF fluctuation. First, an in-cylinder pressure sensor-based method for measuring the RGF is provided by following the physics of the in-cylinder transient state of four-stroke internal combustion engines. Then, the stochastic property of the RGF is examined based on statistical data obtained by conducting experiments on a full-scale gasoline engine test bench. Based on the observation of the examination, a stochastic logical transient model is proposed to represent the cycle-to-cycle transient behavior of the RGF, and with the model an optimal feedback control law, which targets the rejection of RGF fluctuation, is derived in the framework of stochastic logical system theory. Finally, experimental results are demonstrated to show the effectiveness of the proposed model and the control strategy.
Constant Jacobian Matrix-Based Stochastic Galerkin Method for Probabilistic Load Flow
Directory of Open Access Journals (Sweden)
Yingyun Sun
2016-03-01
Full Text Available An intrusive spectral method of probabilistic load flow (PLF is proposed in the paper, which can handle the uncertainties arising from renewable energy integration. Generalized polynomial chaos (gPC expansions of dependent random variables are utilized to build a spectral stochastic representation of PLF model. Instead of solving the coupled PLF model with a traditional, cumbersome method, a modified stochastic Galerkin (SG method is proposed based on the P-Q decoupling properties of load flow in power system. By introducing two pre-calculated constant sparse Jacobian matrices, the computational burden of the SG method is significantly reduced. Two cases, IEEE 14-bus and IEEE 118-bus systems, are used to verify the computation speed and efficiency of the proposed method.
Drift-Implicit Multi-Level Monte Carlo Tau-Leap Methods for Stochastic Reaction Networks
Ben Hammouda, Chiheb
2015-05-12
In biochemical systems, stochastic effects can be caused by the presence of small numbers of certain reactant molecules. In this setting, discrete state-space and stochastic simulation approaches were proved to be more relevant than continuous state-space and deterministic ones. These stochastic models constitute the theory of stochastic reaction networks (SRNs). Furthermore, in some cases, the dynamics of fast and slow time scales can be well separated, and this is characterized by what is called stiffness. For such problems, the existing discrete state-space stochastic path simulation methods, such as the stochastic simulation algorithm (SSA) and the explicit tau-leap method, can be very slow. Therefore, implicit tau-leap approximations were developed to improve the numerical stability and provide more efficient simulation algorithms for these systems. One of the interesting tasks for SRNs is to approximate the expected values of some observables of the process at a certain fixed time T. This can be achieved using Monte Carlo (MC) techniques. However, in a recent work, Anderson and Higham in 2013 proposed a more computationally efficient method which combines the multi-level Monte Carlo (MLMC) technique with explicit tau-leap schemes. In this MSc thesis, we propose a new fast stochastic algorithm, particularly designed to address stiff systems, for approximating the expected values of some observables of SRNs. In fact, we take advantage of the idea of MLMC techniques and the drift-implicit tau-leap approximation to construct a drift-implicit MLMC tau-leap estimator. In addition to accurately estimating the expected values of a given observable of SRNs at a final time T, our proposed estimator ensures numerical stability with a lower cost than the MLMC explicit tau-leap algorithm, for systems including simultaneously fast and slow species. The key contribution of our work is the coupling of two drift-implicit tau-leap paths, which is the basic brick for
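The drift-implicit tau-leap update at the heart of the thesis can be sketched for a single stiff decay channel X → 0 with propensity a(x) = c·x; this is a minimal toy, not the thesis's estimator, and the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def drift_implicit_tau_leap(x0=1000.0, c=50.0, tau=0.1, n_steps=20):
    """Drift-implicit tau-leap for the stiff decay reaction X -> 0, propensity a(x) = c*x.
    Scheme: x_new = x - [Poisson(a(x)*tau) - a(x)*tau] - a(x_new)*tau, solved for x_new
    (linear here, so the implicit solve is a single division)."""
    x = x0
    for _ in range(n_steps):
        noise = rng.poisson(c * x * tau) - c * x * tau   # centred Poisson fluctuation
        x = max((x - noise) / (1.0 + c * tau), 0.0)
    return x

# With c*tau = 5, an explicit leap can badly overshoot (even below zero); the implicit
# drift keeps the iteration stable and the state decays towards zero.
x_final = drift_implicit_tau_leap()
```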
International Nuclear Information System (INIS)
Lee, Kok Foong; Patterson, Robert I.A.; Wagner, Wolfgang; Kraft, Markus
2015-01-01
Graphical abstract: -- Highlights: •Problems concerning multi-compartment population balance equations are studied. •A class of fragmentation weight transfer functions is presented. •Three stochastic weighted algorithms are compared against the direct simulation algorithm. •The numerical errors of the stochastic solutions are assessed as a function of fragmentation rate. •The algorithms are applied to a multi-dimensional granulation model. -- Abstract: This paper introduces stochastic weighted particle algorithms for the solution of multi-compartment population balance equations. In particular, it presents a class of fragmentation weight transfer functions which are constructed such that the number of computational particles stays constant during fragmentation events. The weight transfer functions are constructed based on systems of weighted computational particles, and each of them leads to a stochastic particle algorithm for the numerical treatment of population balance equations. Besides fragmentation, the algorithms also consider physical processes such as coagulation and the exchange of mass with the surroundings. The numerical properties of the algorithms are compared to the direct simulation algorithm and an existing method for the fragmentation of weighted particles. It is found that the new algorithms show better numerical performance over the two existing methods, especially for systems with a significant amount of large particles and high fragmentation rates.
Energy Technology Data Exchange (ETDEWEB)
Lee, Kok Foong [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom); Patterson, Robert I.A.; Wagner, Wolfgang [Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstraße 39, 10117 Berlin (Germany); Kraft, Markus, E-mail: mk306@cam.ac.uk [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom); School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, Singapore, 637459 (Singapore)
2015-12-15
Bäck, Joakim
2010-09-17
Much attention has recently been devoted to the development of Stochastic Galerkin (SG) and Stochastic Collocation (SC) methods for uncertainty quantification. An open and relevant research topic is the comparison of these two methods. By introducing a suitable generalization of the classical sparse grid SC method, we are able to compare SG and SC on the same underlying multivariate polynomial space in terms of accuracy vs. computational work. The approximation spaces considered here include isotropic and anisotropic versions of Tensor Product (TP), Total Degree (TD), Hyperbolic Cross (HC) and Smolyak (SM) polynomials. Numerical results for linear elliptic SPDEs indicate a slight computational work advantage of isotropic SC over SG, with SC-SM and SG-TD being the best choices of approximation spaces for each method. Finally, numerical results corroborate the optimality of the theoretical estimate of anisotropy ratios introduced by the authors in a previous work for the construction of anisotropic approximation spaces. © 2011 Springer.
Rackauckas, Christopher; Nie, Qing
2017-01-01
Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.
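The key ingredient that lets RSwM reject a step "without losing information about the future Brownian path" is the Brownian bridge: conditioned on an already-sampled increment, an interior point has a known Gaussian law. A minimal sketch with our own variable names:

```python
import numpy as np

rng = np.random.default_rng(5)

def brownian_bridge_split(dW, dt, s):
    """Split a Brownian increment dW over a step dt at interior time s (0 < s < dt).
    Conditioned on dW, the first piece is N((s/dt)*dW, s*(dt-s)/dt)."""
    mean = (s / dt) * dW
    std = np.sqrt(s * (dt - s) / dt)
    dW1 = mean + std * rng.standard_normal()
    return dW1, dW - dW1          # the two halves sum exactly to the original increment

# When an adaptive SDE solver rejects a step, it can halve dt and reuse the
# already-sampled increment via the bridge instead of discarding path information.
dt = 0.1
dW = np.sqrt(dt) * rng.standard_normal()
dW1, dW2 = brownian_bridge_split(dW, dt, dt / 2)
```

RSwM additionally keeps such unused future increments on a stack so they are consumed, not resampled, when the solver advances again.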
Temporal super resolution using variational methods
DEFF Research Database (Denmark)
Keller, Sune Høgild; Lauze, Francois Bernard; Nielsen, Mads
2010-01-01
Temporal super resolution (TSR) is the ability to convert video from one frame rate to another and is as such a key functionality in modern video processing systems. A higher frame rate than what is recorded is desired for high frame rate displays, for super slow-motion, and for video/film format...... observed when watching video on large and bright displays where the motion of high contrast edges often seems jerky and unnatural. A novel motion compensated (MC) TSR algorithm using variational methods for both optical flow calculation and the actual new frame interpolation is presented. The flow...
Fast Numerical Methods for Stochastic Partial Differential Equations
2016-04-15
Particle Swarm Optimization (PSO) method. Inspired by the social behavior of the bird flocking or fish schooling, the particle swarm optimization (PSO...Weerasinghe, Hongmei Chi and Yanzhao Cao, Particle Swarm Optimization Simulation via Optimal Halton Sequences, accepted by Procedia Computer Science (2016...Optimization Simulation via Optimal Halton Sequences, accepted by Procedia Computer Science (2016). 2. Haiyan Tian, Hongmei Chi and Yanzhao Cao
The future of stochastic and upscaling methods in hydrogeology
Nœtinger, Benoît; Artus, Vincent; Zargar, Ghassem
2005-03-01
Geological formations are complex features resulting from geological, mechanical, and physico-chemical processes occurring over a very wide range of length scales and time scales. Transport phenomena ranging from the molecular scale to several hundreds of kilometers may influence the overall behavior of fluid flow in these formations. Heterogeneities that cover a large range of spatial scales play an essential role to channel fluid-flows, especially when they are coupled with non-linearities inherent to transport processes in porous media. These issues have considerable practical importance in groundwater management, and in the oil industry, particularly in solving new problems posed by projects concerned with the trapping of CO2 in the subsurface. In order to manage this complexity, one must be able to prioritize the respective influences of various relevant geological and physico-chemical phenomena occurring at several ranges of length and time scales as well as understand and use the increasingly rich and complex geostatistical models to provide realistic simulations of subsurface conditions. Multiscale simulation of fluid transport in these formations should help engineers to focus on the crucial phenomena that control the flow. This provides a natural framework to integrate data, to solve inverse problems involving large amounts of data, resulting in a reduction of the uncertainties of the subsurface description that must be evaluated. This allows in turn the making of more relevant practical decisions. In this paper, some perspectives on the development of upscaling approaches are presented, highlighting some recent multiscale concepts, discarding the fractured media case. Upscaling can be used as a useful framework to simultaneously manage scale-dependant problems, stochastic approaches and inverse problems. Actual and potential applications of upscaling to the elaboration of subsurface models constrained to observed data, and the management of uncertainties
A Newton-Based Extremum Seeking MPPT Method for Photovoltaic Systems with Stochastic Perturbations
Directory of Open Access Journals (Sweden)
Heng Li
2014-01-01
Full Text Available Microcontroller-based maximum power point tracking (MPPT) has been the most popular MPPT approach due to its high flexibility and efficiency in different photovoltaic systems. It is well known that PV systems typically operate under a range of uncertain environmental parameters and disturbances, which implies that MPPT controllers generally suffer from unknown stochastic perturbations. To address this issue, a novel Newton-based stochastic extremum seeking MPPT method is proposed. Treating stochastic perturbations as excitation signals, the proposed MPPT controller is inherently tolerant of stochastic perturbations. Unlike the conventional gradient-based extremum seeking MPPT algorithm, the convergence rate of the proposed controller is fully user-assignable rather than determined by the unknown power map. The stability and convergence of the proposed controller are rigorously proved. We further discuss the effects of partial shading and PV module ageing on the proposed controller. Numerical simulations and experiments are conducted to show the effectiveness of the proposed MPPT algorithm.
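A gradient-based extremum seeking loop (the simpler scheme that the paper improves on with its Newton variant) can be sketched on a hypothetical concave power map; the dither amplitude, gains and the power curve below are illustrative assumptions:

```python
import numpy as np

def extremum_seeking_mppt(power, v0=10.0, a=0.5, omega=5.0, k=2.0,
                          h=1.0, dt=0.01, n_steps=40_000):
    """Gradient-based extremum seeking: inject a sinusoidal dither, wash out the DC
    component of the measured power, demodulate to estimate dP/dV, climb the gradient."""
    v = v0
    p_filt = power(v0)                     # low-pass state tracking the DC power level
    for i in range(n_steps):
        t = i * dt
        p = power(v + a * np.sin(omega * t))
        p_filt += h * (p - p_filt) * dt                # washout (high-pass) filter
        grad_est = (p - p_filt) * np.sin(omega * t)    # demodulated gradient estimate
        v += k * grad_est * dt                         # gradient ascent towards the MPP
    return v

# Hypothetical concave PV power map with its maximum power point at V = 30
power = lambda v: 100.0 - 0.2 * (v - 30.0) ** 2
v_mpp = extremum_seeking_mppt(power)
```

The Newton variant replaces the fixed gain k by an estimate of the inverse Hessian, which is what makes the convergence rate independent of the unknown power map.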
Kucza, Witold
2013-07-25
Stochastic and deterministic simulations of dispersion in cylindrical channels on the Poiseuille flow have been presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods coupled with the genetic algorithm and the Levenberg-Marquardt optimization methods, respectively, have been applied for determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses that are available in literature. The best-fit results agree with each other and with experimental data thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
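The random-walk (stochastic) dispersion model can be sketched as particles advected by the Poiseuille profile u(r) = 2·u_mean·(1 − (r/R)²) with Gaussian diffusive steps; all parameter values below are illustrative, not fitted to any of the cited tracers:

```python
import numpy as np

rng = np.random.default_rng(6)

def random_walk_dispersion(n_part=2000, n_steps=2000, dt=1e-3,
                           D=1e-3, R=1.0, u_mean=5.0):
    """Random-walk model of solute dispersion in Poiseuille flow through a cylinder:
    axial advection by u(r) plus Gaussian diffusive steps, reflecting walls."""
    x = np.zeros(n_part)                      # axial positions
    r = R * np.sqrt(rng.random(n_part))       # uniform over the circular cross-section
    for _ in range(n_steps):
        x += 2.0 * u_mean * (1.0 - (r / R) ** 2) * dt          # advection
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_part)  # axial diffusion
        r += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_part)  # radial diffusion
        r = np.abs(r)                          # reflect at the axis
        r = np.where(r > R, 2.0 * R - r, r)    # reflect at the wall
    return x

x = random_walk_dispersion()
# velocity differences across the tube stretch the cloud far beyond pure diffusion
axial_variance = x.var()
```

Fitting the simulated response peak to a measured FIA response is what allows the diffusion coefficient to be estimated.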
Stochastic interpretation of magnetotelluric data, comparison of methods
Czech Academy of Sciences Publication Activity Database
Červ, Václav; Menvielle, M.; Pek, Josef
2007-01-01
Roč. 50, č. 1 (2007), s. 7-19 ISSN 1593-5213 R&D Projects: GA ČR GA205/04/0740; GA ČR GA205/04/0746; GA MŠk ME 677 Institutional research plan: CEZ:AV0Z30120515 Keywords : magnetotelluric method * inverse problem * controlled random search Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 0.298, year: 2007
Deviation-based spam-filtering method via stochastic approach
Lee, Daekyung; Lee, Mi Jin; Kim, Beom Jun
2018-03-01
In the presence of a huge number of possible purchase choices, ranks or ratings of items by others often play very important roles for a buyer making a final purchase decision. Perfectly objective rating is impossible to achieve, and we often use an average rating built on how previous buyers estimated the quality of the product. The problem with using a simple average rating is that it can easily be polluted by careless users whose evaluation of products cannot be trusted, and by malicious spammers who try to bias the rating result on purpose. In this letter we suggest how the trustworthiness of individual users can be systematically and quantitatively reflected to build a more reliable rating system. We compute a suitably defined reliability of each user based on the user's rating pattern for all products she evaluated. We call our proposed method the deviation-based ranking, since the statistical significance of each user's rating pattern with respect to the average rating pattern is the key ingredient. We find that our deviation-based ranking method outperforms existing methods in filtering out careless random evaluators as well as malicious spammers.
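A single-pass caricature of such a deviation-based scheme (not the letter's exact definition of reliability) weights each user by the mean squared deviation of their ratings from the item averages:

```python
import numpy as np

def reliability_weighted_ratings(R):
    """Sketch of a deviation-based rating scheme. R[u, i] is the rating of item i by
    user u (NaN = not rated). A user's weight decays with the mean squared deviation
    of their ratings from the plain item averages, so outliers count less."""
    item_mean = np.nanmean(R, axis=0)
    dev = np.nanmean((R - item_mean) ** 2, axis=1)      # per-user deviation score
    w = 1.0 / (1.0 + dev)                               # spammers get small weight
    W = np.where(np.isnan(R), 0.0, w[:, None])
    scores = np.nansum(np.nan_to_num(R) * W, axis=0) / W.sum(axis=0)
    return scores, w

# Three roughly agreeing honest users; user 3 is a spammer pushing item 1 over item 0
R = np.array([[5.0, 2.0], [5.0, 3.0], [4.0, 2.0], [1.0, 5.0]])
scores, w = reliability_weighted_ratings(R)
# the spammer's weight w[3] is the smallest, so scores stay near the honest consensus
```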
A stochastic physical-mathematical method for reactor kinetics analysis
International Nuclear Information System (INIS)
Velickovic, Lj.
1966-01-01
The developed theoretical model is concerned with a BF3 counter placed in the core of a low-power reactor (a few MW), where statistical neutron effects are most evident. Our experiments were somewhat different: the detector used was an ionization chamber with double sampling, in the ADC and in the time analyzer. The objective of this model was not to obtain precise numerical calculations, but to explain the method and the essentials of the correlation. By introducing all six groups of delayed neutrons, and possibly photoneutrons, the model could be improved to obtain more realistic results
Haji Ali, Abdul Lateef
2016-01-01
I discuss using single level and multilevel Monte Carlo methods to compute quantities of interests of a stochastic particle system in the mean-field. In this context, the stochastic particles follow a coupled system of Ito stochastic differential equations (SDEs). Moreover, this stochastic particle system converges to a stochastic mean-field limit as the number of particles tends to infinity. I start by recalling the results of applying different versions of Multilevel Monte Carlo (MLMC) for particle systems, both with respect to time steps and the number of particles and using a partitioning estimator. Next, I expand on these results by proposing the use of our recent Multi-index Monte Carlo method to obtain improved convergence rates.
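The coupling trick behind MLMC for SDEs, coarse and fine Euler paths driven by the same Brownian increments, can be sketched for geometric Brownian motion, whose exact mean x0·exp(μT) makes the estimate checkable (two fixed levels only; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def coupled_euler_pair(n_paths, m_fine, T, mu, sigma, x0):
    """Simulate fine (m_fine steps) and coarse (m_fine/2 steps) Euler paths of
    geometric Brownian motion driven by the SAME Brownian increments."""
    dt = T / m_fine
    xf = np.full(n_paths, x0)
    xc = np.full(n_paths, x0)
    for _ in range(m_fine // 2):
        dW1 = np.sqrt(dt) * rng.standard_normal(n_paths)
        dW2 = np.sqrt(dt) * rng.standard_normal(n_paths)
        xf *= 1.0 + mu * dt + sigma * dW1
        xf *= 1.0 + mu * dt + sigma * dW2
        xc *= 1.0 + mu * 2.0 * dt + sigma * (dW1 + dW2)   # coarse step reuses the sum
    return xf, xc

# Two-level MLMC: E[X_T] ~ mean(coarse) + mean(fine - coarse) on coupled paths
T, mu, sigma, x0 = 1.0, 0.05, 0.2, 1.0
_, x_level0 = coupled_euler_pair(20_000, 8, T, mu, sigma, x0)   # many cheap paths
xf, xc = coupled_euler_pair(5_000, 8, T, mu, sigma, x0)         # fewer coupled pairs
est = x_level0.mean() + (xf - xc).mean()
# exact value: x0 * exp(mu * T) = 1.0513...
```

Because fine and coarse paths share their noise, the correction term has a small variance and needs far fewer samples, which is the source of the MLMC cost savings; Multi-index Monte Carlo extends the same idea to several discretization parameters (e.g. time step and particle number) at once.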
A method for generating stochastic 3D tree models with Python in Autodesk Maya
Directory of Open Access Journals (Sweden)
Nemanja Stojanović
2016-12-01
This paper introduces a method for generating 3D tree models using L-systems with stochastic parameters and Perlin noise. The L-system is the most popular method for plant modeling, and Perlin noise is extensively used for generating highly detailed textures. Our approach is probabilistic: L-systems with a random choice of parameters can represent observed objects quite well, and they are used for modeling and generating realistic plants. Textures and normal maps are generated with combinations of Perlin noise, which makes these trees completely unique. The script for generating the trees, textures and normal maps is written in Python/PyMEL/NumPy in Autodesk Maya.
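The core of a stochastic L-system is a string-rewriting step in which each symbol is replaced according to a probability-weighted production rule. The sketch below shows only that rewriting engine (the axiom and rules are invented bracketed-L-system examples, not the paper's Maya/PyMEL script):

```python
import random

def rewrite(axiom, rules, iterations, rng):
    """Apply stochastic production rules to a string.

    `rules` maps a symbol to a list of (probability, replacement) options;
    probabilities for each symbol must sum to 1. Symbols without a rule
    are copied unchanged (e.g. the turtle commands [, ], +, -)."""
    s = axiom
    for _ in range(iterations):
        out = []
        for ch in s:
            options = rules.get(ch)
            if options is None:
                out.append(ch)
                continue
            r, cum = rng.random(), 0.0
            for p, repl in options:
                cum += p
                if r <= cum:
                    out.append(repl)
                    break
        s = "".join(out)
    return s

# hypothetical rule set: 'F' branches richly 60% of the time
rng = random.Random(42)
rules = {"F": [(0.6, "F[+F]F[-F]F"), (0.4, "F[+F]F")]}
tree = rewrite("F", rules, 3, rng)
```

The resulting string would then be interpreted by a turtle-graphics pass (and, in the paper's pipeline, turned into Maya geometry with Perlin-noise textures).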
Validation of internal dosimetry protocols based on stochastic method
International Nuclear Information System (INIS)
Mendes, Bruno M.; Fonseca, Telma C.F.; Almeida, Iassudara G.; Trindade, Bruno M.; Campos, Tarcisio P.R.
2015-01-01
Computational phantoms adapted to Monte Carlo codes have been applied successfully in radiation dosimetry. The NRI research group has been developing Internal Dosimetry Protocols (IDPs), addressing distinct methodologies, software and computational human simulators, to perform internal dosimetry, especially for new radiopharmaceuticals. Validation of the IDPs is critical to ensure the reliability of the simulation results. Intercomparison of data from the literature with those produced by our IDPs is a suitable validation method, and the aim of this study was to validate the IDPs following such a procedure. The Golem phantom was reconfigured to run on MCNP5. The specific absorbed fractions (SAF) for photons at 30, 100 and 1000 keV were simulated with the IDPs and compared with the reference values (RV) published by Zankl and Petoussi-Henss, 1998. The average difference between the SAF obtained in the IDP simulations and the RV was 2.3%. The largest SAF differences were found in situations involving low-energy photons at 30 keV: the adrenals and the thyroid, i.e. the lowest-mass organs, had the highest SAF discrepancies from the RV, 7.2% and 3.8%, respectively. The statistical differences between the SAF obtained with our IDPs and the reference values were considered acceptable at 30, 100 and 1000 keV. We believe that the main reason for the discrepancies found in the lower-mass organs was our source definition methodology; improving the spatial distribution of the source within the voxels may yield outputs more consistent with the reference values for these organs. (author)
Background field method for nonlinear σ-model in stochastic quantization
International Nuclear Information System (INIS)
Nakazawa, Naohito; Ennyu, Daiji
1988-01-01
We formulate the background field method for the nonlinear σ-model in stochastic quantization. We demonstrate a one-loop calculation for the two-dimensional nonlinear σ-model on a general Riemannian manifold based on our formulation. The formulation is consistent with the known results of ordinary quantization. As a simple application, we also analyse the multiplicative renormalization of the O(N) nonlinear σ-model. (orig.)
Moreno, Pablo; García, Marcelo
2016-01-01
The increase in energy consumption, especially among residential consumers, means that the electrical system must grow apace in infrastructure and installed capacity, and energy prices vary to meet these needs. This paper therefore applies a demand-response methodology using stochastic methods, such as Markov models, to optimize the energy consumption of residential users. It is necessary to involve customers in the electrical system because in this way the actual amount of electric charg...
DETECTION OF CHANGES OF THE SYSTEM TECHNICAL STATE USING STOCHASTIC SUBSPACE OBSERVATION METHOD
Directory of Open Access Journals (Sweden)
Andrzej Puchalski
2014-03-01
System diagnostics based on vibroacoustic signals, carried out by means of stochastic subspace methods, is undertaken in this paper. Subspace methods are based on tools of numerical linear algebra. The considered solutions belong to data-driven diagnostic methods, leading to the generation of residuals that allow failure recognition of elements and assemblies in machines and devices. In the paper, the diagnostic algorithm based on the subspace observation method is applied to the estimation of the valve system of a spark-ignition engine.
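A common data-driven residual of this kind compares the dominant subspace of a block Hankel matrix built from reference-condition output data against the one built from current data. The sketch below is a generic illustration of that idea (scalar signal, SVD-based subspace, principal-angle residual), not the paper's specific engine-diagnostic algorithm:

```python
import numpy as np

def block_hankel(y, rows):
    """Stack a 1-D output signal into a (rows x cols) Hankel matrix."""
    cols = len(y) - rows + 1
    return np.array([y[i:i + cols] for i in range(rows)])

def subspace_residual(y_ref, y_test, rows=20, order=2):
    """Largest principal angle between the dominant left singular
    subspaces of reference vs. test data; ~0 means no detected change."""
    U_ref, _, _ = np.linalg.svd(block_hankel(y_ref, rows), full_matrices=False)
    U_tst, _, _ = np.linalg.svd(block_hankel(y_test, rows), full_matrices=False)
    P, Q = U_ref[:, :order], U_tst[:, :order]
    s = np.linalg.svd(P.T @ Q, compute_uv=False)  # cosines of principal angles
    return float(np.arccos(np.clip(s.min(), -1.0, 1.0)))
```

A shift in a vibration signal's dominant frequency, for example, rotates the observable subspace and drives the residual away from zero.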
Multi-fidelity stochastic collocation method for computation of statistical moments
Energy Technology Data Exchange (ETDEWEB)
Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu [Department of Mathematics, University of Iowa, Iowa City, IA 52242 (United States); Linebarger, Erin M., E-mail: aerinline@sci.utah.edu [Department of Mathematics, University of Utah, Salt Lake City, UT 84112 (United States); Xiu, Dongbin, E-mail: xiu.16@osu.edu [Department of Mathematics, The Ohio State University, Columbus, OH 43210 (United States)
2017-07-15
We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems, in the presence of models with different fidelities. The method extends the multi-fidelity approximation method developed in . By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound of the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.
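The benefit of mixing model fidelities can be sketched with a control-variate style estimator of a mean. Note this is a simplification for illustration only, not the specific multi-fidelity collocation algorithm of the abstract; the model functions and sample sizes below are invented.

```python
import numpy as np

def multifidelity_mean(hi, lo, n_hi, n_lo, rng):
    """Estimate E[hi(Z)] using few expensive hi-samples plus many cheap
    lo-samples of a correlated low-fidelity model (control variate)."""
    z_hi = rng.standard_normal(n_hi)      # paired samples for both models
    z_lo = rng.standard_normal(n_lo)      # cheap samples, lo-model only
    hi_s, lo_s = hi(z_hi), lo(z_hi)
    # regression coefficient between the two models on the paired samples
    alpha = np.cov(hi_s, lo_s)[0, 1] / np.var(lo_s)
    return hi_s.mean() + alpha * (lo(z_lo).mean() - lo_s.mean())
```

Because the low-fidelity model absorbs most of the variance, the estimator converges with far fewer high-fidelity evaluations than plain Monte Carlo.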
International Nuclear Information System (INIS)
Nanty, Simon
2015-01-01
This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications have several common features. The first one is that the computer code inputs are functional and scalar variables, functional ones being dependent. The second feature is that the probability distribution of functional variables is known only through a sample of their realizations. The third feature, relative to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, we have proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology enables to both model the dependency between variables and their link to another variable, called co-variate, which could be, for instance, the output of the considered code. Then, we have developed an adaptation of a visualization tool for functional data, which enables to simultaneously visualize the uncertainties and features of dependent functional variables. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model or meta model, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build a learning basis for the meta model. Finally, a new approximation approach for expensive codes with functional outputs has been
Beck, Joakim; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul
2014-01-01
In this work we consider quasi-optimal versions of the Stochastic Galerkin method for solving linear elliptic PDEs with stochastic coefficients. In particular, we consider the case of a finite number N of random inputs and an analytic dependence of the solution of the PDE with respect to the parameters in a polydisc of the complex plane CN. We show that a quasi-optimal approximation is given by a Galerkin projection on a weighted (anisotropic) total degree space and prove a (sub)exponential convergence rate. As a specific application we consider a thermal conduction problem with non-overlapping inclusions of random conductivity. Numerical results show the sharpness of our estimates. © 2013 Elsevier Ltd. All rights reserved.
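The polynomial spaces mentioned above are defined by multi-index sets. A minimal sketch of building a (possibly weighted, i.e. anisotropic) total degree index set follows; the weights here are illustrative placeholders, not the quasi-optimal weights derived in the paper.

```python
from itertools import product
from math import comb

def total_degree_set(dim, w, weights=None):
    """Multi-index set {i : sum_k g_k * i_k <= w}.

    With unit weights this is the isotropic Total Degree space of order w;
    anisotropic weights g_k >= 1 shrink the set along less important
    directions (range(w+1) per dimension suffices when g_k >= 1)."""
    g = weights or [1.0] * dim
    return [i for i in product(range(w + 1), repeat=dim)
            if sum(gk * ik for gk, ik in zip(g, i)) <= w]
```

For the isotropic case the cardinality is the binomial coefficient C(w + dim, dim), which is how one checks such a construction.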
Empirical method to measure stochasticity and multifractality in nonlinear time series
Lin, Chih-Hao; Chang, Chia-Seng; Li, Sai-Ping
2013-12-01
An empirical algorithm is used here to study the stochastic and multifractal nature of nonlinear time series. A parameter can be defined to quantitatively measure the deviation of a time series from a Wiener process, so that the stochasticity of different time series can be compared. The local volatility of the time series under study can be constructed using this algorithm, and the multifractal structure of the time series can be analyzed by means of this local volatility. As an example, we employ this method to analyze financial time series from different stock markets. The result shows that while developed markets evolve very much like an Ito process, emerging markets are far from efficient. Differences in the multifractal structures and leverage effects between developed and emerging markets are discussed. The algorithm used here can be applied in a similar fashion to study time series of other complex systems.
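One simple way to quantify deviation from a Wiener process, in the spirit of the abstract (though not necessarily the authors' exact parameter), is the increment-scaling exponent: for a Wiener process, Var[x(t+τ) − x(t)] grows like τ, i.e. the exponent H below equals 0.5.

```python
import numpy as np

def scaling_exponent(x, lags=(1, 2, 4, 8, 16)):
    """Estimate H from Var[x(t+tau) - x(t)] ~ tau^(2H) by log-log
    regression over a few lags; H = 0.5 for a Wiener process."""
    v = [np.var(x[lag:] - x[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope / 2.0

# sanity check on a simulated random walk (discretized Wiener process)
rng = np.random.default_rng(1)
wiener = np.cumsum(rng.standard_normal(100_000))
H = scaling_exponent(wiener)
```

Persistent (trending) series push H above 0.5 and mean-reverting ones push it below, giving a single comparable stochasticity measure across markets.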
Hozman, J.; Tichý, T.
2016-12-01
The paper is based on the results from our recent research on multidimensional option pricing problems. We focus on European option valuation when the price movement of the underlying asset is driven by a stochastic volatility following a square root process proposed by Heston. The stochastic approach incorporates a new additional spatial variable into this model and makes it very robust, i.e. it provides a framework to price a variety of options that is closer to reality. The main topic is to present the numerical scheme arising from the concept of discontinuous Galerkin methods and applicable to the Heston option pricing model. The numerical results are presented on artificial benchmarks as well as on reference market data.
On the optimal polynomial approximation of stochastic PDEs by galerkin and collocation methods
Beck, Joakim; Tempone, Raul; Nobile, Fabio; Tamellini, Lorenzo
2012-01-01
In this work we focus on the numerical approximation of the solution u of a linear elliptic PDE with stochastic coefficients. The problem is rewritten as a parametric PDE and the functional dependence of the solution on the parameters is approximated by multivariate polynomials. We first consider the stochastic Galerkin method, and rely on sharp estimates for the decay of the Fourier coefficients of the spectral expansion of u on an orthogonal polynomial basis to build a sequence of polynomial subspaces that features better convergence properties, in terms of error versus number of degrees of freedom, than standard choices such as Total Degree or Tensor Product subspaces. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids. Numerical results show the effectiveness of the newly introduced polynomial spaces and sparse grids. © 2012 World Scientific Publishing Company.
Sokołowski, Damian; Kamiński, Marcin
2018-01-01
This study proposes a framework for determining the basic probabilistic characteristics of the orthotropic homogenized elastic properties of a periodic composite reinforced with ellipsoidal particles and having a high stiffness contrast between the reinforcement and the matrix. The homogenization problem, solved by the Iterative Stochastic Finite Element Method (ISFEM), is implemented according to stochastic perturbation, Monte Carlo simulation and semi-analytical techniques with the use of a cubic Representative Volume Element (RVE) of this composite containing a single particle. The given input Gaussian random variable is the Young's modulus of the matrix, while the 3D homogenization scheme is based on numerical determination of the strain energy of the RVE under uniform unit stretches carried out in the FEM system ABAQUS. An entire series of deterministic solutions with varying Young's modulus of the matrix serves for the Weighted Least Squares Method (WLSM) recovery of polynomial response functions, finally used in the stochastic Taylor expansions inherent to the ISFEM. A numerical example consists of High Density Polyurethane (HDPU) reinforced with a Carbon Black particle. It is numerically investigated (1) whether the resulting homogenized characteristics are also Gaussian and (2) how the uncertainty in the matrix Young's modulus affects the effective stiffness tensor components and their PDF (Probability Density Function).
Wang, Ting; Plecháč, Petr
2017-12-01
Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
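The Schlögl model named above is the standard single-species bistable benchmark. A plain Gillespie SSA for it is sketched below (serial, without the parallel replica acceleration that is the paper's contribution); the rate constants are a commonly quoted textbook parameter set, assumed here for illustration.

```python
import numpy as np

def schlogl_ssa(x0, t_end, rng,
                k1=3e-7, k2=1e-4, k3=1e-3, k4=3.5, A=1e5, B=2e5):
    """Gillespie SSA for the Schlogl model; returns the count of X at t_end.

    Reactions: A + 2X -> 3X, 3X -> A + 2X, B -> X, X -> B,
    with the pools A and B held constant."""
    t, x = 0.0, x0
    while t < t_end:
        a = np.array([
            k1 * A * x * (x - 1) / 2,        # A + 2X -> 3X   (x += 1)
            k2 * x * (x - 1) * (x - 2) / 6,  # 3X -> A + 2X   (x -= 1)
            k3 * B,                          # B -> X         (x += 1)
            k4 * x,                          # X -> B         (x -= 1)
        ])
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)       # time to next reaction
        r = rng.choice(4, p=a / a0)          # which reaction fires
        x += 1 if r in (0, 2) else -1
    return x
```

Direct runs like this rarely cross between the two metastable states, which is exactly the rare-transition sampling problem the parallel replica method addresses.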
An efficient parallel stochastic simulation method for analysis of nonviral gene delivery systems
Kuwahara, Hiroyuki
2011-01-01
Gene therapy has great potential to become an effective treatment for a wide variety of diseases. One of the main challenges in making gene therapy practical in clinical settings is the development of efficient and safe mechanisms to deliver foreign DNA molecules into the nucleus of target cells. Several computational and experimental studies have shown that the design process of synthetic gene transfer vectors can be greatly enhanced by computational modeling and simulation. This paper proposes a novel, effective parallelization of the stochastic simulation algorithm (SSA) for pharmacokinetic models that characterize the rate-limiting, multi-step processes of intracellular gene delivery. While efficient parallelizations of the SSA are still an open problem in a general setting, the proposed parallel simulation method is able to substantially accelerate the next-reaction selection scheme and the reaction update scheme in the SSA by exploiting and decomposing the structures of stochastic gene delivery models. This makes computationally intensive analyses such as parameter optimization and gene dosage control for specific cell types, gene vectors, and transgene expression stability substantially more practical than would otherwise be possible with the standard SSA. Here, we translated the nonviral gene delivery model based on mass-action kinetics by Varga et al. [Molecular Therapy, 4(5), 2001] into a more realistic model that captures intracellular fluctuations based on stochastic chemical kinetics, and as a case study we applied our parallel simulation to this stochastic model. Our results show that our simulation method is able to increase the efficiency of statistical analysis by at least 50% in various settings. © 2011 ACM.
International Nuclear Information System (INIS)
Wu, Fuke; Tian, Tianhai; Rawlings, James B.; Yin, George
2016-01-01
A frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two time scales, which yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions; consequently, the SSA remains computationally expensive. Because the chemical Langevin equations (CLEs) can effectively handle a large number of molecular species and reactions, this paper develops a reduction method based on the CLE and the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in stochastic chemical kinetics the CLE is seen as an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA, demonstrating that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.
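For context, the chemical Langevin equation referred to above replaces discrete reaction counts with an Ito SDE driven by one Brownian motion per reaction channel. A minimal Euler-Maruyama sketch for a toy birth-death network follows (an illustration of the CLE itself, not of the paper's averaging method; rates are invented):

```python
import numpy as np

def cle_step(x, nu, propensities, dt, rng):
    """One Euler-Maruyama step of the chemical Langevin equation:
    dX = sum_j nu_j a_j(X) dt + sum_j nu_j sqrt(a_j(X)) dW_j."""
    a = np.maximum(propensities(x), 0.0)   # clip so sqrt stays defined
    dW = np.sqrt(dt) * rng.standard_normal(len(a))
    return x + nu @ (a * dt) + nu @ (np.sqrt(a) * dW)

# toy network: 0 -> X at rate k = 10, X -> 0 at rate g*X with g = 0.1,
# so the stationary mean is k/g = 100
nu = np.array([[1.0, -1.0]])               # 1 species, 2 reaction channels
prop = lambda x: np.array([10.0, 0.1 * x[0]])

rng = np.random.default_rng(5)
x, trace = np.array([100.0]), []
for _ in range(20_000):
    x = cle_step(x, nu, prop, 0.05, rng)
    trace.append(x[0])
```

The long-run time average of such a trajectory should hover around the stationary mean, which is the kind of statistic the reduced (averaged) system is meant to reproduce cheaply.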
Stochastic Methods Applied to Power System Operations with Renewable Energy: A Review
Energy Technology Data Exchange (ETDEWEB)
Zhou, Z. [Argonne National Lab. (ANL), Argonne, IL (United States); Liu, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Electric Reliability Council of Texas (ERCOT), Austin, TX (United States); Botterud, A. [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-08-01
Renewable energy resources have been rapidly integrated into power systems in many parts of the world, contributing to a cleaner and more sustainable supply of electricity. Wind and solar resources also introduce new challenges for system operations and planning in terms of economics and reliability because of their variability and uncertainty. Operational strategies based on stochastic optimization have been developed recently to address these challenges. In general terms, these stochastic strategies either embed uncertainties into the scheduling formulations (e.g., the unit commitment [UC] problem) in probabilistic forms or develop more appropriate operating reserve strategies to take advantage of advanced forecasting techniques. Other approaches to address uncertainty are also proposed, where operational feasibility is ensured within an uncertainty set of forecasting intervals. In this report, a comprehensive review is conducted to present the state of the art through Spring 2015 in the area of stochastic methods applied to power system operations with high penetration of renewable energy. Chapters 1 and 2 give a brief introduction and overview of power system and electricity market operations, as well as the impact of renewable energy and how this impact is typically considered in modeling tools. Chapter 3 reviews relevant literature on operating reserves and specifically probabilistic methods to estimate the need for system reserve requirements. Chapter 4 looks at stochastic programming formulations of the UC and economic dispatch (ED) problems, highlighting benefits reported in the literature as well as recent industry developments. Chapter 5 briefly introduces alternative formulations of UC under uncertainty, such as robust, chance-constrained, and interval programming. Finally, in Chapter 6, we conclude with the main observations from our review and important directions for future work.
Analysis methods of stochastic transient electromagnetic processes in electric traction systems
Directory of Open Access Journals (Sweden)
T. M. Mishchenko
2013-04-01
Purpose. The essence and basic characteristics of methods for calculating transient electromagnetic processes in the elements and devices of nonlinear dynamic electric traction systems are developed, taking into account the stochastic changes of voltages and currents in the traction networks of the power supply subsystem and the power circuits of electric rolling stock. Methodology. Classical methods and methods of nonlinear electrical engineering are used in the research, together with methods of probability theory, in particular the theory of stationary ergodic and non-stationary stochastic processes. Findings. Using the above methods, an equivalent circuit and a system of nonlinear integro-differential equations for the electromagnetic state of a double-track inter-substation zone of an AC electric traction system are drawn up. The calculations yield the electric traction current distribution in the feeder zones. Originality. The paper is of scientific interest primarily for its methods, which allow the probabilistic character of the traction voltages and currents to be taken into account; it also develops efficient methods of nonlinear circuit analysis. Practical value. The practical value of the research lies in applying these methods to the analysis of electromagnetic and electric energy processes in the traction power supply system under high-speed train traffic.
Parzen, Emanuel
1962-01-01
Well-written and accessible, this classic introduction to stochastic processes and related mathematics is appropriate for advanced undergraduate students of mathematics with a knowledge of calculus and continuous probability theory. The treatment offers examples of the wide variety of empirical phenomena for which stochastic processes provide mathematical models, and it develops the methods of probability model-building. Chapter 1 presents precise definitions of the notions of a random variable and a stochastic process and introduces the Wiener and Poisson processes. Subsequent chapters examine
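The two processes introduced in Chapter 1 of the book described above are easy to simulate, which makes the definitions concrete. A minimal sketch (standard constructions, not taken from the book):

```python
import numpy as np

def wiener_path(T, n, rng):
    """Discretized Wiener process on [0, T]: W(0) = 0 and independent
    N(0, dt) increments, returned at n+1 grid points."""
    dt = T / n
    steps = np.sqrt(dt) * rng.standard_normal(n)
    return np.concatenate([[0.0], np.cumsum(steps)])

def poisson_times(rate, T, rng):
    """Event times of a homogeneous Poisson process on [0, T], built
    from i.i.d. Exponential(rate) inter-arrival gaps."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t > T:
            return np.array(times)
        times.append(t)
```

The expected number of Poisson events on [0, T] is rate × T, a quick sanity check on the construction.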
Digital Repository Service at National Institute of Oceanography (India)
Murty, T.V.R.; Rao, M.M.M.; Sadhuram, Y.
The data are revisited for objective mapping of the temperature fields using the Stochastic Inverse Method. Hourly reciprocal transmissions were carried out with a time lag of 30 minutes between each direction. From the multipath arrival patterns, significant peaks...
Model reduction method using variable-separation for stochastic saddle point problems
Jiang, Lijian; Li, Qiuqi
2018-02-01
In this paper, we consider a variable-separation (VS) method to solve stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor-product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of the technique is to construct a reduced-basis approximation of the solution of the SSP problems. The VS method attempts to obtain a low-rank separated representation of the solution of the SSP in a systematic enrichment manner; no iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems treated by regularization or penalty, we propose a more efficient variant, the variable-separation by penalty method, which avoids further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computational efficiency when the number of separated terms is large. We present three numerical examples of SSP problems to illustrate the performance of the proposed methods.
Energy Technology Data Exchange (ETDEWEB)
Cui, Jianbo, E-mail: jianbocui@lsec.cc.ac.cn [Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, Beijing, 100190 (China); Hong, Jialin, E-mail: hjl@lsec.cc.ac.cn [Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, Beijing, 100190 (China); Liu, Zhihui, E-mail: liuzhihui@lsec.cc.ac.cn [Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, Beijing, 100190 (China); Zhou, Weien, E-mail: weienzhou@nudt.edu.cn [College of Science, National University of Defense Technology, Changsha 410073 (China)
2017-08-01
We indicate that the nonlinear Schrödinger equation with white noise dispersion possesses stochastic symplectic and multi-symplectic structures. Based on these structures, we propose the stochastic symplectic and multi-symplectic methods, which preserve the continuous and discrete charge conservation laws, respectively. Moreover, we show that the proposed methods are convergent with temporal order one in probability. Numerical experiments are presented to verify our theoretical results.
International Nuclear Information System (INIS)
Cui, Jianbo; Hong, Jialin; Liu, Zhihui; Zhou, Weien
2017-01-01
We indicate that the nonlinear Schrödinger equation with white noise dispersion possesses stochastic symplectic and multi-symplectic structures. Based on these structures, we propose the stochastic symplectic and multi-symplectic methods, which preserve the continuous and discrete charge conservation laws, respectively. Moreover, we show that the proposed methods are convergent with temporal order one in probability. Numerical experiments are presented to verify our theoretical results.
A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data
Liang, Faming
2013-03-01
The Gaussian geostatistical model has been widely used in modeling of spatial data. However, it is challenging to implement computationally because it requires the inversion of a large covariance matrix, particularly when there is a large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and the current estimate of the parameters is updated accordingly under the framework of stochastic approximation. Since the proposed method makes use of only a small proportion of the data at each iteration, it avoids inverting large covariance matrices and is thus scalable to large datasets. The proposed method also leads to a general parameter estimation approach, maximum mean log-likelihood estimation, which includes the popular maximum (log-)likelihood estimation (MLE) approach as a special case and is expected to play an important role in analyzing large datasets. Under mild conditions, it is shown that the estimator resulting from the proposed method converges in probability to a set of parameter values of equivalent Gaussian probability measures, and that the estimator is asymptotically normally distributed. To the best of the authors' knowledge, the present study is the first one on asymptotic normality under infill asymptotics for general covariance functions. The proposed method is illustrated with large datasets, both simulated and real. Supplementary materials for this article are available online. © 2013 American Statistical Association.
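The iteration structure described above — draw a small resample, update the estimate with a decaying gain — can be shown on a deliberately trivial problem, estimating a plain mean instead of covariance parameters (so no large matrix inversion is needed; this is a caricature of the scheme, not the article's geostatistical estimator):

```python
import numpy as np

def resampling_sa_mean(data, n_iter=2000, batch=20, seed=0):
    """Robbins-Monro stochastic approximation driven by small resamples:
    each iteration draws a subsample of size `batch` and nudges the
    estimate toward its mean with gain 1/k, never scanning all the data."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    for k in range(1, n_iter + 1):
        sub = rng.choice(data, size=batch, replace=False)
        theta += (sub.mean() - theta) / k   # decaying-gain update
    return theta
```

In the article's setting, the analogous update uses the gradient of the subsample log-likelihood, so each iteration inverts only a batch-sized covariance matrix.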
Strelkov, S. A.; Sushkevich, T. A.; Maksakova, S. V.
2017-11-01
We discuss world-class Russian achievements in the theory of radiation transfer, including its polarization in natural media, and the current scientific potential developing in Russia, which provides an adequate methodological basis for theoretical and computational research on radiation processes and radiation fields in natural media using supercomputers and massive parallelism. A new version of the matrix transfer operator is proposed for solving problems of polarized radiation transfer in heterogeneous media by the method of influence functions, in which deterministic and stochastic methods can be combined.
Variational methods for chemical and nuclear reactions
International Nuclear Information System (INIS)
Crawford, O.H.
1977-01-01
All the variational functionals are derived which satisfy certain criteria of suitability for molecular and nuclear scattering, below the threshold energy for three-body breakup. The existence and uniqueness of solutions are proven. The most general suitable functional is specialized, by particular values of its parameters, to Kohn's tan(eta), Kato's cot(eta - theta), the inverse Kohn cot(eta), Kohn's S matrix, our S matrix, Lane and Robson's functional, and several new functionals, an infinite number of which are contained in the general expression. Four general ways of deriving algebraic methods from a given functional are discussed and illustrated with specific algebraic results. These include equations of Lane and Robson and of Kohn, the fundamental R matrix relation, and new equations. The relative configuration space is divided as in the Wigner R matrix theory, and trial wavefunctions are needed only for the region where all the particles are interacting. In addition, a version of the general functional is presented which does not require any division of space.
International Nuclear Information System (INIS)
Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J.
2015-01-01
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
Directory of Open Access Journals (Sweden)
Jinhong Noh
2016-04-01
Obstacle avoidance methods require knowledge of the distance between a mobile robot and the obstacles in its environment. However, in stochastic environments, distance determination is difficult because objects have position uncertainty. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should consider position uncertainty, computational cost and collision probability. The proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using a collision probability density threshold. Furthermore, it defines a minimum distance function to the boundary of the obstacle region with a Lagrange multiplier method. Finally, it computes the distance numerically. Simulations were executed in order to compare the performance of the distance determination methods. Our method demonstrated faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues pertaining to obstacle avoidance, such as low-accuracy sensors, environments with poor visibility or unpredictable obstacle motion.
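The density-threshold idea can be sketched numerically. Assuming a Gaussian position distribution for the obstacle, the thresholded region is a Mahalanobis ellipse; instead of the paper's Lagrange multiplier solve, this toy version samples the ellipse boundary densely and takes the minimum Euclidean distance (all parameter values are illustrative assumptions):

```python
import numpy as np

def obstacle_boundary(mean, cov, maha_radius, n=720):
    """Points on the density-threshold ellipse at a given Mahalanobis radius."""
    L = np.linalg.cholesky(cov)
    ang = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    circle = np.stack([np.cos(ang), np.sin(ang)])          # unit circle
    return (mean[:, None] + maha_radius * (L @ circle)).T  # (n, 2) boundary

def distance_to_obstacle(robot, mean, cov, maha_radius=2.0):
    """Min Euclidean distance from robot to the thresholded region (0 if inside)."""
    d = robot - mean
    if d @ np.linalg.solve(cov, d) <= maha_radius**2:
        return 0.0                                          # already colliding
    bnd = obstacle_boundary(mean, cov, maha_radius)
    return float(np.min(np.linalg.norm(bnd - robot, axis=1)))

mean = np.array([0.0, 0.0])
cov = np.diag([1.0, 0.25])        # assumed position-uncertainty covariance
robot = np.array([5.0, 0.0])
print(distance_to_obstacle(robot, mean, cov))   # ellipse reaches x = 2, so 3.0
```

Dense boundary sampling trades accuracy for simplicity; the Lagrange multiplier formulation in the paper finds the exact minimizer instead.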
Pang, Kar Mun; Jangi, Mehdi; Bai, X.-S.; Schramm, Jesper; Walther, Jens Honore
2016-01-01
The use of transported Probability Density Function (PDF) methods allows a single model to compute the autoignition, premixed mode and diffusion flame of diesel combustion under engine-like conditions [1,2]. The Lagrangian particle-based transported PDF models have been validated across a wide range of conditions [2,3]. Alternatively, the transported PDF model can also be formulated in the Eulerian framework [4]. The Eulerian PDF is commonly known as the Eulerian Stochastic Fields (ESF) model. ...
A moment-convergence method for stochastic analysis of biochemical reaction networks.
Zhang, Jiajun; Nie, Qing; Zhou, Tianshou
2016-05-21
Traditional moment-closure methods need to assume that the high-order cumulants of a probability distribution are approximately zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined in mathematics as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of the moment-closure methods. As such, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation for the master probability equation (MPE). In particular, the connection between low-order convergent moments and rate constants can be more easily derived in terms of explicit and analytical forms, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute steady-state or transient probability distributions, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in the sizes of rate constants. Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in modeling, analysis, and computation of complex biochemical networks with intrinsic noise.
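The notion of a convergent moment can be made concrete on a distribution whose generating function is known in closed form. Expanding the PGF G(z) = E[z^X] about z = 1 via z^n = sum_k C(n,k)(z-1)^k shows that the k-th Taylor coefficient is E[C(X,k)]; for a Poisson(lam) copy-number distribution (a stand-in for a simple birth-death network at steady state, not the paper's examples) this coefficient equals lam^k / k! exactly:

```python
import numpy as np
from math import comb, factorial

rng = np.random.default_rng(1)
lam = 2.0
samples = rng.poisson(lam, size=50_000)   # simulated molecule counts

# k-th Taylor coefficient of G(z) = E[z^X] about z = 1 is the
# "convergent moment" E[C(X, k)]; for Poisson(lam) it is lam^k / k!.
for k in range(4):
    ck_mc = np.mean([comb(int(v), k) for v in samples])
    print(f"k={k}: Monte Carlo {ck_mc:.3f} vs exact {lam**k / factorial(k):.3f}")
```

Unlike raw cumulants, these coefficients decay factorially here, which is the behavior the moment-convergence construction exploits.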
International Nuclear Information System (INIS)
Abedinia, O.; Amjady, N.; Shafie-khah, M.; Catalão, J.P.S.
2015-01-01
Highlights: • Presenting a Combinatorial Neural Network. • Suggesting a new stochastic search method. • Adapting the suggested method as a training mechanism. • Proposing a new forecast strategy. • Testing the proposed strategy on real-world electricity markets. - Abstract: Electricity price forecasts are key information for the successful operation of electricity market participants. However, the time series of electricity prices has nonlinear, non-stationary and volatile behaviour, so its forecast method should have high learning capability to extract the complex input/output mapping function of the electricity price. In this paper, a Combinatorial Neural Network (CNN) based forecasting engine is proposed to predict the future values of price data. The CNN-based forecasting engine is equipped with a new training mechanism for optimizing the weights of the CNN. This training mechanism is based on an efficient stochastic search method, a modified version of the chemical reaction optimization algorithm, giving high learning ability to the CNN. The proposed price forecast strategy is tested on the real-world electricity markets of Pennsylvania–New Jersey–Maryland (PJM) and mainland Spain, and its results are extensively compared with those obtained from several other forecast methods. These comparisons illustrate the effectiveness of the proposed strategy.
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing a multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of the high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the horizontal wind velocity field along the deck of a large-span bridge is simulated using the proposed methods with 2 and 3 elementary random variables. The numerical simulation reveals the usefulness of the dimension-reduction representation methods.
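The classical (non-dimension-reduced) spectral representation method that the unified formulation generalizes can be sketched in a few lines: a stationary Gaussian process is synthesized as a sum of cosines whose amplitudes follow an assumed one-sided power spectral density S(w) and whose phases are independent uniform random variables. The PSD, grids, and sample counts below are illustrative assumptions, and the plain cosine sum is used instead of the FFT acceleration mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

def srm_sample(t, omega, S, rng):
    """One realization of the classical cosine-series spectral representation."""
    dw = omega[1] - omega[0]
    amp = np.sqrt(2.0 * S * dw)                        # component amplitudes
    phi = rng.uniform(0.0, 2.0 * np.pi, omega.size)    # random phase angles
    return (amp[:, None] * np.cos(omega[:, None] * t + phi[:, None])).sum(axis=0)

omega = np.linspace(0.0, 10.0, 500)    # frequency grid
S = np.exp(-omega)                     # assumed one-sided target PSD
t = np.linspace(0.0, 50.0, 1000)

x = np.stack([srm_sample(t, omega, S, rng) for _ in range(100)])
print(x.var())   # should approach the integral of S(w) dw, roughly 1.0
```

The ensemble variance matching the integral of the PSD is the basic consistency check; the DR-SRM of the paper reduces the hundreds of random phases here to a handful of elementary random variables.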
A moment-convergence method for stochastic analysis of biochemical reaction networks
Energy Technology Data Exchange (ETDEWEB)
Zhang, Jiajun [School of Mathematics and Computational Science, Sun Yat-Sen University, Guangzhou 510275 (China); Nie, Qing [Department of Mathematics, University of California at Irvine, Irvine, California 92697 (United States); Zhou, Tianshou, E-mail: mcszhtsh@mail.sysu.edu.cn [School of Mathematics and Computational Science, Sun Yat-Sen University, Guangzhou 510275 (China); Guangdong Province Key Laboratory of Computational Science and School of Mathematics and Computational Science, Sun Yat-Sen University, Guangzhou 510275 (China)
2016-05-21
Traditional moment-closure methods need to assume that the high-order cumulants of a probability distribution are approximately zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined in mathematics as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of the moment-closure methods. As such, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation for the master probability equation (MPE). In particular, the connection between low-order convergent moments and rate constants can be more easily derived in terms of explicit and analytical forms, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute steady-state or transient probability distributions, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in the sizes of rate constants. Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in modeling, analysis, and computation of complex biochemical networks with intrinsic noise.
An equilibrium for frustrated quantum spin systems in the stochastic state selection method
International Nuclear Information System (INIS)
Munehisa, Tomo; Munehisa, Yasuko
2007-01-01
We develop a new method to calculate eigenvalues in frustrated quantum spin models. It is based on the stochastic state selection (SSS) method, which is an unconventional Monte Carlo technique that we have investigated in recent years. We observe that a kind of equilibrium is realized under some conditions when we repeatedly apply a Hamiltonian and a random choice operator, which is defined by stochastic variables in the SSS method, to a trial state. In this equilibrium, which we call the SSS equilibrium, we can evaluate the lowest eigenvalue of the Hamiltonian using the statistical average of the normalization factor of the generated state. The SSS equilibrium itself has already been observed in unfrustrated models. Our study in this paper shows that we can also see the equilibrium in frustrated models, with some restriction on the values of a parameter introduced in the SSS method. As a concrete example, we employ the spin-1/2 frustrated J1-J2 Heisenberg model on the square lattice. We present numerical results on the 20-, 32-, and 36-site systems, which demonstrate that statistical averages of the normalization factors reproduce the known exact eigenvalue to good precision. Finally, we apply the method to the 40-site system. Then we obtain the value of the lowest energy eigenvalue with an error of less than 0.2%.
Directory of Open Access Journals (Sweden)
Marino Luiz Eyerkaufer
2014-12-01
Traditionally, the process of estimating the quantitative predictions of the strategic plan through the budget starts from deterministic data, together with an analysis of factors in the internal and external environments. Decisions are often made from the budget data before the fact, which creates uncertainty as to the assertiveness of the forecasts. Alongside the traditional methods of preparing corporate budget forecasts, this study presents an application of stochastic methods in which probabilism is offered as an alternative for minimizing the uncertainties related to the assertiveness of the estimates. Through a practical application, it demonstrates the use of the Monte Carlo method in sales forecasting; at the same time, using the central limit theorem, it tests the probability that these sales forecasts materialize within intervals that meet the investors' expectations; finally, using an absorbing Markov chain, it demonstrates the overall performance of the system in terms of funds inflow and outflow. The study was limited to a basic application of stochastic methods to a hypothetical case which, however, allowed the conclusion that both methods, together or separately, can minimize the effects of uncertainty in budget forecasts.
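The Monte Carlo part of such a budget exercise fits in a few lines. This sketch assumes hypothetical distributions for unit price and sales volume (all parameters invented for illustration, not taken from the study), simulates revenue, and estimates the probability that revenue lands inside an investors' target interval:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical monthly revenue: unit price and sales volume both uncertain.
n = 100_000
price  = rng.normal(10.0, 1.0, n)        # assumed price distribution
volume = rng.normal(1000.0, 150.0, n)    # assumed volume distribution
revenue = price * volume

lo, hi = 9_000.0, 11_000.0               # investors' target interval (assumed)
p_in = np.mean((revenue >= lo) & (revenue <= hi))
print(f"mean revenue: {revenue.mean():,.0f}")
print(f"P({lo:,.0f} <= revenue <= {hi:,.0f}) = {p_in:.3f}")
```

With enough draws, the central limit theorem makes the simulated revenue distribution approximately normal, which is what licenses interval statements of the kind the study tests.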
Khan, Sami Ullah; Ali, Ishtiaq
2018-03-01
Explicit solutions to delay differential equations (DDEs) and stochastic delay differential equations (SDDEs) can rarely be obtained, so numerical methods are adopted to solve them. On the other hand, due to the unstable nature of both DDEs and SDDEs, numerical solutions are not straightforward either and require more attention. In this study, we derive an efficient numerical scheme for DDEs and SDDEs based on the Legendre spectral-collocation method, which can significantly speed up the computation. The method transforms the given differential equation into a matrix equation by means of Legendre collocation points, which corresponds to a system of algebraic equations in the unknown Legendre coefficients. The efficiency of the proposed method is confirmed by some numerical examples. We found that our numerical technique agrees very well with other methods at less computational effort.
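The "differential equation to matrix equation" step can be shown on the simplest possible DDE. For y'(t) = -y(t-1) on [0, 1] with history y = 1 for t <= 0, the delayed argument always falls in the history, so collocating a Legendre series at Gauss points yields a small linear system (this toy setup is an assumption for illustration; the exact solution on [0, 1] is y(t) = 1 - t):

```python
import numpy as np
from numpy.polynomial import legendre as L

N = 4                                    # polynomial degree of the trial solution
s, _ = L.leggauss(N)                     # collocation points in (-1, 1)

# y(t) = sum_j c_j P_j(2t - 1) on [0, 1]; by the chain rule y'(t) = 2 d/ds.
# Since t - 1 <= 0 at every node, the delayed term uses the history y = 1,
# so collocation enforces 2 * y_s(s_i) = -1, plus the initial value y(0) = 1.
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
for j in range(N + 1):
    e = np.zeros(N + 1)
    e[j] = 1.0
    A[:N, j] = 2.0 * L.legval(s, L.legder(e))   # derivative rows
    A[N, j] = L.legval(-1.0, e)                 # initial-condition row
b[:N] = -1.0
b[N] = 1.0

c = np.linalg.solve(A, b)                # unknown Legendre coefficients
y = lambda t: L.legval(2.0 * t - 1.0, c)
print(y(0.0), y(0.5), y(1.0))            # exact values are 1, 0.5, 0
```

On longer intervals one marches interval by interval (the method of steps), re-collocating with the previously computed segment as the history; the stochastic version replaces the right-hand side with sampled noise increments.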
Energy Technology Data Exchange (ETDEWEB)
Velickovic, Lj; Petrovic, M [Boris Kidric Institute of nuclear sciences Vinca, Belgrade (Yugoslavia)
1968-12-15
A stochastic reactor oscillator and the cross-correlation method were used for determining reactor dynamics characteristics. The experimental equipment, a fast reactor oscillator (BOR-1), was driven by random pulses from the GBS-16 generator. An AMPEX-SF-300 tape recorder and a data acquisition system registered the reactor response to perturbations of different frequencies. The reactor response and activation signals were cross-correlated on a digital computer for different positions of the stochastic oscillator and the ionization chamber.
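The principle behind such measurements is that, for a white random excitation, the input-output cross-correlation is proportional to the system's impulse response. This sketch uses an invented four-tap impulse response as a stand-in for the reactor transfer characteristics (nothing here reproduces the BOR-1 data):

```python
import numpy as np

rng = np.random.default_rng(4)
h = np.array([0.5, 0.3, 0.15, 0.05])   # assumed "plant" impulse response

n = 200_000
u = rng.normal(0.0, 1.0, n)            # random (stochastic-oscillator-like) input
y = np.convolve(u, h)[:n]              # measured response

# For unit-variance white input, R_uy(k) = h[k]; cross-correlate to recover h.
h_est = np.array([np.mean(u[: n - k] * y[k:]) for k in range(len(h))])
print(h_est)
```

The estimate sharpens as 1/sqrt(n), which is why long recordings (here a tape recorder's worth of samples) were needed in practice.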
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
International Nuclear Information System (INIS)
Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim
2013-01-01
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss–Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates.
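The core loop, ensemble directional derivatives feeding a regularized Gauss-Newton step, can be sketched on a toy black-box model (the forward function and all sizes below are invented for illustration; a real ISEM run would call a reservoir simulator instead):

```python
import numpy as np

rng = np.random.default_rng(5)

def forward(theta):
    """Hypothetical black-box simulator mapping 2 parameters to 3 outputs."""
    a, b = theta
    return np.array([a * a, a * b, np.exp(b)])

theta_true = np.array([1.5, -0.5])
d_obs = forward(theta_true)              # synthetic observations

theta = np.array([1.0, 0.0])             # initial guess
for _ in range(30):
    # Ensemble of directional derivatives approximates the Jacobian action:
    # forward(theta + e) - forward(theta) ~ J e  for small perturbations e.
    E = 0.01 * rng.normal(size=(8, 2))
    D = np.stack([forward(theta + e) - forward(theta) for e in E])
    Jt, *_ = np.linalg.lstsq(E, D, rcond=None)   # least-squares fit: D ~ E @ Jt
    J = Jt.T
    r = d_obs - forward(theta)
    # Regularized Gauss-Newton step via truncated SVD of the ensemble Jacobian.
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv = np.where(s > 1e-6 * s[0], 1.0 / s, 0.0)
    theta = theta + Vt.T @ (s_inv * (U.T @ r))

print(theta)   # approaches theta_true without any adjoint code
```

No derivative code for `forward` is ever written, which is the adjoint-free property the abstract emphasizes; the truncated SVD plays the role of the covariance regularization.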
A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data
Babuška, Ivo; Nobile, Fabio; Tempone, Raul
2010-01-01
This work proposes and analyzes a stochastic collocation method for solving elliptic partial differential equations with random coefficients and forcing terms. These input data are assumed to depend on a finite number of random variables. The method consists of a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space, and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It treats easily a wide range of situations, such as input data that depend nonlinearly on the random variables, diffusivity coefficients with unbounded second moments, and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate exponential convergence of the “probability error” with respect to the number of Gauss points in each direction of the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method. Finally, we include a section with developments posterior to the original publication of this work. There we review sparse grid stochastic collocation methods, which are effective collocation strategies for problems that depend on a moderately large number of random variables.
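The collocation idea, solve uncoupled deterministic problems at Gauss points and combine them with quadrature weights, can be shown on a scalar surrogate. Here the "PDE solve" is replaced by an invented closed-form response u(Y) = 1/(2 + Y) with a single uniform random input, so the exact mean is available for comparison:

```python
import numpy as np

# Stand-in for a deterministic solve at one realization of the random input
# Y ~ U(-1, 1), e.g. a diffusion problem with random coefficient a(Y) = 2 + Y.
def solve(y):
    return 1.0 / (2.0 + y)

# Stochastic collocation: deterministic solves at Gauss-Legendre points,
# then a weighted sum; the factor 1/2 is the density of U(-1, 1).
nodes, weights = np.polynomial.legendre.leggauss(12)
mean_sc = 0.5 * np.sum(weights * solve(nodes))

exact = 0.5 * np.log(3.0)                # closed-form E[u(Y)]
print(mean_sc, exact)
```

Because the response is analytic in Y, the quadrature error decays exponentially in the number of Gauss points, the "probability error" convergence the abstract proves, whereas plain Monte Carlo would decay only like 1/sqrt(samples).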
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a blackbox. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in ensemble Kalman filter, however the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.
Dai, Kaoshan; Wang, Ying; Lu, Wensheng; Ren, Xiaosong; Huang, Zhenhua
2017-04-01
Structural health monitoring (SHM) of wind turbines has been applied in the wind energy industry to obtain their real-time vibration parameters and to ensure their optimum performance. For SHM, the accuracy of its results and the efficiency of its measurement methodology and data processing algorithm are the two major concerns. Selection of proper measurement parameters could improve such accuracy and efficiency. The Stochastic Subspace Identification (SSI) method is a widely used data processing algorithm for SHM. This research discusses the accuracy and efficiency of SHM using the SSI method to identify the vibration parameters of on-line wind turbine towers. Proper measurement parameters, such as the optimum measurement duration, are recommended.
Approximate Dual Averaging Method for Multiagent Saddle-Point Problems with Stochastic Subgradients
Directory of Open Access Journals (Sweden)
Deming Yuan
2014-01-01
This paper considers the problem of solving the saddle-point problem over a network, which consists of multiple interacting agents. The global objective function of the problem is a combination of local convex-concave functions, each of which is available to only one agent. Our main focus is on the case where the projection steps are calculated approximately and the subgradients are corrupted by some stochastic noises. We propose an approximate version of the standard dual averaging method and show that the standard convergence rate is preserved, provided that the projection errors decrease at some appropriate rate and the noises are zero-mean and have bounded variance.
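A single-agent, exact-projection special case shows the mechanics of stochastic dual averaging on a saddle problem. The bilinear objective f(x, y) = x*y on [-1, 1]^2 (saddle point at the origin), the noise level, and the step constant are all illustrative assumptions; the averaged iterates, not the raw ones, are what converge:

```python
import numpy as np

rng = np.random.default_rng(6)

T, gamma, noise = 20_000, 1.0, 0.1
sx = sy = 0.0                 # accumulated (dual-averaged) subgradients
x = y = 0.5
avg_x = avg_y = 0.0
for t in range(1, T + 1):
    gx = y + noise * rng.normal()    # noisy df/dx, zero-mean corruption
    gy = x + noise * rng.normal()    # noisy df/dy
    sx += gx
    sy += gy
    x = float(np.clip(-gamma * sx / np.sqrt(t), -1.0, 1.0))  # x minimizes
    y = float(np.clip(+gamma * sy / np.sqrt(t), -1.0, 1.0))  # y maximizes
    avg_x += (x - avg_x) / t         # running ergodic averages
    avg_y += (y - avg_y) / t

print(avg_x, avg_y)   # both drift toward the saddle point (0, 0)
```

The clip operation is the exact Euclidean projection onto the box; the paper's contribution is showing the same O(1/sqrt(T)) rate survives when this projection is only computed approximately.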
Directory of Open Access Journals (Sweden)
Driss Sarsri
2014-05-01
In this paper, we propose a method to calculate the first two moments (mean and variance) of the structural dynamic response of a structure with uncertain variables subjected to random excitation. For this, the Newmark method is used to transform the equation of motion of the structure into a quasi-static equilibrium equation in the time domain. The Neumann expansion method is coupled with Monte Carlo simulations to calculate the statistical values of the random response. The use of modal synthesis methods can reduce the dimensions of the model before integration of the equation of motion. Numerical applications have been developed to highlight the effectiveness of the method in analyzing the stochastic response of large structures.
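The deterministic building block, Newmark time integration, combined with a plain Monte Carlo loop over an uncertain load gives a minimal version of the moment computation (a single-degree-of-freedom oscillator with invented parameters stands in for the large structure, and direct Monte Carlo replaces the Neumann expansion):

```python
import numpy as np

def newmark(m, c, k, f, dt, x0=0.0, v0=0.0, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark integration of m x'' + c x' + k x = f(t)."""
    n = len(f)
    x, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    x[0], v[0] = x0, v0
    a[0] = (f[0] - c * v0 - k * x0) / m
    keff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(n - 1):
        # Effective quasi-static load at step i+1 (Chopra's formulation).
        dp = (f[i + 1]
              + m * (x[i] / (beta * dt**2) + v[i] / (beta * dt)
                     + (0.5 / beta - 1.0) * a[i])
              + c * (gamma * x[i] / (beta * dt) + (gamma / beta - 1.0) * v[i]
                     + dt * (gamma / (2.0 * beta) - 1.0) * a[i]))
        x[i + 1] = dp / keff
        v[i + 1] = (gamma * (x[i + 1] - x[i]) / (beta * dt)
                    + (1.0 - gamma / beta) * v[i]
                    + dt * (1.0 - gamma / (2.0 * beta)) * a[i])
        a[i + 1] = ((x[i + 1] - x[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
    return x

# Monte Carlo over an uncertain load amplitude (illustrative assumption).
rng = np.random.default_rng(7)
t = np.arange(0.0, 10.0, 0.01)
peaks = []
for _ in range(200):
    amp = rng.normal(1.0, 0.2)
    x = newmark(m=1.0, c=0.1, k=4.0, f=amp * np.sin(t), dt=0.01)
    peaks.append(np.max(np.abs(x)))
print(np.mean(peaks), np.std(peaks))   # first two moments of the peak response
```

Average acceleration (beta = 1/4, gamma = 1/2) is unconditionally stable, which is what makes each Monte Carlo sample a cheap quasi-static sweep as described in the abstract.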
Drift-Implicit Multi-Level Monte Carlo Tau-Leap Methods for Stochastic Reaction Networks
Ben Hammouda, Chiheb
2015-01-01
-space and deterministic ones. These stochastic models constitute the theory of stochastic reaction networks (SRNs). Furthermore, in some cases, the dynamics of fast and slow time scales can be well separated and this is characterized by what is called sti
International Nuclear Information System (INIS)
Langrene, Nicolas
2014-01-01
This thesis deals with the numerical solution of general stochastic control problems, with notable applications to electricity markets. We first propose a structural model for the price of electricity, allowing for price spikes well above the marginal fuel price under strained market conditions. This model makes it possible to price and partially hedge electricity derivatives, using fuel forwards as hedging instruments. Then, we propose an algorithm, which combines Monte Carlo simulations with local basis regressions, to solve general optimal switching problems. A comprehensive rate of convergence of the method is provided. Moreover, we make the algorithm parsimonious in memory (and hence suitable for high-dimensional problems) by generalizing to this framework a memory reduction method that avoids the storage of the sample paths. We illustrate this on the problem of investments in new power plants (our structural power price model allowing the new plants to impact the price of electricity). Finally, we study more general stochastic control problems (the control can be continuous and impact the drift and volatility of the state process), the solutions of which belong to the class of fully nonlinear Hamilton-Jacobi-Bellman equations and can be handled via constrained backward stochastic differential equations (BSDEs), for which we develop a backward algorithm based on control randomization and parametric optimizations. A rate of convergence between the constrained BSDE and its discrete version is provided, as well as an estimate of the optimal control. This algorithm is then applied to the problem of super-replication of options under uncertain volatilities (and correlations). (author)
An evaluation method for tornado missile strike probability with stochastic correction
International Nuclear Information System (INIS)
Eguchi, Yuzuru; Murakami, Takahiro; Hirakuchi, Hiromaru; Sugimoto, Soichiro; Hattori, Yasuo
2017-01-01
An efficient evaluation method for the probability of a tornado missile strike that does not use the Monte Carlo method is proposed in this paper. A major part of the proposed probability evaluation is based on numerical results computed using an in-house code, the Tornado-borne missile analysis code, which enables us to evaluate the liftoff and flight behaviors of unconstrained objects on the ground driven by a tornado. Using the Tornado-borne missile analysis code, we can obtain a stochastic correlation between local wind speed and the flight distance of each object, and this stochastic correlation is used to evaluate the conditional strike probability, Q_V(r), of a missile located at position r, where the local wind speed is V. In parallel, the annual exceedance probability of the local wind speed, which can be computed using a tornado hazard analysis code, is used to derive the probability density function, p(V). Then, we finally obtain the annual probability of a tornado missile strike on a structure by the convolution integral of the product of Q_V(r) and p(V) over V. The evaluation method is applied to a simple problem to qualitatively confirm its validity, and to quantitatively verify the results for two extreme cases in which an object is located either in the immediate vicinity of or far away from the structure.
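The final convolution step is a one-dimensional quadrature once Q_V(r) and the hazard curve are tabulated. Both curves below are invented placeholders (a toy exponential hazard and a logistic fragility-type Q), not outputs of the cited codes; the point is only the structure of the integral:

```python
import numpy as np

# Hypothetical inputs: annual exceedance probability F_ex(V) of local wind
# speed, and conditional strike probability Q_V(r) at one fixed position r.
V = np.linspace(0.0, 120.0, 1201)                # wind-speed grid (m/s)
F_ex = 1e-4 * np.exp(-V / 30.0)                  # assumed hazard curve
p = -np.gradient(F_ex, V)                        # density p(V) = -dF_ex/dV
Q = 1.0 / (1.0 + np.exp(-(V - 60.0) / 5.0))      # assumed fragility-type Q_V(r)

# Annual strike probability: integral of Q_V(r) * p(V) over V.
P_strike = float(np.sum(Q * p) * (V[1] - V[0]))
print(P_strike)
```

Because everything reduces to tabulated curves, re-evaluating the strike probability for a different target position r only requires recomputing Q, not re-running the flight simulations, which is the efficiency gain over direct Monte Carlo.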
An evaluation method for tornado missile strike probability with stochastic correction
Energy Technology Data Exchange (ETDEWEB)
Eguchi, Yuzuru; Murakami, Takahiro; Hirakuchi, Hiromaru; Sugimoto, Soichiro; Hattori, Yasuo [Nuclear Risk Research Center (External Natural Event Research Team), Central Research Institute of Electric Power Industry, Abiko (Japan)
2017-03-15
An efficient evaluation method for the probability of a tornado missile strike that does not use the Monte Carlo method is proposed in this paper. A major part of the proposed probability evaluation is based on numerical results computed using an in-house code, the Tornado-borne missile analysis code, which enables us to evaluate the liftoff and flight behaviors of unconstrained objects on the ground driven by a tornado. Using the Tornado-borne missile analysis code, we can obtain a stochastic correlation between local wind speed and the flight distance of each object, and this stochastic correlation is used to evaluate the conditional strike probability, Q_V(r), of a missile located at position r, where the local wind speed is V. In parallel, the annual exceedance probability of the local wind speed, which can be computed using a tornado hazard analysis code, is used to derive the probability density function, p(V). Then, we finally obtain the annual probability of a tornado missile strike on a structure by the convolution integral of the product of Q_V(r) and p(V) over V. The evaluation method is applied to a simple problem to qualitatively confirm its validity, and to quantitatively verify the results for two extreme cases in which an object is located either in the immediate vicinity of or far away from the structure.
Phase stability analysis of liquid-liquid equilibrium with stochastic methods
Directory of Open Access Journals (Sweden)
G. Nagatani
2008-09-01
Minimization of the Gibbs free energy using activity coefficient models and nonlinear equation solution techniques is commonly applied to phase stability problems. However, when conventional techniques, such as the Newton-Raphson method, are employed, serious convergence problems may arise. Due to the existence of multiple solutions, several problems can be found in modeling the liquid-liquid equilibrium of multicomponent systems, which are highly dependent on the initial guess. In this work, phase stability analysis of liquid-liquid equilibrium is investigated using the NRTL model. For this purpose, two distinct stochastic numerical algorithms are employed to minimize the tangent plane distance of the Gibbs free energy: a subdivision algorithm that can find all roots of nonlinear equations for liquid-liquid stability analysis, and the Simulated Annealing method. Results obtained in this work with the two stochastic algorithms are compared with those of the Interval Newton method from the literature. Several different binary and multicomponent systems from the literature were successfully investigated.
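Simulated Annealing's appeal for such multimodal objectives is easy to demonstrate on a stand-in function. The objective below is an invented double-well (a real tangent-plane-distance surface would come from the NRTL activity coefficients), but it reproduces the failure mode of Newton-type methods: which minimum you reach depends on the initial guess, while annealing escapes poor basins via its temperature-controlled acceptance rule:

```python
import numpy as np

rng = np.random.default_rng(8)

# Stand-in objective with two unequal local minima (assumed, not NRTL-based).
def f(x):
    return (x**2 - 1.0)**2 + 0.3 * x

x, fx = 2.0, f(2.0)            # deliberately poor initial guess
T = 1.0                        # initial "temperature"
for _ in range(20_000):
    cand = x + rng.normal(0.0, 0.3)          # random neighbor move
    fc = f(cand)
    # Metropolis rule: always accept improvements, sometimes accept uphill.
    if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
        x, fx = cand, fc
    T = max(1e-4, 0.9995 * T)                # geometric cooling schedule

print(x, fx)   # ends near one of the two minima (both have f < 0.35)
```

The cooling rate trades reliability for cost: cooling too fast freezes the chain in whichever basin it happens to occupy, which is the stochastic analogue of a bad Newton initial guess.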
Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method.
Zhang, Tingting; Kou, S C
2010-01-01
Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure.
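The kernel-based intensity estimate at the heart of such an analysis is compact. This sketch fixes a deterministic intensity (so it is an inhomogeneous Poisson process rather than a full Cox process, and the bandwidth is hand-picked rather than selected by the paper's regression method), simulates arrivals by thinning, and smooths the event times with a Gaussian kernel:

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulate arrivals on [0, T] by thinning a rate-lam_max Poisson stream.
T = 10.0
lam = lambda t: 30.0 * (1.0 + 0.8 * np.sin(t))    # assumed intensity
lam_max = 60.0
cand = rng.uniform(0.0, T, rng.poisson(lam_max * T))
events = np.sort(cand[rng.uniform(0.0, lam_max, cand.size) < lam(cand)])

def intensity(t, events, h=0.5):
    """Kernel estimate lambda_hat(t) = sum_i K_h(t - t_i), Gaussian kernel."""
    z = (t[:, None] - events[None, :]) / h
    return np.sum(np.exp(-0.5 * z**2), axis=1) / (h * np.sqrt(2.0 * np.pi))

grid = np.linspace(1.0, 9.0, 81)     # stay away from the boundary bias zone
est = intensity(grid, events)
print(np.mean(np.abs(est - lam(grid))))   # average absolute estimation error
```

The bandwidth h controls the usual bias-variance trade-off; the paper's contribution includes a fast, stable regression method for choosing it from the data.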
Stochastic Fatigue Analysis of Jacket Type Offshore Structures
DEFF Research Database (Denmark)
Sigurdsson, Gudfinnur
In this paper, a stochastic reliability assessment for jacket type offshore structures subjected to wave loads in deep water environments is outlined. In the reliability assessment, structural and loading uncertainties are taken into account by means of stochastic variables. To estimate statistical measures of structural stress variations, the modal spectral analysis method is applied.
Mean square exponential stability of stochastic delayed Hopfield neural networks
International Nuclear Information System (INIS)
Wan Li; Sun Jianhua
2005-01-01
Stochastic effects on the stability of Hopfield neural networks (HNN) with discrete and continuously distributed delays are considered. By using the method of variation of parameters, inequality techniques and stochastic analysis, sufficient conditions to guarantee the mean square exponential stability of an equilibrium solution are given. Two examples are also given to demonstrate our results.
Variation and Commonality in Phenomenographic Research Methods
Akerlind, Gerlese S.
2012-01-01
This paper focuses on the data analysis stage of phenomenographic research, elucidating what is involved in terms of both commonality and variation in accepted practice. The analysis stage of phenomenographic research is often not well understood. This paper helps to clarify the process, initially by collecting together in one location the more…
Kall, Peter
1998-01-01
Optimization problems arising in practice usually contain several random parameters. Hence, in order to obtain optimal solutions that are robust with respect to random parameter variations, the available statistical information about the random parameters should already be considered at the planning phase. The original problem with random parameters must be replaced by an appropriate deterministic substitute problem, and efficient numerical solution or approximation techniques have to be developed for those problems. This proceedings volume contains a selection of papers on modelling techniques, approximation methods, and numerical solution procedures for stochastic optimization problems, with applications to the reliability-based optimization of concrete technical or economic systems.
Determination of kinetics parameters using stochastic methods in a 252Cf system
International Nuclear Information System (INIS)
Difilippo, F.C.
1988-01-01
Safety analysis and control system design of nuclear systems require knowledge of neutron kinetics parameters such as the effective delayed neutron fraction, neutron lifetime, time between neutron generations, and subcriticality margins. Many methods, deterministic and stochastic, have been used, some since the beginning of nuclear power, to measure these important parameters. The method based on the use of a 252Cf neutron source has been under intense study at the Oak Ridge National Laboratory, both experimentally and theoretically, in recent years. The increasing demand for this isotope in industrial and medical applications, and new designs of advanced high flux reactors to produce it, make the isotope readily available as a neutron source (only a few micrograms are necessary). A thin layer of 252Cf is deposited on one of the electrodes of a fission chamber, which produces a pulse each time the 252Cf disintegrates via α or spontaneous fission decay; the smaller pulses associated with the α decay can be easily discriminated, with the important result that we know the time when ν_c neutrons are injected into the system (ν_c being the number of neutrons per spontaneous fission of 252Cf). Thus, a small (a few cm³) and nonintrusive device can be used as a random pulsed neutron source with known natural properties that do not depend on biases associated with more complex interrogating devices like accelerators. This paper presents a general formalism that relates the kinetics parameters to stochastic descriptors that naturally appear because of the random nature of the production and transport of neutrons.
Method for measuring the stochastic properties of corona and partial-discharge pulses
International Nuclear Information System (INIS)
Van Brunt, R.J.; Kulkarni, S.V.
1989-01-01
A new method is described for measuring the stochastic behavior of corona and partial-discharge pulses which utilizes a pulse selection and sorting circuit in conjunction with a computer-controlled multichannel analyzer to directly measure various conditional and unconditional pulse-height and pulse-time-separation distributions. From these measured distributions it is possible to determine the degree of correlation between successive discharge pulses. Examples are given of results obtained from measurements on negative, point-to-plane (Trichel-type) corona pulses in a N2/O2 gas mixture which clearly demonstrate that the phenomenon is inherently stochastic in the sense that development of a discharge pulse is significantly affected by the amplitude of and time separation from the preceding pulse. It is found, for example, that corona discharge pulse amplitude and time separation from an earlier pulse are not independent random variables. Discussions are given about the limitations of the method, sources of error, and data analysis procedures required to determine self-consistency of the various measured distributions.
International Nuclear Information System (INIS)
Liu, Shichang; Wang, Guanbo; Wu, Gaochen; Wang, Kan
2015-01-01
Highlights: • DRAGON and DONJON are applied and verified in calculations of research reactors. • Continuous-energy Monte Carlo calculations by RMC are chosen as the references. • “ECCO” option of DRAGON is suitable for the calculations of research reactors. • Manual modifications of cross-sections are not necessary with DRAGON and DONJON. • DRAGON and DONJON agree well with RMC if appropriate treatments are applied. - Abstract: Simulation of the behavior of plate-type research reactors such as JRR-3M and CARR poses a challenge for traditional neutronics calculation tools and schemes for power reactors, due to their complex geometry, high heterogeneity and large neutron leakage. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON and DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic approach. The goal of this research is to examine the capability of the deterministic code system DRAGON and DONJON to reliably simulate research reactors. The results indicate that the DRAGON and DONJON code system agrees well with the continuous-energy Monte Carlo simulation on both keff and flux distributions if appropriate treatments (such as the ECCO option) are applied.
Extinction time of a stochastic predator-prey model by the generalized cell mapping method
Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao
2018-03-01
The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.
Mellin Transform Method for European Option Pricing with Hull-White Stochastic Interest Rate
Directory of Open Access Journals (Sweden)
Ji-Hun Yoon
2014-01-01
Full Text Available Even though interest rates fluctuate randomly in the marketplace, many option-pricing models do not fully consider their stochastic nature owing to their generally limited impact on option prices. However, stochastic dynamics in stochastic interest rates may have a significant impact on option prices as we take account of issues of maturity, hedging, or stochastic volatility. In this paper, we derive a closed-form solution for European options in the Black-Scholes model with a stochastic interest rate using Mellin transform techniques.
Multistep Hybrid Extragradient Method for Triple Hierarchical Variational Inequalities
Directory of Open Access Journals (Sweden)
Zhao-Rong Kong
2013-01-01
Full Text Available We consider a triple hierarchical variational inequality problem (THVIP, that is, a variational inequality problem defined over the set of solutions of another variational inequality problem which is defined over the intersection of the fixed point set of a strict pseudocontractive mapping and the solution set of the classical variational inequality problem. Moreover, we propose a multistep hybrid extragradient method to compute the approximate solutions of the THVIP and present the convergence analysis of the sequence generated by the proposed method. We also derive a solution method for solving a system of hierarchical variational inequalities (SHVI, that is, a system of variational inequalities defined over the intersection of the fixed point set of a strict pseudocontractive mapping and the solution set of the classical variational inequality problem. Under very mild conditions, it is proven that the sequence generated by the proposed method converges strongly to a unique solution of the SHVI.
International Nuclear Information System (INIS)
Cruz, Roberto de la; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás
2017-01-01
The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction–diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction–diffusion systems. Such a method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we are neglecting noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction–diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age-distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge
Zhang, Ling
2017-01-01
The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method to semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation solution converges to the analytic solution with the strong order 1/2 to SLSDDEs. On the one hand, the classical stability theorem to SLSDDEs is given by the Lyapunov functions. However, in this paper we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of logarithmic norm. On the other hand, the implicit Euler scheme to SLSDDEs is known to be exponentially stable in mean square for any step size. However, in this article we propose an explicit method to show that the exponential Euler method to SLSDDEs is proved to share the same stability for any step size by the property of logarithmic norm.
The stochastic finite element methods with applications in geotechnics and rupture mechanics
International Nuclear Information System (INIS)
Baldeweck, Herve
1999-01-01
After having presented and classified the various stochastic finite element methods, notably by distinguishing reliability methods (first order and second order reliability methods, response surfaces, Monte Carlo) and sensitivity methods (Monte Carlo, spectral development, perturbation, weighted integrals), the author of this research thesis presents basic tools needed for the different theoretical developments: hazard representation and the method of moments. He also presents the problem which is used all along this work to compare and assess the different sensitivity methods. Then, he reports the theoretical development of these sensitivity methods: the Monte Carlo method, the spectral development method, the perturbation method, and the quadrature method. The last of these is a new method aimed at the assessment of statistical moments. The author highlights the relationships between reliability and sensitivity methods. In the third part, several applications and calculations are reported. Applications are in geotechnics (soil-structure interaction, calculation of soil stiffness, application in the field of geo-materials with the calculation of an underground gallery), and in rupture mechanics (international benchmark on the reliability of a nuclear reactor, non linear calculation of a cracked straight pipe, reliability calculation of a cracked plate with a Young modulus being a random field) [fr
Using the Screened Coulomb Potential to Illustrate the Variational Method
Zuniga, Jose; Bastida, Adolfo; Requena, Alberto
2012-01-01
The screened Coulomb potential, or Yukawa potential, is used to illustrate the application of the single and linear variational methods. The trial variational functions are expressed in terms of Slater-type functions, for which the integrals needed to carry out the variational calculations are easily evaluated in closed form. The variational…
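The single-parameter version of this calculation is compact enough to sketch. For a 1s Slater-type trial function ψ ∝ exp(-ζr) and the Yukawa potential V(r) = -exp(-λr)/r in Hartree atomic units, the variational energy has the closed form E(ζ) = ζ²/2 - 4ζ³/(2ζ + λ)²; a crude grid search stands in for the minimization (the scan range and grid size are arbitrary choices, not from the paper):

```python
def yukawa_energy(zeta, lam):
    """Variational energy <H> for the trial function psi ~ exp(-zeta*r)
    with the Yukawa potential V(r) = -exp(-lam*r)/r (atomic units).
    Both <T> = zeta**2/2 and <V> = -4*zeta**3/(2*zeta + lam)**2 are
    closed-form for Slater-type functions, the pedagogical point here."""
    return 0.5 * zeta**2 - 4.0 * zeta**3 / (2.0 * zeta + lam) ** 2

def minimize_energy(lam, lo=0.01, hi=3.0, n=30000):
    """Crude grid search for the optimal exponent zeta; adequate because
    E(zeta) is smooth with a single minimum in the scanned range."""
    return min((yukawa_energy(lo + (hi - lo) * k / n, lam),
                lo + (hi - lo) * k / n) for k in range(n + 1))
```

With λ = 0 the potential reduces to the Coulomb case and the search recovers the exact hydrogen ground state, ζ = 1 and E = -1/2; increasing the screening raises the variational minimum.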
A multigrid method for variational inequalities
Energy Technology Data Exchange (ETDEWEB)
Oliveira, S.; Stewart, D.E.; Wu, W.
1996-12-31
Multigrid methods have been used with great success for solving elliptic partial differential equations. Penalty methods have been successful in solving finite-dimensional quadratic programs. In this paper these two techniques are combined to give a fast method for solving obstacle problems. A nonlinear penalized problem is solved using Newton's method for large values of a penalty parameter. Multigrid methods are used to solve the linear systems in Newton's method. The overall numerical method developed is based on an exterior penalty function, and numerical results showing the performance of the method have been obtained.
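A minimal 1D sketch of the penalized-Newton idea follows. It uses a single grid with a direct tridiagonal solve where the paper uses multigrid, and the problem data (load, obstacle height) are invented for illustration:

```python
def solve_obstacle_penalty(n=99, f=-8.0, psi=-0.1, eps=1e-6):
    """Obstacle problem -u'' = f, u >= psi, u(0) = u(1) = 0, via an
    exterior penalty: solve -u'' + (1/eps)*min(u - psi, 0) = f with a
    semismooth Newton method.  The Jacobian gains 1/eps on the diagonal
    wherever the constraint is violated (the "active" nodes)."""
    h = 1.0 / (n + 1)
    off = -1.0 / h**2                    # sub/super-diagonal of -u''
    u = [0.0] * n
    for _ in range(50):
        active = [ui < psi for ui in u]
        diag = [2.0 / h**2 + (1.0 / eps if a else 0.0) for a in active]
        # residual F(u) = A u + (1/eps) min(u - psi, 0) - f
        F = []
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            Au = (2.0 * u[i] - left - right) / h**2
            pen = (u[i] - psi) / eps if active[i] else 0.0
            F.append(Au + pen - f)
        # Thomas algorithm: solve (A + D/eps) du = -F
        cp = [0.0] * n
        dp = [0.0] * n
        cp[0] = off / diag[0]
        dp[0] = -F[0] / diag[0]
        for i in range(1, n):
            m = diag[i] - off * cp[i - 1]
            cp[i] = off / m
            dp[i] = (-F[i] - off * dp[i - 1]) / m
        du = [0.0] * n
        du[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            du[i] = dp[i] - cp[i] * du[i + 1]
        u = [ui + di for ui, di in zip(u, du)]
        if max(abs(d) for d in du) < 1e-12:
            break
    return u
```

The unconstrained solution would dip to -1, so the obstacle at -0.1 is active over a central contact region where the computed solution sits at psi up to an O(eps) penalty violation.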
Schilde, M; Doerner, K F; Hartl, R F
2014-10-01
In urban areas, logistic transportation operations often run into problems because travel speeds change, depending on the current traffic situation. If not accounted for, time-dependent and stochastic travel speeds frequently lead to missed time windows and thus poorer service. Especially in the case of passenger transportation, it often leads to excessive passenger ride times as well. Therefore, time-dependent and stochastic influences on travel speeds are relevant for finding feasible and reliable solutions. This study considers the effect of exploiting statistical information available about historical accidents, using stochastic solution approaches for the dynamic dial-a-ride problem (dynamic DARP). The authors propose two pairs of metaheuristic solution approaches, each consisting of a deterministic method (average time-dependent travel speeds for planning) and its corresponding stochastic version (exploiting stochastic information while planning). The results, using test instances with up to 762 requests based on a real-world road network, show that in certain conditions, exploiting stochastic information about travel speeds leads to significant improvements over deterministic approaches.
Variational iteration method for one dimensional nonlinear thermoelasticity
International Nuclear Information System (INIS)
Sweilam, N.H.; Khader, M.M.
2007-01-01
This paper applies the variational iteration method to solve the Cauchy problem arising in one dimensional nonlinear thermoelasticity. The advantage of this method is that it overcomes the difficulty of calculating Adomian polynomials, which arises in the Adomian decomposition method. The numerical results of this method are compared with the exact solution of an artificial model to show the efficiency of the method. The approximate solutions show that the variational iteration method is a powerful mathematical tool for solving nonlinear problems.
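The mechanics of the iteration are easy to see on a linear test problem (not the thermoelastic system of the paper). For u' + u = 0 with u(0) = 1, the correction functional with Lagrange multiplier λ = -1 reduces to u_{n+1}(t) = u_n(t) - ∫₀ᵗ (u_n' + u_n) ds; representing iterates as exact Taylor coefficient lists makes each correction add one order of the series for exp(-t):

```python
from fractions import Fraction

def deriv(c):                     # d/dt of sum_k c[k] * t**k
    return [Fraction(k) * c[k] for k in range(1, len(c))]

def integ(c):                     # integral of the series from 0 to t
    return [Fraction(0)] + [ck / Fraction(k + 1) for k, ck in enumerate(c)]

def add(a, b):                    # coefficient-wise sum, padding with zeros
    n = max(len(a), len(b))
    a = a + [Fraction(0)] * (n - len(a))
    b = b + [Fraction(0)] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def vim_step(u):
    """One VIM correction u_{n+1} = u_n - int_0^t (u_n' + u_n) ds,
    i.e. the correction functional for u' + u = 0 with multiplier -1."""
    residual = add(deriv(u), u)
    return add(u, [-x for x in integ(residual)])

u = [Fraction(1)]                 # u_0(t) = 1 matches the initial condition
for _ in range(6):
    u = vim_step(u)               # each step adds one Taylor order of exp(-t)
```

After six corrections the iterate is exactly the degree-6 Taylor polynomial of exp(-t), coefficients (-1)^k / k!.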
Survey Shows Variation in Ph.D. Methods Training.
Steeves, Leslie; And Others
1983-01-01
Reports on a 1982 survey of journalism graduate studies indicating considerable variation in research methods requirements and emphases in 23 universities offering doctoral degrees in mass communication. (HOD)
Efficient Multilevel and Multi-index Sampling Methods in Stochastic Differential Equations
Haji-Ali, Abdul Lateef
2016-05-22
of this thesis is the novel Multi-index Monte Carlo (MIMC) method which is an extension of MLMC in high dimensional problems with significant computational savings. Under reasonable assumptions on the weak and variance convergence, which are related to the mixed regularity of the underlying problem and the discretization method, the order of the computational complexity of MIMC is, at worst up to a logarithmic factor, independent of the dimensionality of the underlying parametric equation. We also apply the same multi-index methodology to another sampling method, namely the Stochastic Collocation method. Hence, the novel Multi-index Stochastic Collocation method is proposed and is shown to be more efficient in problems with sufficient mixed regularity than our novel MIMC method and other standard methods. Finally, MIMC is applied to approximate quantities of interest of stochastic particle systems in the mean-field when the number of particles tends to infinity. To approximate these quantities of interest up to an error tolerance, TOL, MIMC has a computational complexity of O(TOL^-2 log(TOL)^2). This complexity is achieved by building a hierarchy based on two discretization parameters: the number of time steps in a Milstein scheme and the number of particles in the particle system. Moreover, we use a partitioning estimator to increase the correlation between two stochastic particle systems with different sizes. In comparison, the optimal computational complexity of MLMC in this case is O(TOL^-3) and the computational complexity of Monte Carlo is O(TOL^-4).
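The single-index building block that MIMC generalizes can be sketched on a toy SDE. This is plain MLMC for E[S_T] of geometric Brownian motion with Euler time stepping, not the thesis's MIMC method; the parameters and fixed per-level sample count are illustrative choices:

```python
import math, random

def mlmc_gbm(mu=0.05, sigma=0.2, S0=1.0, T=1.0, L=4, N=20000, seed=1):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian
    motion dS = mu*S dt + sigma*S dW, using 2**l Euler steps on level l.
    Fine and coarse paths on a level share the same Brownian increments,
    which is what makes the level corrections low-variance."""
    rng = random.Random(seed)

    def level_sample(l):
        nf = 2 ** l
        dtf = T / nf
        if l == 0:
            dW = rng.gauss(0.0, math.sqrt(dtf))
            return S0 + mu * S0 * dtf + sigma * S0 * dW, 0.0
        Sf = Sc = S0
        dtc = 2.0 * dtf
        for _ in range(nf // 2):
            dW1 = rng.gauss(0.0, math.sqrt(dtf))
            dW2 = rng.gauss(0.0, math.sqrt(dtf))
            Sf += mu * Sf * dtf + sigma * Sf * dW1
            Sf += mu * Sf * dtf + sigma * Sf * dW2
            Sc += mu * Sc * dtc + sigma * Sc * (dW1 + dW2)  # coupled coarse step
        return Sf, Sc

    est = 0.0
    for l in range(L + 1):
        s = 0.0
        for _ in range(N):
            f, c = level_sample(l)
            s += f - c              # level-l correction E[P_l - P_{l-1}]
        est += s / N
    return est
```

The telescoping sum of level corrections reproduces the fine-level expectation E[S_T] = S0·exp(mu·T) up to discretization and sampling error; a production MLMC code would additionally choose the per-level sample counts from estimated variances.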
Stochastic Unit Commitment Based on Multi-Scenario Tree Method Considering Uncertainty
Directory of Open Access Journals (Sweden)
Kyu-Hyung Jo
2018-03-01
Full Text Available With the increasing penetration of renewable energy, it is difficult to schedule unit commitment (UC) in a power system because of the uncertainty associated with various factors. In this paper, a new solution procedure based on a multi-scenario tree method (MSTM) is presented and applied to the proposed stochastic UC problem. In this process, the initial input data of load and wind power are modeled as different levels using the mean absolute percentage error (MAPE). The load and wind scenarios are generated using Monte Carlo simulation (MCS) that considers forecasting errors. These multiple scenarios are applied in the MSTM for solving the stochastic UC problem, including not only the load and wind power uncertainties, but also sudden outages of the thermal unit. When the UC problem has been formulated, the simulation is conducted for a 24-h period by using the short-term UC model, and the operating costs and additional reserve requirements are thus obtained. The effectiveness of the proposed solution approach is demonstrated through a case study based on a modified IEEE-118 bus test system.
Directory of Open Access Journals (Sweden)
Beljić Željko
2017-01-01
Full Text Available In this paper a special case of digital stochastic measurement of the third power of the definite integral of a sinusoidal signal's absolute value, using 2-bit AD converters, is presented. This variant of the digital stochastic method emerged from the need to measure the power and energy of the wind, which are proportional to the third power of the wind speed. The anemometer output signal is sinusoidal, and the integral of the third power of a sinusoid over a full period is zero; hence the absolute value is used. Two approaches are proposed for calculating the third power of the wind speed signal. One approach uses the absolute value of the sinusoidal signal (before AD conversion), for which no change of the multiplier hardware is needed. The second approach requires a small change of the multiplier hardware, but the input signal remains unchanged; for this approach a minimal hardware modification was made to calculate the absolute value of the result after AD conversion. Simulations have confirmed the theoretical analysis. The expected precision of wind energy measurement of the proposed device is better than 0.00051% of full scale. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. TR32019]
The two-regime method for optimizing stochastic reaction-diffusion simulations
Flegg, M. B.
2011-10-19
Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches to less detailed compartment-based simulations. Compartment-based approaches yield quick and accurate mesoscopic results, but lack the level of detail that is characteristic of the computationally intensive molecular-based models. Often microscopic detail is only required in a small region (e.g. close to the cell membrane). Currently, the best way to achieve microscopic detail is to use a resource-intensive simulation over the whole domain. We develop the two-regime method (TRM) in which a molecular-based algorithm is used where desired and a compartment-based approach is used elsewhere. We present easy-to-implement coupling conditions which ensure that the TRM results have the same accuracy as a detailed molecular-based model in the whole simulation domain. Therefore, the TRM combines strengths of previously developed stochastic reaction-diffusion software to efficiently explore the behaviour of biological models. Illustrative examples and the mathematical justification of the TRM are also presented.
Rezaei, Satar; Zandian, Hamed; Baniasadi, Akram; Moghadam, Telma Zahirian; Delavari, Somayeh; Delavari, Sajad
2016-02-01
Hospitals are the most expensive health services provider in the world. Therefore, the evaluation of their performance can be used to reduce costs. The aim of this study was to determine the efficiency of the hospitals at the Kurdistan University of Medical Sciences using stochastic frontier analysis (SFA). This was a cross-sectional and retrospective study that assessed the performance of Kurdistan teaching hospitals (n = 12) between 2007 and 2013. The stochastic frontier analysis method was used to achieve this aim. The numbers of active beds, nurses, physicians, and other staff members were considered as input variables, while inpatient admission was considered as the output. The data were analyzed using Frontier 4.1 software. The mean technical efficiency of the hospitals we studied was 0.67. The results of the Cobb-Douglas production function showed that the maximum elasticity was related to the active beds and the elasticity of nurses was negative. Also, the return to scale was increasing. The results of this study indicated that the performances of the hospitals were not appropriate in terms of technical efficiency. In addition, there was a capacity enhancement of the output of the hospitals, compared with the most efficient hospitals studied, of about 33%. It is suggested that the effect of various factors, such as the quality of health care and the patients' satisfaction, be considered in future studies to assess hospitals' performances.
Time dependent variational method in quantum mechanics
International Nuclear Information System (INIS)
Torres del Castillo, G.F.
1987-01-01
Using the fact that the solutions to the time-dependent Schrödinger equation can be obtained from a variational principle, by restricting the evolution of the state vector to some surface in the corresponding Hilbert space, approximations to the exact solutions can be obtained, which are determined by equations similar to Hamilton's equations. It is shown that, in order for the approximate evolution to be well defined on a given surface, the imaginary part of the inner product restricted to the surface must be non-singular. (author)
CAM Stochastic Volatility Model for Option Pricing
Directory of Open Access Journals (Sweden)
Wanwan Huang
2016-01-01
Full Text Available The coupled additive and multiplicative (CAM noises model is a stochastic volatility model for derivative pricing. Unlike the other stochastic volatility models in the literature, the CAM model uses two Brownian motions, one multiplicative and one additive, to model the volatility process. We provide empirical evidence that suggests a nontrivial relationship between the kurtosis and skewness of asset prices and that the CAM model is able to capture this relationship, whereas the traditional stochastic volatility models cannot. We introduce a control variate method and Monte Carlo estimators for some of the sensitivities (Greeks of the model. We also derive an approximation for the characteristic function of the model.
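The control-variate idea mentioned above can be illustrated on a plain Black-Scholes call rather than the CAM model itself (the parameters below are invented). The discounted terminal price is a martingale with known mean S0, so it serves as a zero-mean control:

```python
import math, random

def mc_call_cv(S0=1.0, K=1.0, r=0.05, sigma=0.2, T=1.0, N=40000, seed=7):
    """Monte Carlo price of a European call under Black-Scholes dynamics,
    with the discounted terminal price as a control variate: since
    E[exp(-rT)*S_T] = S0 exactly, subtracting beta*(exp(-rT)*S_T - S0)
    leaves the estimator unbiased while shrinking its variance."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    pay, ctrl = [], []
    for _ in range(N):
        z = rng.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        pay.append(disc * max(ST - K, 0.0))
        ctrl.append(disc * ST - S0)       # zero-mean control variate
    mp = sum(pay) / N
    mc = sum(ctrl) / N
    cov = sum((p - mp) * (c - mc) for p, c in zip(pay, ctrl)) / N
    var = sum((c - mc) ** 2 for c in ctrl) / N
    beta = cov / var                      # variance-optimal coefficient
    return mp, mp - beta * mc             # (plain MC, control-variate MC)
```

For these at-the-money parameters the exact Black-Scholes price is about 0.1045, and the control-variate estimate lands close to it with far fewer samples than plain Monte Carlo would need.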
A stochastic multiscale method for the elastodynamic wave equation arising from fiber composites
Babuška, Ivo; Motamed, Mohammad; Tempone, Raul
2014-01-01
We present a stochastic multilevel global–local algorithm for computing elastic waves propagating in fiber-reinforced composite materials. Here, the materials properties and the size and location of fibers may be random. The method aims at approximating statistical moments of some given quantities of interest, such as stresses, in regions of relatively small size, e.g. hot spots or zones that are deemed vulnerable to failure. For a fiber-reinforced cross-plied laminate, we introduce three problems (macro, meso, micro) corresponding to the three natural scales, namely the sizes of laminate, ply, and fiber. The algorithm uses the homogenized global solution to construct a good local approximation that captures the microscale features of the real solution. We perform numerical experiments to show the applicability and efficiency of the method.
A Stochastic Multiscale Method for the Elastic Wave Equations Arising from Fiber Composites
Babuska, Ivo
2016-01-06
We present a stochastic multilevel global-local algorithm [1] for computing elastic waves propagating in fiber-reinforced polymer composites, where the material properties and the size and distribution of fibers in the polymer matrix may be random. The method aims at approximating statistical moments of some given quantities of interest, such as stresses, in regions of relatively small size, e.g. hot spots or zones that are deemed vulnerable to failure. For a fiber-reinforced cross-plied laminate, we introduce three problems: 1) macro; 2) meso; and 3) micro, corresponding to the three natural length scales: 1) the size of the plate; 2) the thicknesses of the plies; and 3) the diameter of the fibers. The algorithm uses a homogenized global solution to construct a local approximation that captures the microscale features of the problem. We perform numerical experiments to show the applicability and efficiency of the method.
International Nuclear Information System (INIS)
Liu, L.; Fuller, G.A.; Huang, G.H.
1999-01-01
Contamination of soil and water and the resulting threat to public health and the environment are the frequent results of oil spills, leaks and other releases of gasoline, diesel fuel, heating oil and other petroleum products. Integrating an analytical groundwater solute transport model within its general framework, this paper proposes an integrated stochastic risk assessment method and ways to apply it to petroleum-contaminated sites. Both the analytical solute transport model and the general risk assessment framework are solved by the Monte Carlo simulation technique for approaching the theoretical output distribution. Results of this study show that the total cancer risk has an approximately log-normal distribution, irrespective of the fact that a variety of distributions were used to define the related parameters. It is claimed that the method can improve the effectiveness of risk assessment for the subsurface, and provide useful results for site remediation decisions. 23 refs., 3 tabs., 4 figs
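The Monte Carlo propagation step can be sketched with a generic multiplicative exposure-risk formula. The model, variable names, and distributions below are illustrative stand-ins, not the paper's transport model; the point is that multiplicative combinations of positive random inputs tend toward a right-skewed, approximately lognormal output, matching the study's observation:

```python
import math, random

def mc_risk(n=50000, seed=11):
    """Monte Carlo propagation of parameter uncertainty through an
    illustrative multiplicative risk formula  risk = SF*C*IR*EF/BW
    (slope factor, concentration, intake rate, exposure frequency,
    body weight -- hypothetical names and distributions)."""
    rng = random.Random(seed)
    risks = []
    for _ in range(n):
        C = rng.lognormvariate(math.log(2.0), 0.5)   # concentration, mg/L
        IR = rng.uniform(1.0, 3.0)                   # intake rate, L/day
        EF = rng.triangular(200.0, 365.0, 350.0)     # exposure, days/yr
        BW = rng.gauss(70.0, 10.0)                   # body weight, kg
        if BW <= 0.0:
            continue                                 # discard unphysical draws
        SF = 1e-6                                    # nominal slope factor
        risks.append(SF * C * IR * EF / BW)
    return risks

risks = mc_risk()
mean_risk = sum(risks) / len(risks)
median_risk = sorted(risks)[len(risks) // 2]
```

The mean exceeding the median is the signature of the right-skewed (lognormal-like) total-risk distribution that such sampling produces.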
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
DEFF Research Database (Denmark)
Simonsen, Maria; Schiøler, Henrik; Leth, John-Josef
2014-01-01
The Euler-Maruyama method is applied to a simple stochastic differential equation (SDE) with discontinuous drift. Convergence aspects are investigated in the case where the Euler-Maruyama method is simulated in dyadic points. A strong rate of convergence is presented for the numerical simulations...
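A minimal sketch of the setup, with an invented discontinuous drift and the time grid restricted to dyadic points t_k = k·T/2^n:

```python
import math
import random

random.seed(42)

def drift(x):
    # Discontinuous drift: a jump at x = 0, pulling the state toward zero
    return -1.0 if x > 0 else 1.0

sigma = 0.5        # constant diffusion coefficient
T = 1.0
n = 10             # dyadic refinement level
steps = 2 ** n     # grid of dyadic points t_k = k * T / 2**n
dt = T / steps

x = 1.0
for _ in range(steps):
    dW = random.gauss(0.0, math.sqrt(dt))   # Brownian increment
    x += drift(x) * dt + sigma * dW          # Euler-Maruyama step

print(x)
```

Strong convergence would be assessed by repeating this with the same Brownian increments at levels n and n+1 and averaging the pathwise differences over many samples.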
Asselineau, Charles-Alexis; Zapata, Jose; Pye, John
2015-06-01
A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified at moderate computational cost.
Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.
2018-01-01
We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. In particular, SORT redshifts (z_sort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurements of the two-point correlation function on scales ≳ 4 h⁻¹ Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed at exploiting the full potential of large extragalactic photometric surveys.
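The rank-preserving core of SORT (steps i-ii for a single sub-volume) can be sketched as below; the reference dN/dz is idealized here as draws from the true redshift distribution:

```python
import random

random.seed(3)

# "True" redshifts in one sub-volume and noisy (photometric-like) measurements
true_z = sorted(random.uniform(0.0, 1.0) for _ in range(500))
noisy_z = [z + random.gauss(0.0, 0.05) for z in true_z]

# Reference dN/dz for this sub-volume, idealized as an independent sample
reference_draws = sorted(random.uniform(0.0, 1.0) for _ in range(500))

# (i) match the reference dN/dz and (ii) preserve the rank order:
# replace the noisy values by the sorted reference draws, in noisy-rank order
order = sorted(range(len(noisy_z)), key=lambda i: noisy_z[i])
recovered = [0.0] * len(noisy_z)
for rank, i in enumerate(order):
    recovered[i] = reference_draws[rank]

err_noisy = sum(abs(n - t) for n, t in zip(noisy_z, true_z)) / len(true_z)
err_sort = sum(abs(r - t) for r, t in zip(recovered, true_z)) / len(true_z)
print(err_noisy, err_sort)
```

By construction the recovered values follow the reference dN/dz exactly while keeping the measurements' rank order; the full method repeats this over many overlapping sub-volumes to build a redshift PDF per object.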
Directory of Open Access Journals (Sweden)
Huiru Zhao
2016-01-01
Full Text Available As an efficient way to deal with global climate change and energy shortage problems, a strong, self-healing, compatible, economic and integrative smart grid is under construction in China, supported by large amounts of investment and advanced technologies. To promote the construction, operation and sustainable development of the Strong Smart Grid (SSG), a novel hybrid framework for evaluating the performance of SSG is proposed from the perspective of sustainability. Based on a literature review, experts' opinions and the technical characteristics of SSG, the evaluation model involves four sustainability criteria, defined as economy, society, environment and technology aspects, associated with 12 sub-criteria. Considering the ambiguity and vagueness of the subjective judgments on the sub-criteria, the fuzzy TOPSIS method is employed to evaluate the performance of SSG. In addition, unlike previous research, this paper adopts the stochastic Analytic Hierarchy Process (AHP) method to upgrade the traditional Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) by addressing the fuzzy and stochastic factors within the weights calculation. Finally, four regional smart grids in China are ranked by employing the proposed framework. The results show that the sub-criteria affiliated with environment receive much more attention from the expert group than those affiliated with economy. Moreover, the sensitivity analysis indicates that the ranking list remains stable no matter how the sub-criteria weights are changed, which verifies the robustness and effectiveness of the proposed model and evaluation results. This study provides a comprehensive and effective method for performance evaluation of SSG and also innovates the weights calculation for traditional TOPSIS.
Zhang, D.; Liao, Q.
2016-12-01
Bayesian inference provides a convenient framework for solving statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated into the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool for generating samples from the posterior PDF. However, since MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials by the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids and take into account the differing importance of the parameters, to cope with high random dimensionality of the stochastic space. Furthermore, in cases of low regularity, such as a discontinuous or unsmooth relation between the input parameters and the output responses, we introduce an additional transform process to improve the accuracy of the surrogate model. Once we build the surrogate system, we may evaluate the likelihood with very little computational cost. We analyze the convergence rate of the forward solution and the surrogate posterior by the Kullback-Leibler divergence, which quantifies the difference between probability distributions. Fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also test the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of
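A rough sketch of surrogate-accelerated MCMC follows; the forward model, prior bounds and noise level are invented, and the surrogate is a simple Lagrange interpolant on collocation nodes rather than the paper's nested sparse grids:

```python
import math
import random

random.seed(7)

def forward_expensive(m):
    # Stand-in for an expensive forward simulation
    return math.sin(m) + 0.5 * m

# Build a cheap polynomial surrogate at collocation nodes
nodes = [-2.0 + 4.0 * k / 8 for k in range(9)]
values = [forward_expensive(x) for x in nodes]

def surrogate(m):
    # Lagrange interpolation through the collocation nodes
    total = 0.0
    for i, xi in enumerate(nodes):
        w = 1.0
        for j, xj in enumerate(nodes):
            if i != j:
                w *= (m - xj) / (xi - xj)
        total += values[i] * w
    return total

# Metropolis-Hastings using the surrogate inside the likelihood
data, noise = forward_expensive(1.2), 0.1   # synthetic observation
def log_post(m):
    if not -2.0 <= m <= 2.0:                # uniform prior on [-2, 2]
        return -math.inf
    r = (data - surrogate(m)) / noise
    return -0.5 * r * r

m, lp = 0.0, log_post(0.0)
samples = []
for _ in range(5000):
    prop = m + random.gauss(0.0, 0.3)
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:
        m, lp = prop, lp_prop
    samples.append(m)

post_mean = sum(samples[1000:]) / len(samples[1000:])
print(post_mean)
```

The expensive model is called only 9 times (to build the surrogate) plus once for the synthetic data; the 5000 MCMC likelihood evaluations hit only the cheap interpolant.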
On Self-Adaptive Method for General Mixed Variational Inequalities
Directory of Open Access Journals (Sweden)
Abdellah Bnouhachem
2008-01-01
Full Text Available We suggest and analyze a new self-adaptive method for solving general mixed variational inequalities, which can be viewed as an improvement of the method of Noor (2003). Global convergence of the new method is proved under the same assumptions as Noor's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method. Since the general mixed variational inequalities include general variational inequalities, quasivariational inequalities, and nonlinear (implicit) complementarity problems as special cases, the results proved in this paper continue to hold for these problems.
A Modified Alternating Direction Method for Variational Inequality Problems
International Nuclear Information System (INIS)
Han, D.
2002-01-01
The alternating direction method is an attractive method for solving large-scale variational inequality problems whenever the subproblems can be solved efficiently. However, the subproblems are still variational inequality problems, which are as structurally difficult to solve as the original one. To overcome this disadvantage, in this paper we propose a new alternating direction method for solving a class of nonlinear monotone variational inequality problems. In each iteration the method just makes an orthogonal projection onto a simple set and some function evaluations. We report some preliminary computational results to illustrate the efficiency of the method.
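A projection-type iteration of this flavor can be sketched on a small monotone problem; the affine operator and box constraint are our illustration, and the paper's method adds further correction steps and step-size rules:

```python
def F(x):
    # Affine monotone operator F(x) = A x + b with A symmetric positive definite
    return [2.0 * x[0] + 1.0 * x[1] - 3.0,
            1.0 * x[0] + 2.0 * x[1] - 1.0]

def project(x, lo=0.0, hi=5.0):
    # Orthogonal projection onto the "simple" set [lo, hi]^2
    return [min(max(v, lo), hi) for v in x]

x = [0.0, 0.0]
tau = 0.2   # step size, small enough for the projected iteration to contract
for _ in range(200):
    x = project([xi - tau * fi for xi, fi in zip(x, F(x))])

print(x)   # converges to the VI solution (1.5, 0.0)
```

At the solution, the constraint y ≥ 0 is active and F₂(1.5, 0) = 0.5 ≥ 0, which is exactly the variational inequality condition on the box.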
Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems
Energy Technology Data Exchange (ETDEWEB)
Lee, Kookjin [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science; Carlberg, Kevin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Elman, Howard C. [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science and Inst. for Advanced Computer Studies
2018-03-29
Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
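The weighted least-squares idea can be sketched on a scalar toy problem (our construction, not the paper's systems): approximate the solution of a(ξ)u = 1 in a two-term polynomial basis by minimizing the weighted ℓ² residual over parameter samples.

```python
# Scalar parameterized system a(xi) * u = 1 with a(xi) = 2 + xi.
# Approximate u(xi) ≈ c0 + c1*xi by minimizing
#   sum_i w_i * (a(xi_i)*(c0 + c1*xi_i) - 1)^2.

xis = [-0.5 + k / 10.0 for k in range(11)]  # parameter samples
w = [1.0] * len(xis)                         # weighting function (uniform here)

# Normal equations; the residual's basis columns are a_i and a_i * xi_i
S00 = S01 = S11 = r0 = r1 = 0.0
for xi, wi in zip(xis, w):
    a = 2.0 + xi
    S00 += wi * a * a
    S01 += wi * a * a * xi
    S11 += wi * a * a * xi * xi
    r0 += wi * a
    r1 += wi * a * xi

det = S00 * S11 - S01 * S01
c0 = (S11 * r0 - S01 * r1) / det
c1 = (S00 * r1 - S01 * r0) / det

# Compare with the exact solution u(xi) = 1 / (2 + xi) at a test point
xi_test = 0.25
print(c0 + c1 * xi_test, 1.0 / (2.0 + xi_test))
```

Changing the weights `w` is the mechanism the abstract describes for targeting different weighted norms or goal-oriented seminorms.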
International Nuclear Information System (INIS)
Labadi, Karim; Saggadi, Samira; Amodeo, Lionel
2009-01-01
The dynamic behavior of a discrete event dynamic system can be significantly affected by uncertain changes in its decision parameters. Parameter sensitivity analysis is therefore a useful way to study the effects of these changes on system performance. In the past, sensitivity analysis approaches were frequently based on simulation models. In recent years, formal methods based on stochastic processes, including Markov processes, have been proposed in the literature. In this paper, we are interested in the parameter sensitivity analysis of discrete event dynamic systems, using stochastic Petri net models as a tool for modelling and performance evaluation. A sensitivity analysis approach based on stochastic Petri nets, called the PSA-SPN method, is proposed, with an application to a production line system.
Directory of Open Access Journals (Sweden)
Hoi Ying Wong
2013-01-01
Full Text Available Turbo warrants are liquidly traded financial derivative securities in over-the-counter and exchange markets in Asia and Europe. The structure of turbo warrants is similar to barrier options, but a lookback rebate will be paid if the barrier is crossed by the underlying asset price. Therefore, the turbo warrant price satisfies a partial differential equation (PDE with a boundary condition that depends on another boundary-value problem (BVP of PDE. Due to the highly complicated structure of turbo warrants, their valuation presents a challenging problem in the field of financial mathematics. This paper applies the homotopy analysis method to construct an analytic pricing formula for turbo warrants under stochastic volatility in a PDE framework.
Robust Topology Optimization Based on Stochastic Collocation Methods under Loading Uncertainties
Directory of Open Access Journals (Sweden)
Qinghai Zhao
2015-01-01
Full Text Available A robust topology optimization (RTO) approach with consideration of loading uncertainties is developed in this paper. The stochastic collocation method, combined with a full tensor product grid and the Smolyak sparse grid, transforms the robust formulation into a weighted multiple-loading deterministic problem at the collocation points. The proposed approach is amenable to implementation in existing commercial topology optimization software packages and is thus feasible for practical engineering problems. Numerical examples of two- and three-dimensional topology optimization problems are provided to demonstrate the proposed RTO approach and its applications. The optimal topologies obtained from deterministic and robust topology optimization designs under the tensor product grid and the sparse grid with different levels are compared with one another to investigate the pros and cons of the optimization algorithms on the final topologies, and an extensive Monte Carlo simulation is also performed to verify the proposed approach.
A Modified Computational Scheme for the Stochastic Perturbation Finite Element Method
Directory of Open Access Journals (Sweden)
Feng Wu
Full Text Available A modified computational scheme of the stochastic perturbation finite element method (SPFEM) is developed for structures with low-level uncertainties. The proposed scheme can provide second-order estimates of the mean and variance without differentiating the system matrices with respect to the random variables. When the proposed scheme is used, it involves only a finite number of analyses of deterministic systems. In the case of one random variable with a symmetric probability density function, the proposed computational scheme can even provide a result with fifth-order accuracy. Compared with the traditional computational scheme of SPFEM, the proposed scheme is more convenient for numerical implementation. Four numerical examples demonstrate that the proposed scheme can be used in linear or nonlinear structures with correlated or uncorrelated random variables.
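The paper's exact scheme is not reproduced here, but the underlying idea — second-order moment estimates from a few deterministic analyses, with no differentiation of system matrices — can be sketched with a three-point quadrature for a single Gaussian parameter; the response function is a toy stand-in for a finite element solve:

```python
import math

def response(alpha):
    # Toy deterministic "finite element" response of a random parameter alpha
    return 1.0 / (1.0 + alpha)

mu, sigma = 0.0, 0.1   # mean and standard deviation of the random parameter

# Three deterministic analyses at mu and mu ± sqrt(3)*sigma
# (Gauss-Hermite-type nodes and weights for a Gaussian parameter)
h = math.sqrt(3.0) * sigma
u0, up, um = response(mu), response(mu + h), response(mu - h)
w0, w1 = 2.0 / 3.0, 1.0 / 6.0

mean_est = w0 * u0 + w1 * (up + um)
var_est = (w0 * (u0 - mean_est) ** 2
           + w1 * ((up - mean_est) ** 2 + (um - mean_est) ** 2))

# Classical second-order perturbation reference:
# mean ≈ u(mu) + 0.5 * u''(mu) * sigma^2
upp = 2.0 / (1.0 + mu) ** 3   # analytic second derivative of the toy response
mean_taylor = response(mu) + 0.5 * upp * sigma ** 2
print(mean_est, var_est, mean_taylor)
```

The quadrature estimate matches the perturbation result to second order while using only repeated deterministic solves, which is the convenience the abstract highlights.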
Solution of stochastic media transport problems using a numerical quadrature-based method
International Nuclear Information System (INIS)
Pautz, S. D.; Franke, B. C.; Prinja, A. K.; Olson, A. J.
2013-01-01
We present a new conceptual framework for analyzing transport problems in random media. We decompose such problems into stratified subproblems according to the number of material pseudo-interfaces within realizations. For a given subproblem we assign pseudo-interface locations in each realization according to product quadrature rules, which allows us to deterministically generate a fixed number of realizations. Quadrature integration of the solutions of these realizations thus approximately solves each subproblem; the weighted superposition of solutions of the subproblems approximately solves the general stochastic media transport problem. We revisit some benchmark problems to determine the accuracy and efficiency of this approach in comparison to randomly generated realizations. We find that this method is very accurate and fast when the number of pseudo-interfaces in a problem is generally low, but that these advantages quickly degrade as the number of pseudo-interfaces increases. (authors)
A Stochastic and Holistic Method to Support Decision-Making in Early Building Design
DEFF Research Database (Denmark)
Østergaard, Torben; Maagaard, Steffen; Jensen, Rasmus Lund
2015-01-01
preferable input domains for the most influential parameters. To enable computationally fast simulations, we combined calculations of energy demand and thermal comfort based on ISO 13790 (CEN 2008) with a regression model for daylight factor. We constructed scoring functions for the three outputs and applied...... to collect the 10 % best performing simulations. From this collection, histograms were used to identify favourable and adverse input spans for a selection of the most sensitive parameters. Subsequently, two runs of each 3000 simulations were performed – one using the favourable input spans and the other...... using the adverse spans. The results showed that the distribution related to favourable input spans was shifted significantly towards higher holistic scores. The authors conclude that the use of a stochastic, holistic method can guide decision-making by identifying favourable input regions, and thereby...
Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-05-01
A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minima. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.
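The clustering step can be sketched with Lloyd's k-means on a scalar ensemble; the Gauss-Newton ensemble update itself is omitted, and the bimodal ensemble below is synthetic:

```python
import random

random.seed(5)

# An ensemble of scalar parameter estimates clustered around two modes
ensemble = ([random.gauss(1.0, 0.1) for _ in range(30)]
            + [random.gauss(4.0, 0.1) for _ in range(30)])
random.shuffle(ensemble)

def kmeans_1d(points, k, iters=20):
    # Lloyd's algorithm in one dimension
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

centers = kmeans_1d(ensemble, 2)
# Form sub-ensembles by assigning each member to its nearest center
sub_ensembles = [[p for p in ensemble
                  if min(range(2), key=lambda c: abs(p - centers[c])) == i]
                 for i in range(2)]
print(centers, [len(s) for s in sub_ensembles])
```

In the full algorithm each sub-ensemble then runs its own Gauss-Newton update, and clusters are periodically re-formed so that sub-ensembles converging to the same minimum merge.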
Multi-Index Monte Carlo and stochastic collocation methods for random PDEs
Nobile, Fabio; Haji Ali, Abdul Lateef; Tamellini, Lorenzo; Tempone, Raul
2016-01-09
In this talk we consider the problem of computing statistics of the solution of a partial differential equation with random data, where the random coefficient is parametrized by means of a finite or countable sequence of terms in a suitable expansion. We describe and analyze a Multi-Index Monte Carlo (MIMC) method and a Multi-Index Stochastic Collocation (MISC) method. The former is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Instead of using first-order differences as in MLMC, MIMC uses mixed differences to reduce the variance of the hierarchical differences dramatically. This in turn yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL⁻²). In the same vein, MISC is a deterministic combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. Provided enough mixed regularity, MISC can achieve better complexity than MIMC. Moreover, we show that in the optimal case the convergence rate of MISC is dictated only by the convergence of the deterministic solver applied to a one-dimensional spatial problem. We propose optimization procedures to select the most effective mixed differences to include in MIMC and MISC. Such optimization is a crucial step that allows us to make MIMC and MISC computationally effective. We finally show the effectiveness of MIMC and MISC with some computational tests, including tests with a countably infinite number of random parameters.
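To illustrate the multilevel idea that MIMC generalizes, here is a sketch of a plain MLMC estimator with first-order differences, applied to a driftless scalar SDE (our toy example, not a PDE with random coefficients):

```python
import math
import random

random.seed(11)

def euler_gbm(n_steps, zs):
    # Euler-Maruyama for dS = S dW (zero drift), S0 = 1, T = 1
    dt = 1.0 / n_steps
    s = 1.0
    for z in zs:
        s += s * z * math.sqrt(dt)
    return s

def level_difference(level):
    # Coupled fine/coarse estimate driven by the same Brownian increments
    nf = 2 ** level
    zs = [random.gauss(0.0, 1.0) for _ in range(nf)]
    fine = euler_gbm(nf, zs)
    if level == 0:
        return fine
    # Coarse path: pairwise-summed increments, sqrt(dt_f)*(z1+z2) = sqrt(dt_c)*zc
    zc = [(zs[2 * i] + zs[2 * i + 1]) / math.sqrt(2.0) for i in range(nf // 2)]
    coarse = euler_gbm(nf // 2, zc)
    return fine - coarse

# Telescoping multilevel estimator: many cheap coarse samples,
# few expensive fine-level correction samples
samples_per_level = [4000, 2000, 1000, 500]
estimate = 0.0
for level, n in enumerate(samples_per_level):
    estimate += sum(level_difference(level) for _ in range(n)) / n

print(estimate)   # E[S_1] = 1 exactly for this driftless SDE
```

MIMC replaces these first-order level differences with mixed differences across several discretization indices, which is what drives its improved complexity.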
A novel method for unsteady flow field segmentation based on stochastic similarity of direction
Omata, Noriyasu; Shirayama, Susumu
2018-04-01
Recent developments in fluid dynamics research have opened up the possibility for the detailed quantitative understanding of unsteady flow fields. However, the visualization techniques currently in use generally provide only qualitative insights. A method for dividing the flow field into physically relevant regions of interest can help researchers quantify unsteady fluid behaviors. Most methods at present compare the trajectories of virtual Lagrangian particles. The time-invariant features of an unsteady flow are also frequently of interest, but the Lagrangian specification only reveals time-variant features. To address these challenges, we propose a novel method for the time-invariant spatial segmentation of an unsteady flow field. This segmentation method does not require Lagrangian particle tracking but instead quantitatively compares the stochastic models of the direction of the flow at each observed point. The proposed method is validated with several clustering tests for 3D flows past a sphere. Results show that the proposed method reveals the time-invariant, physically relevant structures of an unsteady flow.
Rouz, Omid Farkhondeh; Ahmadian, Davood; Milev, Mariyan
2017-12-01
This paper establishes exponential mean square stability of two classes of theta Milstein methods, namely the split-step theta Milstein (SSTM) method and the stochastic theta Milstein (STM) method, for stochastic differential delay equations (SDDEs). We consider the SDDE problem under a coupled monotone condition on the drift and diffusion coefficients, as well as a necessary linear growth condition on the last term of the theta Milstein method. It is proved that the SSTM method with θ ∈ [0, ½] can recover the exponential mean square stability of the exact solution under some restrictive conditions on the stepsize, whereas for θ ∈ (½, 1] the stability results hold for any stepsize. Then, based on the stability results for the SSTM method, we examine the exponential mean square stability of the STM method and obtain stability results similar to those of the SSTM method. In the numerical section, the figures show the validity of our claims.
A stochastic collocation method for the second order wave equation with a discontinuous random speed
Motamed, Mohammad
2012-08-31
In this paper we propose and analyze a stochastic collocation method for solving the second order wave equation with a random wave speed and subjected to deterministic boundary and initial conditions. The speed is piecewise smooth in the physical space and depends on a finite number of random variables. The numerical scheme consists of a finite difference or finite element method in the physical space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space. This approach leads to the solution of uncoupled deterministic problems as in the Monte Carlo method. We consider both full and sparse tensor product spaces of orthogonal polynomials. We provide a rigorous convergence analysis and demonstrate different types of convergence of the probability error with respect to the number of collocation points for full and sparse tensor product spaces and under some regularity assumptions on the data. In particular, we show that, unlike in elliptic and parabolic problems, the solution to hyperbolic problems is not in general analytic with respect to the random variables. Therefore, the rate of convergence may only be algebraic. An exponential/fast rate of convergence is still possible for some quantities of interest and for the wave solution with particular types of data. We present numerical examples, which confirm the analysis and show that the collocation method is a valid alternative to the more traditional Monte Carlo method for this class of problems. © 2012 Springer-Verlag.
Directory of Open Access Journals (Sweden)
Alan Delgado de Oliveira
Full Text Available In this paper, we provide an empirical discussion of the differences among some scenario tree-generation approaches for stochastic programming. We consider the classical Monte Carlo sampling and Moment matching methods. Moreover, we test the Resampled average approximation, which is an adaptation of Monte Carlo sampling, and Monte Carlo with a naive allocation strategy as the benchmark. We test the empirical effects of each approach on the stability of the problem objective function and the initial portfolio allocation, using a multistage stochastic chance-constrained asset-liability management (ALM) model as the application. The Moment matching and Resampled average approximation are more stable than the other two strategies.
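The contrast between plain Monte Carlo sampling and moment matching can be sketched as follows (single asset, illustrative moments; real ALM trees match higher moments and correlations too):

```python
import random
import statistics

random.seed(2)

target_mean, target_std = 0.05, 0.2   # assumed annual return moments

def monte_carlo_scenarios(n):
    # Plain sampling: moments only match the targets in expectation
    return [random.gauss(target_mean, target_std) for _ in range(n)]

def moment_matched_scenarios(n):
    # Draw, then rescale so the sample mean/std hit the targets exactly
    raw = [random.gauss(0.0, 1.0) for _ in range(n)]
    m, s = statistics.mean(raw), statistics.pstdev(raw)
    return [target_mean + target_std * (r - m) / s for r in raw]

mc = monte_carlo_scenarios(50)
mm = moment_matched_scenarios(50)
print(statistics.mean(mc), statistics.mean(mm))
```

With small scenario counts the Monte Carlo sample moments fluctuate from tree to tree, while the matched scenarios pin them down, which is the stability effect the abstract reports.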
Directory of Open Access Journals (Sweden)
Jian Dang
2016-01-01
Full Text Available Because the slight fault signals in the early failure of mechanical systems are usually submerged in heavy background noise, it is infeasible to extract weak fault features via traditional vibration analysis. Stochastic resonance (SR), as a method of utilizing noise to amplify weak signals in nonlinear dynamical systems, can detect weak signals overwhelmed by the noise. However, analysis of the impact of noise intensity on the SR effect shows that detection results are dramatically limited by the noise intensity of the measured signals, especially for incipient fault features of mechanical systems in poor working environments. Therefore, this paper proposes a partly Duffing oscillator SR method to extract the fault features of mechanical systems. In this method, to locate the appearance of weak fault features and decrease the noise intensity, a permutation entropy index is constructed to select the measured signals for the input of the Duffing oscillator system. Then, by regulating the system parameters, a reasonable match between the selected signals and the Duffing oscillator model is achieved to produce an SR phenomenon and realize the fault diagnosis of the mechanical system. Experimental results demonstrate that the proposed method achieves a better effect on the fault diagnosis of mechanical systems.
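The permutation entropy index used to locate weak-feature segments can be sketched as follows; the ordinal pattern order and the test signals are our choices:

```python
import math
import random
from itertools import permutations

def permutation_entropy(signal, order=3):
    # Count ordinal patterns of `order` consecutive samples, then take
    # the normalized Shannon entropy of the pattern distribution
    counts = {p: 0 for p in permutations(range(order))}
    n = len(signal) - order + 1
    for i in range(n):
        window = signal[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    probs = [c / n for c in counts.values() if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))  # normalized to [0, 1]

random.seed(9)
noisy = [random.gauss(0.0, 1.0) for _ in range(2000)]
periodic = [math.sin(0.3 * i) for i in range(2000)]

pe_noise = permutation_entropy(noisy)     # near 1: strongly stochastic
pe_sine = permutation_entropy(periodic)   # lower: regular structure
print(pe_noise, pe_sine)
```

Segments whose entropy drops below a chosen threshold are candidates for containing a periodic fault component and would be routed to the Duffing oscillator stage.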
Hybrid Steepest-Descent Methods for Triple Hierarchical Variational Inequalities
Directory of Open Access Journals (Sweden)
L. C. Ceng
2015-01-01
Full Text Available We introduce and analyze a relaxed iterative algorithm by combining Korpelevich's extragradient method, the hybrid steepest-descent method, and Mann's iteration method. We prove that, under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of infinitely many nonexpansive mappings, the solution set of finitely many generalized mixed equilibrium problems (GMEPs), the solution set of finitely many variational inclusions, and the solution set of a general system of variational inequalities (GSVI), which is precisely the unique solution of a triple hierarchical variational inequality (THVI) in a real Hilbert space. In addition, we also consider the application of the proposed algorithm to solving a hierarchical variational inequality problem with constraints of finitely many GMEPs, finitely many variational inclusions, and the GSVI. The results obtained in this paper improve and extend the corresponding results announced by many others.
DEFF Research Database (Denmark)
Stentoft, Peter Alexander; Munk-Nielsen, Thomas; Mikkelsen, Peter Steen
2017-01-01
. The measurements may also be temporarily unavailable because of recalibration, communication faults or other errors. Here we present a method that handles such delay and missing observations. The model is based on zero order hold stochastic differential equations which use binary signals for influent flow...
Reddy, L Ram Gopal; Kuntamalla, Srinivas
2011-01-01
Heart rate variability analysis is fast gaining acceptance as a potential non-invasive means of autonomic nervous system assessment in research as well as clinical domains. In this study, a new nonlinear analysis method is used to detect the degree of nonlinearity and the stochastic nature of heart rate variability signals during two forms of meditation (Chi and Kundalini). The data, obtained from a widely used online public database (the MIT/BIH PhysioNet database), are used in this study. The method used is the delay vector variance (DVV) method, which is a unified method for detecting the presence of determinism and nonlinearity in a time series and is based upon the examination of the local predictability of a signal. From the results it is clear that there is a significant change in the nonlinearity and stochastic nature of the signal before and during the meditation (p value > 0.01). During Chi meditation there is an increase in the stochastic nature and a decrease in the nonlinear nature of the signal. There is a significant decrease in the degree of nonlinearity and stochastic nature during Kundalini meditation.
Tarim, S.A.; Ozen, U.; Dogru, M.K.; Rossi, R.
2011-01-01
We provide an efficient computational approach to solve the mixed integer programming (MIP) model developed by Tarim and Kingsman [8] for solving a stochastic lot-sizing problem with service level constraints under the static–dynamic uncertainty strategy. The effectiveness of the proposed method
A Stochastic Geometry Method for Pylon Reconstruction from Airborne LiDAR Data
Directory of Open Access Journals (Sweden)
Bo Guo
2016-03-01
Full Text Available Object detection and reconstruction from remotely sensed data are active research topics in the photogrammetric and remote sensing communities. Monitoring power engineering devices by detecting key objects is important for power safety. In this paper, we introduce a novel method for the reconstruction of self-supporting pylons, widely used in high-voltage power-line systems, from airborne LiDAR data. Our work constructs pylons from a library of 3D parametric models, which are represented using polyhedrons based on stochastic geometry. Firstly, laser points of pylons are extracted from the dataset using an automatic classification method. An energy function made up of two terms is then defined: the first term measures the adequacy of the objects with respect to the data, and the second term has the ability to favor or penalize certain configurations based on prior knowledge. Finally, estimation is undertaken by minimizing the energy using simulated annealing. We use a Markov Chain Monte Carlo sampler, leading to an optimal configuration of objects. The two main contributions of this paper are: (1) building a framework for automatic pylon reconstruction; and (2) efficient global optimization. The pylons can be precisely reconstructed through energy optimization. Experiments producing convincing results validated the proposed method using a dataset of complex structure.
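The energy-minimization step can be sketched with a generic simulated annealing loop over a two-term energy; the data-fit and prior terms below are hypothetical, not the pylon model, and the sampler is a plain Metropolis random walk rather than the paper's full MCMC over parametric model configurations:

```python
import math
import random

random.seed(4)

def energy(params):
    # Hypothetical two-term energy: data fit plus a prior penalty
    x, y = params
    data_term = (x - 3.0) ** 2 + (y + 1.0) ** 2   # adequacy w.r.t. the data
    prior_term = 0.1 * abs(x - y)                  # favors certain configurations
    return data_term + prior_term

state = [0.0, 0.0]
e = energy(state)
temperature = 1.0

for step in range(20000):
    # Propose a perturbed configuration
    prop = [state[0] + random.gauss(0, 0.1), state[1] + random.gauss(0, 0.1)]
    e_prop = energy(prop)
    # Metropolis acceptance at the current temperature
    if e_prop < e or random.random() < math.exp(-(e_prop - e) / temperature):
        state, e = prop, e_prop
    temperature *= 0.9995   # geometric cooling schedule

print(state, e)
```

At high temperature the sampler explores broadly; as the temperature decays it freezes into a low-energy configuration, which is how the global optimization in the paper avoids local minima.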
de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás
2017-12-01
The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we are neglecting noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of
DEFF Research Database (Denmark)
Li, Baohua; Zhang, Yuanyuan; Mohammadi, Seyed Abolghasem
2016-01-01
metabolites, suggesting that they may influence carbon and nitrogen partitioning, with one locus co-localizing with SUSIBA2 (WRKY78). Comparing QTLs for metabolomic and a variety of growth-related traits identified few overlaps. Interestingly, the rice population displayed fewer loci controlling stochastic...
Finite-Temperature Variational Monte Carlo Method for Strongly Correlated Electron Systems
Takai, Kensaku; Ido, Kota; Misawa, Takahiro; Yamaji, Youhei; Imada, Masatoshi
2016-03-01
A new computational method for finite-temperature properties of strongly correlated electrons is proposed by extending the variational Monte Carlo method originally developed for the ground state. The method is based on the path integral in the imaginary-time formulation, starting from the infinite-temperature state that is well approximated by a small number of certain random initial states. Lower temperatures are progressively reached by the imaginary-time evolution. The algorithm follows the framework of the quantum transfer matrix and finite-temperature Lanczos methods, but we extend them to treat much larger system sizes without the negative sign problem by optimizing the truncated Hilbert space on the basis of the time-dependent variational principle (TDVP). This optimization algorithm is equivalent to the stochastic reconfiguration (SR) method that has been frequently used for the ground state to optimally truncate the Hilbert space. The obtained finite-temperature states allow an interpretation based on the thermal pure quantum (TPQ) state instead of the conventional canonical-ensemble average. Our method is tested for the one- and two-dimensional Hubbard models and its accuracy and efficiency are demonstrated.
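The imaginary-time cooling at the heart of the method above can be illustrated in a toy setting. The sketch below is not the authors' variational algorithm (there is no truncated Hilbert space, TDVP, or SR optimization); it merely applies exact imaginary-time evolution to random initial states of a small random Hermitian matrix standing in for a many-body Hamiltonian, which is the thermal pure quantum (TPQ) idea the paper builds on:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64
# Random Hermitian matrix standing in for a correlated-electron Hamiltonian.
A = rng.standard_normal((dim, dim))
H = (A + A.T) / 2
evals, evecs = np.linalg.eigh(H)

def energy_at_beta(beta, n_states=20):
    """Average energy of imaginary-time-evolved random states (the TPQ idea)."""
    energies = []
    for _ in range(n_states):
        r = rng.standard_normal(dim)         # infinite-temperature start
        c = evecs.T @ r                      # expand in the eigenbasis
        w = c * np.exp(-0.5 * beta * evals)  # apply e^{-beta H / 2}
        w /= np.linalg.norm(w)
        energies.append(np.sum(w**2 * evals))
    return np.mean(energies)

e_inf = energy_at_beta(0.0)    # ~ spectrum average (infinite temperature)
e_cold = energy_at_beta(50.0)  # ~ ground-state energy evals[0]
```

Lowering the temperature is just increasing beta; the variational machinery of the paper exists precisely to make this evolution tractable when the Hilbert space is far too large to diagonalize.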
Risk-based transfer responses to climate change, simulated through autocorrelated stochastic methods
Kirsch, B.; Characklis, G. W.
2009-12-01
Maintaining municipal water supply reliability despite growing demands can be achieved through a variety of mechanisms, including supply strategies such as temporary transfers. However, much of the attention on transfers has been focused on market-based transfers in the western United States, largely ignoring the potential for transfers in the eastern U.S. The different legal frameworks of the eastern and western U.S. lead to characteristic differences between their respective transfers. Western transfers tend to be agricultural-to-urban and involve raw, untreated water, with the transfer often involving a simple change in the location and/or timing of withdrawals. Eastern transfers tend to be contractually established urban-to-urban transfers of treated water, thereby requiring the infrastructure to transfer water between utilities. Utilities require tools to evaluate transfer decision rules and the resulting expected future transfer behavior. Given the long-term planning horizons of utilities, potential changes in hydrologic patterns due to climate change must be considered. In response, this research develops a method for generating a stochastic time series that reproduces the historic autocorrelation and can be adapted to accommodate future climate scenarios. While analogous in operation to an autoregressive model, this method reproduces the seasonal autocorrelation structure, as opposed to assuming the strict stationarity produced by an autoregressive model. Such urban-to-urban transfers are designed to be rare, transient events used primarily during times of severe drought, and incorporating Monte Carlo techniques allows for the development of probability distributions of likely outcomes. This research evaluates a risk-based, urban-to-urban transfer agreement between three utilities in the Triangle region of North Carolina. Two utilities maintain their own surface water supplies in adjoining watersheds and look to obtain transfers via
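The distinction the abstract draws, reproducing a seasonal autocorrelation structure rather than the fixed one of a stationary autoregressive model, can be sketched with a periodic AR(1). Everything here (the monthly correlation values, the season labels) is hypothetical, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def seasonal_ar1(n_years, rho_by_month):
    """Monthly standardized series whose lag-1 autocorrelation varies by season,
    unlike a stationary AR(1) whose correlation structure is fixed."""
    x = np.zeros(12 * n_years)
    for t in range(1, len(x)):
        rho = rho_by_month[t % 12]
        # Scaling the innovation keeps unit variance for any seasonal rho.
        x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    return x

def lag1_corr(x, month):
    """Empirical lag-1 correlation restricted to one calendar month."""
    t = np.arange(1, len(x))
    idx = t[t % 12 == month]
    return np.corrcoef(x[idx], x[idx - 1])[0, 1]

# Hypothetical seasonal structure: strongly persistent months 6-8 (e.g. summer
# low flows), weakly persistent months 0-2.
rho = np.array([0.3] * 3 + [0.5] * 3 + [0.8] * 3 + [0.5] * 3)
x = seasonal_ar1(3000, rho)
```

A stationary AR(1) fitted to the whole series would report one pooled correlation; `lag1_corr` recovers the distinct seasonal values (about 0.8 for month 7, about 0.3 for month 1 here).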
Discrete variational methods and their application to electronic structures
International Nuclear Information System (INIS)
Ellis, D.E.
1987-01-01
Some general concepts concerning Discrete Variational methods are developed and applied to problems of determination of electronic spectra, charge densities and bonding of free molecules, surface-chemisorbed species and bulk solids. (M.W.O.) [pt
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo; Langer, Andreas; Schönlieb, Carola-Bibiane
2010-01-01
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation
Stochastic Drought Risk Analysis and Projection Methods For Thermoelectric Power Systems
Bekera, Behailu Belamo
Combined effects of socio-economic, environmental, technological and political factors impact fresh cooling water availability, which is among the most important elements of thermoelectric power plant site selection and evaluation criteria. With increased variability and changes in hydrologic statistical stationarity, one concern is the increased occurrence of extreme drought events that may be attributable to climatic changes. As hydrological systems are altered, operators of thermoelectric power plants need to ensure a reliable supply of water for cooling and generation requirements. The effects of climate change are expected to influence hydrological systems at multiple scales, possibly leading to reduced efficiency of thermoelectric power plants. This study models and analyzes drought characteristics from a thermoelectric-system operation and regulation perspective. A systematic approach to characterizing a stream environment in relation to extreme drought occurrence, duration and deficit volume is proposed and demonstrated. More specifically, the objective of this research is to propose stochastic water supply risk analysis and projection methods from a thermoelectric power system operation and management perspective. The study defines thermoelectric drought as a shortage of cooling water, due to stressed supply or water temperature beyond operable limits, lasting for an extended period of time and requiring power plants to reduce production or completely shut down. It presents a thermoelectric drought risk characterization framework that considers the heat content and water quantity facets of adequate water availability for uninterrupted operation of such plants and the safety of their surroundings. In addition, it outlines mechanisms to identify the rate of occurrence of such droughts and to stochastically quantify the subsequent potential losses to the sector. This mechanism is enabled through a model based on a compound Nonhomogeneous Poisson Process. This study also demonstrates how
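The compound nonhomogeneous Poisson process mentioned at the end can be sketched with Lewis-Shedler thinning: event times follow a time-varying intensity, and each event carries a random loss. The intensity trend and the lognormal loss model below are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(2)

def compound_nhpp_losses(T, lam, lam_max, loss_sampler, n_paths=2000):
    """Total loss over [0, T] for a compound NHPP; events generated by
    Lewis-Shedler thinning against a constant majorizing rate lam_max."""
    totals = np.empty(n_paths)
    for i in range(n_paths):
        t, total = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > T:
                break
            if rng.random() < lam(t) / lam_max:  # accept with prob lambda(t)/lam_max
                total += loss_sampler()
            # rejected candidate points are "thinned" away
        totals[i] = total
    return totals

def lam(t):
    # Hypothetical drought intensity rising over a 50-year horizon (climate trend).
    return 0.1 + 0.002 * t  # events per year

losses = compound_nhpp_losses(50.0, lam, lam_max=0.2,
                              loss_sampler=lambda: rng.lognormal(0.0, 0.5))
# Expected event count: integral of lambda over [0, 50] = 7.5 events on average.
```

Monte Carlo over many paths then yields the probability distribution of aggregate losses rather than a single point estimate, which is the use the study describes.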
Diffusion approximation-based simulation of stochastic ion channels: which method to use?
Directory of Open Access Journals (Sweden)
Danilo ePezo
2014-11-01
To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chain (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of high channel numbers. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties – such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Dangerfield et al., 2012; Linaro et al., 2011; Huang et al., 2013a; Orio and Soudry, 2012; Schmandt and Galán, 2012; Goldwyn et al., 2011; Güler, 2013), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: the original Hodgkin and Huxley model, a model with faster sodium channels, and a multi-compartmental model inspired by granular cells. We conclude that for low channel numbers (usually below 1000 per simulated compartment) one should use MC – which is both the most accurate and fastest method. For higher channel numbers, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modelling may be the best method for detailed multicompartment neuron models – in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels.
Diffusion approximation-based simulation of stochastic ion channels: which method to use?
Pezo, Danilo; Soudry, Daniel; Orio, Patricio
2014-01-01
To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired by granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
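Gillespie's exact method for a population of two-state channels, the MC baseline both versions of this review recommend at low channel numbers, can be sketched as follows. The rate constants and channel count are arbitrary illustrative values, not parameters from the compared models:

```python
import numpy as np

rng = np.random.default_rng(3)

def gillespie_channels(n_ch, alpha, beta, t_end):
    """Exact (Gillespie) simulation of n_ch independent two-state channels
    (closed -> open at rate alpha, open -> closed at rate beta).
    Returns the time-averaged open fraction."""
    n_open = 0
    t, open_time_weighted = 0.0, 0.0
    while t < t_end:
        r_open = alpha * (n_ch - n_open)   # total closed -> open propensity
        r_close = beta * n_open            # total open -> closed propensity
        rate = r_open + r_close
        dt = rng.exponential(1.0 / rate)   # time to next transition
        open_time_weighted += n_open * min(dt, t_end - t)
        t += dt
        if t < t_end:
            if rng.random() < r_open / rate:
                n_open += 1
            else:
                n_open -= 1
    return open_time_weighted / (t_end * n_ch)

# Stationary open probability should be alpha / (alpha + beta) = 0.25 here.
p = gillespie_channels(n_ch=200, alpha=1.0, beta=3.0, t_end=500.0)
```

The cost per unit time grows with the total propensity, i.e. with channel count, which is exactly why diffusion approximations become attractive for large populations.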
Stochastic methods of data modeling: application to the reconstruction of non-regular data
International Nuclear Information System (INIS)
Buslig, Leticia
2014-01-01
This research thesis addresses two issues or applications related to IRSN studies. The first one deals with the mapping of measurement data (the IRSN must regularly monitor the radioactivity level in France and, for this purpose, uses a network of sensors distributed over French territory). The objective is then to predict, by means of a reconstruction model which uses observations, maps which will be used to inform the population. The second application deals with the taking of uncertainties into account in complex computation codes (the IRSN must perform safety studies to assess the risks of loss of integrity of a nuclear reactor in case of hypothetical accidents, and for this purpose uses codes which simulate physical phenomena occurring within an installation). Some input parameters are not precisely known, and the author therefore tries to assess the impact of these uncertainties on simulated values. She notably aims at seeing whether variations of input parameters may push the system towards a behaviour very different from that obtained with parameters at their reference values, or even towards a state in which safety conditions are not met. The precise objective of this second part is then to build a reconstruction model which is not costly (in terms of computation time) and to perform simulations in relevant areas (strong gradient areas, threshold overrun areas, and so on). Two issues are then important: the choice of the approximation model and the construction of the experimental design. The model is based on a kriging-type stochastic approach, and an important part of the work addresses the development of new numerical techniques of experimental design. The first part proposes a generic criterion of adaptive design, and reports its analysis and implementation. In the second part, an alternative to error variance addition is developed. Methodological developments are tested on analytic functions, and then applied to the cases of measurement mapping and
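A kriging-type reconstruction of the sort described above can be sketched in a few lines: a zero-mean, Gaussian-covariance model whose prediction variance grows away from the observations, which is exactly the quantity that adaptive experimental-design criteria exploit. The covariance model and length scale below are illustrative choices, not those of the thesis:

```python
import numpy as np

def kriging_predict(x_obs, y_obs, x_new, length=1.0, nugget=1e-10):
    """Simple (zero-mean, unit-variance) kriging predictor with a Gaussian
    covariance model; returns the prediction mean and variance."""
    def cov(a, b):
        return np.exp(-((a[:, None] - b[None, :]) / length) ** 2)
    K = cov(x_obs, x_obs) + nugget * np.eye(len(x_obs))  # observation covariance
    k = cov(x_obs, x_new)                                # cross-covariance
    weights = np.linalg.solve(K, k)                      # kriging weights
    mean = weights.T @ y_obs
    var = 1.0 - np.einsum("ij,ij->j", k, weights)        # prediction variance
    return mean, var

x_obs = np.array([0.0, 1.0, 2.0, 3.0])
y_obs = np.sin(x_obs)
mean, var = kriging_predict(x_obs, y_obs, np.array([1.5, 10.0]))
# Between observations the variance is small; far away it returns to the
# prior variance of 1, flagging the region as a candidate for a new sample.
```

An adaptive design loop would repeatedly place the next simulation where a criterion built on this variance (possibly weighted toward strong-gradient or threshold regions) is largest.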
Application of stochastic and artificial intelligence methods for nuclear material identification
International Nuclear Information System (INIS)
Pozzi, S.; Segovia, F.J.
1999-01-01
Nuclear materials safeguard efforts necessitate the use of non-destructive methods to determine the attributes of fissile samples enclosed in special, non-accessible containers. To this end, a large variety of methods has been developed at the Oak Ridge National Laboratory (ORNL) and elsewhere. Usually, a given set of statistics of the stochastic neutron-photon coupled field, such as source-detector and detector-detector cross correlation functions and multiplicities, is measured over a range of known samples to develop calibration algorithms. In this manner, the attributes of unknown samples can be inferred from the calibration results. The organization of this paper is as follows: Section 2 describes the Monte Carlo simulations of source-detector cross correlation functions for a set of uranium metallic samples interrogated by the neutrons and photons from a 252Cf source. From this database, a set of features is extracted in Section 3. The use of neural networks (NN) and genetic programming to provide sample mass and enrichment values from the input sets of features is illustrated in Sections 4 and 5, respectively. Section 6 is a comparison of the results, while Section 7 is a brief summary of the work
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently on combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO is seldom used to invert gravity and magnetic data. On the basis of the continuous and multi-dimensional objective function for potential field data inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed-shape models and recovery of physical property distributions of complicated-shape models. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of a Gaussian mapping between the objective function value and the quantity of pheromone. The algorithm can analyze the search results in real time and improves the rate of convergence and precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
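A minimal sketch of the node-partition idea follows: each continuous model variable is discretized into nodes, ants pick nodes with pheromone-proportional probability, and the objective value is mapped onto the pheromone deposit through a Gaussian. This toy two-parameter misfit omits the directional touring and real-time analysis of the full NP-ACO:

```python
import numpy as np

rng = np.random.default_rng(4)

def aco_minimize(f, bounds, n_nodes=50, n_ants=30, n_iter=60, evap=0.1):
    """Node-partition ACO sketch: discretize each continuous variable into
    nodes, sample nodes with pheromone-proportional probability, reinforce
    good solutions, and evaporate pheromone each iteration."""
    grids = [np.linspace(lo, hi, n_nodes) for lo, hi in bounds]
    tau = [np.ones(n_nodes) for _ in bounds]  # pheromone per node, per variable
    best_x, best_f = None, np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            idx = [rng.choice(n_nodes, p=t / t.sum()) for t in tau]
            x = np.array([g[i] for g, i in zip(grids, idx)])
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
            deposit = np.exp(-fx)  # Gaussian-style mapping: objective -> pheromone
            for t, i in zip(tau, idx):
                t[i] += deposit
        for t in tau:
            t *= (1.0 - evap)      # evaporation
            t += 1e-12             # keep probabilities well defined
    return best_x, best_f

# Hypothetical two-parameter misfit with its minimum at (1, -2).
def misfit(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

x_best, f_best = aco_minimize(misfit, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

In an inversion setting the misfit would compare observed and predicted gravity or magnetic anomalies instead of this analytic function.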
International Nuclear Information System (INIS)
Deco, Gustavo; Marti, Daniel
2007-01-01
The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability
Deco, Gustavo; Martí, Daniel
2007-03-01
The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability.
Analysis and development of stochastic multigrid methods in lattice field theory
International Nuclear Information System (INIS)
Grabenstein, M.
1994-01-01
We study the relation between the dynamical critical behavior and the kinematics of stochastic multigrid algorithms. The scale dependence of acceptance rates for nonlocal Metropolis updates is analyzed with the help of an approximation formula. A quantitative study of the kinematics of multigrid algorithms in several interacting models is performed. We find that for a critical model with Hamiltonian H(Φ), absence of critical slowing down can only be expected if the expansion of H(Φ+ψ) in terms of the shift ψ contains no relevant term (mass term). The prediction of this rule was verified in a multigrid Monte Carlo simulation of the Sine Gordon model in two dimensions. Our analysis can serve as a guideline for the development of new algorithms: we propose a new multigrid method for nonabelian lattice gauge theory, the time slice blocking. For SU(2) gauge fields in two dimensions, critical slowing down is almost completely eliminated by this method, in accordance with the theoretical prediction. The generalization of the time slice blocking to SU(2) in four dimensions is investigated analytically and by numerical simulations. Compared to two dimensions, the local disorder in the four dimensional gauge field leads to kinematical problems. (orig.)
Boosting iterative stochastic ensemble method for nonlinear calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
A novel parameter estimation algorithm is proposed. The inverse problem is formulated as a sequential data integration problem in which Gaussian process regression (GPR) is used to integrate the prior knowledge (static data). The search space is further parameterized using a Karhunen-Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative stochastic ensemble method (ISEM). ISEM employs directional derivatives within a Gauss-Newton iteration for efficient gradient estimation. The resulting update equation relies on the inverse of the output covariance matrix, which is rank deficient. In the proposed algorithm we use an iterative regularization based on the ℓ2 Boosting algorithm. ℓ2 Boosting iteratively fits the residual, and the amount of regularization is controlled by the number of iterations. A termination criterion based on the Akaike information criterion (AIC) is utilized. This regularization method is very attractive in terms of performance and simplicity of implementation. The proposed algorithm combining ISEM and ℓ2 Boosting is evaluated on several nonlinear subsurface flow parameter estimation problems. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier B.V.
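The ℓ2 Boosting regularization described above, iteratively fitting the residual with the iteration count controlling the amount of regularization and an AIC-style criterion picking the stopping point, can be sketched on a plain linear-regression problem. This is not the subsurface-flow setting, and the AIC below uses the iteration count as a crude proxy for degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(5)

def l2_boost(X, y, step=0.1, max_iter=500):
    """L2 Boosting: at each step, shrink-fit the single predictor most
    correlated with the current residual; keep the AIC-best iterate."""
    n, p = X.shape
    coef = np.zeros(p)
    resid = y.copy()
    best_aic, best_coef = np.inf, coef.copy()
    for m in range(1, max_iter + 1):
        corr = X.T @ resid
        j = np.argmax(np.abs(corr))            # best single predictor
        b = corr[j] / (X[:, j] @ X[:, j])      # its least-squares coefficient
        coef[j] += step * b                    # shrunken update
        resid -= step * b * X[:, j]
        aic = n * np.log(resid @ resid / n) + 2 * m  # crude df ~ iteration count
        if aic < best_aic:
            best_aic, best_coef = aic, coef.copy()
    return best_coef

# Sparse truth: only 2 of 20 predictors matter.
X = rng.standard_normal((200, 20))
y = 3.0 * X[:, 0] - 2.0 * X[:, 4] + 0.1 * rng.standard_normal(200)
coef = l2_boost(X, y)
```

Running too many iterations would fit the noise; the AIC term is what supplies the early stopping that plays the role of regularization.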
Solution of problems in calculus of variations via He's variational iteration method
International Nuclear Information System (INIS)
Tatari, Mehdi; Dehghan, Mehdi
2007-01-01
In the modeling of a large class of problems in science and engineering, the minimization of a functional appears. Finding the solution of these problems requires solving the corresponding ordinary differential equations, which are generally nonlinear. In recent years He's variational iteration method has attracted a lot of attention from researchers for solving nonlinear problems. This method finds the solution of the problem without any discretization of the equation. Since this method gives a closed-form solution of the problem and avoids round-off errors, it can be considered an efficient method for solving various kinds of problems. In this research He's variational iteration method is employed for solving some problems in the calculus of variations. Some examples are presented to show the efficiency of the proposed technique
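For a concrete instance, consider the test equation u' + u = 0 with u(0) = 1. With the simplest choice of Lagrange multiplier, λ = -1, the variational iteration correction functional reduces to u_{n+1}(t) = u_n(t) - ∫₀ᵗ (u_n'(s) + u_n(s)) ds, and successive iterates reproduce the Taylor partial sums of the exact solution e^(-t). The sketch below is a discretized stand-in for the closed-form iteration the method actually produces:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]

def vim_iterate(u):
    """One variational-iteration correction for u' + u = 0 with lambda = -1:
    u_{n+1}(t) = u_n(t) - int_0^t (u_n'(s) + u_n(s)) ds."""
    residual = np.gradient(u, dt) + u          # how badly u fails the ODE
    # cumulative trapezoid rule for the integral from 0 to t
    integral = np.concatenate(
        ([0.0], np.cumsum((residual[1:] + residual[:-1]) / 2 * dt)))
    return u - integral

u = np.ones_like(t)            # initial guess u_0(t) = u(0) = 1
for _ in range(10):
    u = vim_iterate(u)
# Each iteration adds one Taylor term of exp(-t): 1, 1 - t, 1 - t + t^2/2, ...
err = np.max(np.abs(u - np.exp(-t)))
```

After ten corrections the iterate matches e^(-t) on [0, 1] to well below 1e-4, limited only by the grid rather than the method.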
The two-regime method for optimizing stochastic reaction-diffusion simulations
Flegg, M. B.; Chapman, S. J.; Erban, R.
2011-01-01
Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches
DEFF Research Database (Denmark)
Pang, Kar Mun; Jangi, Mehdi; Bai, X.-S.
The use of transported Probability Density Function (PDF) methods allows a single model to compute the autoignition, premixed mode and diffusion flame of diesel combustion under engine-like conditions [1,2]. The Lagrangian particle based transported PDF models have been validated across a wide range ... generated similar results. The principal motivation for ESF compared to Lagrangian particle based PDF is the relative ease of implementation of the former into Eulerian computational fluid dynamics (CFD) codes [5]. Several works have attempted to implement the ESF model for the simulations of diesel spray ... combustion under engine-like conditions. The current work aims to further evaluate the performance of the ESF model in this application, with an emphasis on examining the convergence of the number of stochastic fields, nsf. Five test conditions, covering both the conventional diesel combustion and low ...
International Nuclear Information System (INIS)
Marhavilas, P.K.; Koulouriotis, D.E.
2012-01-01
An individual method cannot build either a realistic forecasting model or a risk assessment process for worksites, and future perspectives should focus on a combined forecasting/estimation approach. The main purpose of this paper is to gain insight into a risk prediction and estimation methodological framework, using the combination of three different methods: the proportional quantitative-risk-assessment technique (PRAT), the time-series stochastic process (TSP), and the method of estimating societal risk (SRE) by F–N curves. In order to prove the usefulness of the combined usage of stochastic and quantitative risk assessment methods, an application to an electric power provider industry is presented, using empirical data.
A stochastic immersed boundary method for fluid-structure dynamics at microscopic length scales
International Nuclear Information System (INIS)
Atzberger, Paul J.; Kramer, Peter R.; Peskin, Charles S.
2007-01-01
In modeling many biological systems, it is important to take into account flexible structures which interact with a fluid. At the length scale of cells and cell organelles, thermal fluctuations of the aqueous environment become significant. In this work, it is shown how the immersed boundary method of [C.S. Peskin, The immersed boundary method, Acta Num. 11 (2002) 1-39.] for modeling flexible structures immersed in a fluid can be extended to include thermal fluctuations. A stochastic numerical method is proposed which deals with stiffness in the system of equations by handling systematically the statistical contributions of the fastest dynamics of the fluid and immersed structures over long time steps. An important feature of the numerical method is that time steps can be taken in which the degrees of freedom of the fluid are completely underresolved, partially resolved, or fully resolved while retaining a good level of accuracy. Error estimates in each of these regimes are given for the method. A number of theoretical and numerical checks are furthermore performed to assess its physical fidelity. For a conservative force, the method is found to simulate particles with the correct Boltzmann equilibrium statistics. It is shown in three dimensions that the diffusion of immersed particles simulated with the method has the correct scaling in the physical parameters. The method is also shown to reproduce a well-known hydrodynamic effect of a Brownian particle in which the velocity autocorrelation function exhibits an algebraic (τ^(-3/2)) decay for long times [B.J. Alder, T.E. Wainwright, Decay of the Velocity Autocorrelation Function, Phys. Rev. A 1(1) (1970) 18-21]. A few preliminary results are presented for more complex systems which demonstrate some potential application areas of the method. Specifically, we present simulations of osmotic effects of molecular dimers, worm-like chain polymer knots, and a basic model of a molecular motor immersed in fluid subject to a
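The fluctuation-dissipation balance such a method must respect, with thermal forcing scaled so that a conservative force yields the correct Boltzmann equilibrium statistics, can be checked in the simplest possible setting: an overdamped Langevin particle in a harmonic potential, where equilibrium requires Var(x) = kT/k. This is a one-dimensional sketch of the consistency test, not the immersed boundary scheme itself:

```python
import numpy as np

rng = np.random.default_rng(9)

def overdamped_langevin(k_spring, kT, gamma, dt, n_steps):
    """Euler-Maruyama for gamma dx = -k x dt + sqrt(2 kT gamma) dW.
    The noise amplitude is fixed by fluctuation-dissipation, the same
    balance a stochastic immersed boundary method must maintain."""
    x = np.zeros(n_steps)
    noise_amp = np.sqrt(2.0 * kT * dt / gamma)
    for i in range(1, n_steps):
        drift = -(k_spring / gamma) * x[i - 1] * dt
        x[i] = x[i - 1] + drift + noise_amp * rng.standard_normal()
    return x

x = overdamped_langevin(k_spring=2.0, kT=1.0, gamma=1.0, dt=1e-3,
                        n_steps=400_000)
# Boltzmann statistics for U(x) = k x^2 / 2 give Var(x) = kT / k = 0.5.
var = x[20_000:].var()   # discard the initial transient
```

If the noise amplitude were mis-scaled, the sampled variance would miss kT/k, which is exactly the kind of physical-fidelity check the paper performs for its fluid-structure scheme.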
International Nuclear Information System (INIS)
Gutierrez, Rafael M.; Useche, Gina M.; Buitrago, Elias
2007-01-01
We present a procedure developed to detect stochastic and deterministic information contained in empirical time series, useful for characterizing and modeling different aspects of complex phenomena represented by such data. This procedure is applied to a seismological time series to obtain new information for studying and understanding geological phenomena. We use concepts and methods from nonlinear dynamics and maximum entropy. This method allows an optimal analysis of the available information
A stochastic post-processing method for solar irradiance forecasts derived from NWPs models
Lara-Fanego, V.; Pozo-Vazquez, D.; Ruiz-Arias, J. A.; Santos-Alamillos, F. J.; Tovar-Pescador, J.
2010-09-01
Solar irradiance forecasting is an important area of research for the future of solar-based renewable energy systems. Numerical Weather Prediction (NWP) models have proved to be a valuable tool for solar irradiance forecasting with lead times up to a few days. Nevertheless, these models show low skill in forecasting solar irradiance under cloudy conditions. Additionally, climatic (seasonally averaged) aerosol loadings are usually assumed in these models, leading to considerable errors in Direct Normal Irradiance (DNI) forecasts during high aerosol load conditions. In this work we propose a post-processing method for the Global Horizontal Irradiance (GHI) and DNI forecasts derived from NWP models. Particularly, the method is based on Autoregressive Moving Average with External Explanatory Variables (ARMAX) stochastic models. These models are applied to the residuals of the NWP forecasts and use as external variables the measured cloud fraction and aerosol loading of the day previous to the forecast. The method is evaluated on a one-month set of three-day-ahead forecasts of GHI and DNI, obtained with the WRF mesoscale atmospheric model, for several locations in Andalusia (Southern Spain). The cloud fraction is derived from MSG satellite estimates and the aerosol loading from MODIS platform estimates. Both sources of information are readily available at the time of the forecast. Results showed a considerable improvement of the forecasting skill of the WRF model using the proposed post-processing method. Particularly, the relative improvement (in terms of RMSE) for the DNI during summer is about 20%. A similar value is obtained for the GHI during winter.
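The core of the post-processing step, fitting a stochastic model with exogenous inputs to the NWP residuals and adding the predicted residual back to the forecast, can be sketched with an ARX(1) fit on synthetic data. The residual process, the single exogenous variable, and all coefficients below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in for NWP irradiance errors: residuals that are
# autocorrelated and partly explained by an exogenous variable
# (playing the role of the previous day's cloud fraction).
n = 400
cloud = rng.uniform(0.0, 1.0, n)
resid = np.zeros(n)
for t in range(1, n):
    resid[t] = 0.6 * resid[t - 1] + 0.8 * cloud[t] + 0.2 * rng.standard_normal()

# ARX(1) fit: resid[t] ~ a * resid[t-1] + b * cloud[t] + c
A = np.column_stack([resid[:-1], cloud[1:], np.ones(n - 1)])
coefs, *_ = np.linalg.lstsq(A, resid[1:], rcond=None)
a, b, c = coefs

# Post-processed forecast = raw NWP forecast + predicted residual.
predicted_resid = a * resid[:-1] + b * cloud[1:] + c
rmse_raw = np.sqrt(np.mean(resid[1:] ** 2))
rmse_corrected = np.sqrt(np.mean((resid[1:] - predicted_resid) ** 2))
```

A full ARMAX model adds moving-average terms on top of this autoregressive-plus-exogenous structure, but the correction mechanism (shrinking the residual RMSE) is the same.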
International Nuclear Information System (INIS)
Matijevic, M.; Grgic, D.; Jecmenica, R.
2016-01-01
This paper presents a comparison of the Krsko Power Plant simplified Spent Fuel Pool (SFP) dose rates obtained using different computational shielding methodologies. The analysis was performed to estimate limiting gamma dose rates on wall mounted level instrumentation in case of a significant loss of cooling water. The SFP was represented with simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry air as bulk region materials. The pool is divided into the old and the new section, where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling times (60 days, 1 year and 5 years). The new section represents the FAs with a cooling time of 10 years. The time dependent fuel assembly isotopic composition was calculated using the ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in the Microshield calculation is based on the imported isotopic activities. The time dependent photon spectra with total source intensity from the Microshield multigroup point kernel calculations were then prepared for two hybrid deterministic-stochastic sequences. One is based on the SCALE/MAVRIC (Monaco and Denovo) methodology and the other uses the Monte Carlo code MCNP6.1.1b and the ADVANTG3.0.1 code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for the MC method without the use of an effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and a WWINP file) for the MCNP fixed-source calculation using continuous energy transport. ADVANTG employs the deterministic forward-adjoint transport solver Denovo, which implements the CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks. This was first
Deterministic flows of order-parameters in stochastic processes of quantum Monte Carlo method
International Nuclear Information System (INIS)
Inoue, Jun-ichi
2010-01-01
In terms of the stochastic process of a quantum-mechanical version of the Markov chain Monte Carlo method (MCMC), we analytically derive macroscopically deterministic flow equations of order parameters such as spontaneous magnetization in infinite-range (d(= ∞)-dimensional) quantum spin systems. By means of the Trotter decomposition, we consider the transition probability of Glauber-type dynamics of microscopic states for the corresponding (d + 1)-dimensional classical system. Under the static approximation, differential equations with respect to macroscopic order parameters are explicitly obtained from the master equation that describes the microscopic law. In the steady state, we show that the equations are identical to the saddle point equations for the equilibrium state of the same system. The equation for the dynamical Ising model is recovered in the classical limit. We also check the validity of the static approximation by making use of computer simulations for finite size systems and discuss several possible extensions of our approach to disordered spin systems for statistical-mechanical informatics. In particular, we use our procedure to evaluate the decoding process of Bayesian image restoration. With the assistance of the concept of dynamical replica theory (DRT), we derive the zero-temperature flow equation of the image restoration measure, which shows 'non-monotonic' behaviour in its time evolution.
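The classical limit mentioned above, in which the flow equation reduces to that of the dynamical mean-field Ising model, can be checked numerically: the deterministic flow dm/dt = -m + tanh(βm) should share its fixed point m = tanh(βm) with Glauber dynamics of the infinite-range (Curie-Weiss) model. The sketch below covers that classical limit only, with arbitrary β and system size:

```python
import numpy as np

rng = np.random.default_rng(7)

def flow_fixed_point(beta, m0=0.5, dt=0.01, steps=5000):
    """Integrate the deterministic flow dm/dt = -m + tanh(beta * m)."""
    m = m0
    for _ in range(steps):
        m += dt * (-m + np.tanh(beta * m))
    return m

def glauber_magnetization(beta, n=1000, sweeps=200):
    """Heat-bath (Glauber) dynamics for the infinite-range Ising model."""
    s = np.ones(n, dtype=int)        # start magnetized, matching m0 > 0
    total = n                        # running sum of spins
    for _ in range(sweeps * n):
        i = rng.integers(n)
        h = beta * total / n         # mean-field local field beta * m
        p_up = 1.0 / (1.0 + np.exp(-2.0 * h))
        new = 1 if rng.random() < p_up else -1
        total += new - s[i]
        s[i] = new
    return total / n

beta = 1.5
m_flow = flow_fixed_point(beta)      # solves m = tanh(beta * m), ~0.86 here
m_sim = glauber_magnetization(beta)
```

For β > 1 both converge to the same nonzero magnetization (up to finite-size fluctuations in the simulation), which is the steady-state/saddle-point agreement the abstract states.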
Energy Technology Data Exchange (ETDEWEB)
Guerrier, C. [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Holcman, D., E-mail: david.holcman@ens.fr [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Mathematical Institute, Oxford OX2 6GG, Newton Institute (United Kingdom)
2017-07-01
The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space to bind small targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved by rare events, representing Brownian particles finding small targets, characterized by a long-time distribution. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions at small hidden targets that trigger vesicular release.
Energy Technology Data Exchange (ETDEWEB)
Araujo, Leonardo Rodrigues de [Instituto Federal do Espirito Santo, Vitoria, ES (Brazil)], E-mail: leoaraujo@ifes.edu.br; Donatelli, Joao Luiz Marcon [Universidade Federal do Espirito Santo (UFES), Vitoria, ES (Brazil)], E-mail: joaoluiz@npd.ufes.br; Silva, Edmar Alino da Cruz [Instituto Tecnologico de Aeronautica (ITA/CTA), Sao Jose dos Campos, SP (Brazil); Azevedo, Joao Luiz F. [Instituto de Aeronautica e Espaco (CTA/IAE/ALA), Sao Jose dos Campos, SP (Brazil)
2010-07-01
Thermal systems are essential in facilities such as thermoelectric plants, cogeneration plants, refrigeration systems and air conditioning, among others, in which much of the energy consumed by humanity is processed. In a world with finite natural fuel sources and growing energy demand, issues related to thermal system design, such as cost estimation, design complexity, environmental protection and optimization, are becoming increasingly important. Hence the need to understand the mechanisms that degrade energy, to improve the use of energy sources, to reduce environmental impacts and also to reduce project, operation and maintenance costs. In recent years, procedures and techniques for the computational design of thermal systems have developed consistently. In this context, the fundamental objective of this study is a comparative performance analysis of the structural and parametric optimization of a cogeneration system using two stochastic methods: a genetic algorithm and simulated annealing. This research work uses a superstructure, modelled in a process simulator, SimTech's IPSEpro, in which the design options appropriate to the case studied are included. Accordingly, the optimal configuration of the cogeneration system is determined as a result of the optimization process, restricted to the configuration options included in the superstructure. The optimization routines are written in MS Excel Visual Basic, so as to couple seamlessly to the process simulator. At the end of the optimization process, the optimal system configuration, given the characteristics of each specific problem, should be defined. (author)
An illustration of new methods in machine condition monitoring, Part I: stochastic resonance
International Nuclear Information System (INIS)
Worden, K.; Antoniadou, I.; Marchesiello, S.; Mba, C.; Garibaldi, L.
2017-01-01
There have been many recent developments in the application of data-based methods to machine condition monitoring. A powerful methodology based on machine learning has emerged, where diagnostics are based on a two-step procedure: extraction of damage-sensitive features, followed by unsupervised learning (novelty detection) or supervised learning (classification). The objective of the current pair of papers is simply to illustrate one state-of-the-art procedure for each step, using synthetic data representative of reality in terms of size and complexity. The first paper in the pair will deal with feature extraction. Although some papers have appeared in the recent past considering stochastic resonance as a means of amplifying damage information in signals, they have largely relied on ad hoc specifications of the resonator used. In contrast, the current paper will adopt a principled optimisation-based approach to the resonator design. The paper will also show that a discrete dynamical system can provide all the benefits of a continuous system, but also provide a considerable speed-up in terms of simulation time in order to facilitate the optimisation approach. (paper)
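As a hedged illustration of the mechanism exploited above (the bistable map, forcing and noise levels below are assumptions for this sketch, not the paper's optimised resonator design): a weak, sub-threshold periodic signal cannot by itself push a bistable state over its potential barrier, but an appropriate noise level induces inter-well hopping correlated with the forcing.

```python
# Illustrative sketch of stochastic resonance in a discrete bistable map:
# x_{n+1} = x_n + h*(x_n - x_n^3) + amp*sin(omega*n) + sigma*xi_n,
# with wells at x = -1 and x = +1. The forcing amplitude is chosen below the
# quasi-static escape threshold, so without noise the state stays in one well.
import numpy as np

def bistable_map(sigma, n_steps=5000, h=0.1, amp=0.03, omega=0.05, seed=1):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = -1.0                           # start in the left well
    for n in range(n_steps - 1):
        noise = sigma * rng.standard_normal()
        x[n + 1] = x[n] + h * (x[n] - x[n] ** 3) + amp * np.sin(omega * n) + noise

    return x

quiet = bistable_map(sigma=0.0)           # sub-threshold forcing alone
noisy = bistable_map(sigma=0.5)           # noise-assisted dynamics

# Count inter-well transitions (sign changes of the state).
hops_quiet = int(np.sum(np.sign(quiet[:-1]) != np.sign(quiet[1:])))
hops_noisy = int(np.sum(np.sign(noisy[:-1]) != np.sign(noisy[1:])))
```

Without noise the state never crosses the barrier; with noise it hops between wells, which is the effect a tuned resonator amplifies to expose weak damage signatures.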
On stochastic error and computational efficiency of the Markov Chain Monte Carlo method
Li, Jun
2014-01-01
In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by an ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimate by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the number of cycles between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while the corresponding increase in variance is negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance to the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
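A minimal sketch of the trade-off described above, using an AR(1) chain as a stand-in for correlated MCMC output (the model and numbers are illustrative assumptions, not the paper's): thinning by a sampling interval k reduces the correlation between retained samples roughly as rho**k, so the variance per retained sample drops sharply while the chain is still cheap to extend.

```python
# Correlated "MCMC-like" samples from a stationary AR(1) process with
# lag-1 autocorrelation rho; thinning by interval k leaves samples whose
# lag-1 autocorrelation is approximately rho**k.
import numpy as np

rng = np.random.default_rng(0)
rho = 0.95                       # strong correlation, as with a small MCMC step
n = 200_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):            # x_t = rho*x_{t-1} + sqrt(1-rho^2)*eps_t
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * eps[t]

def lag1_autocorr(y):
    y = y - y.mean()
    return float(np.dot(y[:-1], y[1:]) / np.dot(y, y))

ac_full = lag1_autocorr(x)        # close to rho = 0.95
ac_thin = lag1_autocorr(x[::20])  # close to rho**20, roughly 0.36
```

The thinned set of samples carries far less redundancy, which is why a fixed CPU budget is better spent on a longer chain with a larger sampling interval than on storing every cycle.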
Use of stochastic methods for robust parameter extraction from impedance spectra
International Nuclear Information System (INIS)
Bueschel, Paul; Troeltzsch, Uwe; Kanoun, Olfa
2011-01-01
The fitting of impedance models to measured data is an essential step in impedance spectroscopy (IS). Due to often complicated nonlinear models, a large number of parameters, large search spaces and the presence of noise, automated determination of the unknown parameters is a challenging task. The stronger the nonlinear behavior of a model, the weaker the convergence of the corresponding regression, and the probability of becoming trapped in local minima during parameter extraction increases. For fast measurements or automatic measurement systems these problems become the limiting factors of use. We compared the usability of stochastic algorithms, namely evolutionary algorithms, simulated annealing and the particle filter, with the widely used tool LEVM for parameter extraction in IS. The comparison is based on a reference model by J.R. Macdonald and a battery model used with noisy measurement data. The results show different performances of the algorithms for these two problems, depending on the search space and the model used for optimization. The results obtained by the particle filter were the best for both models. This method delivers the most reliable result in both cases, even for the ill-posed battery model.
Negrea, M.; Petrisor, I.; Shalchi, A.
2017-11-01
We study the diffusion of magnetic field lines in turbulence with magnetic shear. In the first part of the series, we developed a quasi-linear theory for this type of scenario. In this article, we employ the so-called DeCorrelation Trajectory method in order to compute the diffusion coefficients of stochastic magnetic field lines. The magnetic field configuration used here contains fluctuating terms which are described by the dimensionless functions b_i(X, Y, Z), i = (x, y); they are assumed to be Gaussian processes and are perpendicular to the main magnetic field B_0. Furthermore, there is also a z-component of the magnetic field depending on the radial coordinate x (representing the gradient of the magnetic field) and a poloidal average component. We calculate the diffusion coefficients of magnetic field lines for different values of the magnetic Kubo number K, the dimensionless inhomogeneous magnetic parallel and perpendicular Kubo numbers K_B∥ and K_B⊥, as well as K_av = b_y^av K_B∥/K_B⊥.
Directory of Open Access Journals (Sweden)
Qinghai Zhao
2015-01-01
Full Text Available A mathematical framework is developed which integrates the reliability concept into topology optimization to solve reliability-based topology optimization (RBTO) problems under uncertainty. Two typical methodologies have been presented and implemented: the performance measure approach (PMA) and sequential optimization and reliability assessment (SORA). To enhance the computational efficiency of the reliability analysis, the stochastic response surface method (SRSM) is applied to approximate the true limit state function with respect to the normalized random variables, combined with a design of experiments generated by sparse grid design (SGD), which has proven to be an effective discretization technique. Uncertainties such as material properties and external loads are considered in three numerical examples: a cantilever beam, a loaded knee structure, and a heat conduction problem. Monte Carlo simulations are also performed to verify the accuracy of the failure probabilities computed by the proposed approach. Based on the results, it is demonstrated that the application of SRSM with SGD produces an efficient reliability analysis in RBTO, which enables a more reliable design than that obtained by deterministic topology optimization (DTO). It is also found that, at identical accuracy, SORA is superior to PMA in terms of computational efficiency.
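The Monte Carlo verification step mentioned above can be sketched as follows (the limit state and distributions are toy assumptions for illustration, not the paper's examples): for a limit state g = R - S with normally distributed resistance R and load S, the failure probability has a closed form against which the crude Monte Carlo estimate can be checked.

```python
# Crude Monte-Carlo estimate of a failure probability for g = R - S,
# with R ~ N(5, 1) and S ~ N(3, 1); failure occurs when g < 0.
import math
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
r = rng.normal(5.0, 1.0, n)          # resistance samples
s = rng.normal(3.0, 1.0, n)          # load samples
pf_mc = float(np.mean(r - s < 0.0))  # fraction of failed samples

# Exact reference: R - S ~ N(2, sqrt(2)), so Pf = Phi(-2/sqrt(2)).
z = -2.0 / math.sqrt(2.0)
pf_exact = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

The sampling error of the estimate scales as sqrt(pf*(1-pf)/n), which is why surrogate-based methods such as SRSM are attractive when each limit-state evaluation requires a full finite element analysis.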
International Nuclear Information System (INIS)
Guerrier, C.; Holcman, D.
2017-01-01
The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space to bind small targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved by rare events, representing Brownian particles finding small targets, characterized by a long-time distribution. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions at small hidden targets that trigger vesicular release.
Directory of Open Access Journals (Sweden)
Qian Guo
2013-01-01
Full Text Available A new splitting method designed for the numerical solution of stochastic delay Hopfield neural networks is introduced and analysed. Under Lipschitz and linear growth conditions, this split-step θ-Milstein method is proved to have strong convergence of order 1 in the mean-square sense, which is higher than that of the existing split-step θ-method. Further, the mean-square stability of the proposed method is investigated. Numerical experiments and comparisons with existing methods illustrate the computational efficiency of our method.
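A hedged sketch of the family of schemes named above, under simplifying assumptions (no delay term, scalar geometric Brownian motion dX = mu*X dt + sig*X dW, and one common split-step θ-Milstein formulation; the delayed Hopfield setting of the paper is not reproduced): the drift is handled by an implicit θ-stage, the diffusion by a Milstein update, and the strong error is measured against the known exact solution.

```python
# Split-step theta-Milstein for dX = mu*X dt + sig*X dW:
#   implicit drift stage  y = x + theta*h*mu*y   (linear drift => closed form),
#   then explicit drift remainder plus Milstein diffusion on y.
import numpy as np

mu, sig, x0, T, theta = 1.0, 0.5, 1.0, 1.0, 0.5

def sstm_strong_error(n_steps, n_paths=2000, seed=3):
    rng = np.random.default_rng(seed)
    h = T / n_steps
    dw = rng.standard_normal((n_paths, n_steps)) * np.sqrt(h)
    x = np.full(n_paths, x0)
    for k in range(n_steps):
        y = x / (1.0 - theta * h * mu)            # implicit drift stage
        x = (y + (1.0 - theta) * h * mu * y       # explicit drift remainder
             + sig * y * dw[:, k]                 # diffusion
             + 0.5 * sig**2 * y * (dw[:, k] ** 2 - h))   # Milstein correction
    w_T = dw.sum(axis=1)
    exact = x0 * np.exp((mu - 0.5 * sig**2) * T + sig * w_T)
    return float(np.sqrt(np.mean((x - exact) ** 2)))     # mean-square error

err_coarse = sstm_strong_error(n_steps=10)
err_fine = sstm_strong_error(n_steps=80)
```

Halving the step size repeatedly shrinks the strong error roughly in proportion, consistent with mean-square order 1; a plain split-step θ-method without the (dW**2 - h) correction would only achieve order 1/2.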
Nucleon matrix elements using the variational method in lattice QCD
International Nuclear Information System (INIS)
Dragos, J.; Kamleh, W.; Leinweber, D.B.; Zanotti, J.M.; Rakow, P.E.L.; Young, R.D.; Adelaide Univ., SA
2016-06-01
The extraction of hadron matrix elements in lattice QCD using the standard two- and three-point correlator functions demands careful attention to systematic uncertainties. One of the most commonly studied sources of systematic error is contamination from excited states. We apply the variational method to calculate the axial vector current g_A, the scalar current g_S and the quark momentum fraction ⟨x⟩ of the nucleon, and we compare the results to the more commonly used summation and two-exponential fit methods. The results demonstrate that the variational approach offers a more efficient and robust method for the determination of nucleon matrix elements.
Improved determination of hadron matrix elements using the variational method
International Nuclear Information System (INIS)
Dragos, J.; Kamleh, W.; Leinweber, D.B.; Zanotti, J.M.; Rakow, P.E.L.; Young, R.D.; Adelaide Univ.
2015-11-01
The extraction of hadron form factors in lattice QCD using the standard two- and three-point correlator functions has its limitations. One of the most commonly studied sources of systematic error is excited state contamination, which occurs when correlators are contaminated by contributions from higher-energy excitations. We apply the variational method to calculate the axial vector current g_A and compare the results to the more commonly used summation and two-exponential fit methods. The results demonstrate that the variational approach offers a more efficient and robust method for the determination of nucleon matrix elements.
Use of the Local Variation Methods for Nuclear Design Calculations
International Nuclear Information System (INIS)
Zhukov, A.I.
2006-01-01
A new method for solving the steady-state equations describing neutron diffusion is presented. The method is based on a variational principle for the steady-state diffusion equations and a direct search for the minimum of the corresponding functional. Benchmark calculations of fuel assembly power show ∼ 2% relative accuracy.
Variational Iteration Method for the Approximate Solution of Nonlinear ...
African Journals Online (AJOL)
In this study, we considered the numerical solution of the nonlinear Burgers equation using the Variational Iteration Method (VIM). We examine the convergence of the solutions of the Burgers equation with respect to the parameters x and t, on which the size of the errors depends. Numerical experimentation ...
Some Implicit Methods for Solving Harmonic Variational Inequalities
Directory of Open Access Journals (Sweden)
Muhammad Aslam Noor
2016-08-01
Full Text Available In this paper, we use the auxiliary principle technique to suggest an implicit method for solving harmonic variational inequalities. It is shown that the convergence of the proposed method requires only pseudo-monotonicity of the operator, which is a weaker condition than monotonicity.
Analysis of natural circulation BWR dynamics with stochastic and deterministic methods
International Nuclear Information System (INIS)
VanderHagen, T.H.; Van Dam, H.; Hoogenboom, J.E.; Kleiss, E.B.J.; Nissen, W.H.M.; Oosterkamp, W.J.
1986-01-01
The reactor kinetics, thermal hydraulics and total plant stability of a natural-convection-cooled BWR were studied using noise analysis and by evaluating process responses to control rod steps and to steam-flow control valve steps. An estimate of the fuel thermal time constant and an impression of the recirculation flow response to power variations were obtained. A sophisticated noise analysis method yielded further insight into the fluctuations of the coolant velocity.
Energy Technology Data Exchange (ETDEWEB)
Angstmann, C.N.; Donnelly, I.C. [School of Mathematics and Statistics, UNSW Australia, Sydney NSW 2052 (Australia); Henry, B.I., E-mail: B.Henry@unsw.edu.au [School of Mathematics and Statistics, UNSW Australia, Sydney NSW 2052 (Australia); Jacobs, B.A. [School of Computer Science and Applied Mathematics, University of the Witwatersrand, Johannesburg, Private Bag 3, Wits 2050 (South Africa); DST–NRF Centre of Excellence in Mathematical and Statistical Sciences (CoE-MaSS) (South Africa); Langlands, T.A.M. [Department of Mathematics and Computing, University of Southern Queensland, Toowoomba QLD 4350 (Australia); Nichols, J.A. [School of Mathematics and Statistics, UNSW Australia, Sydney NSW 2052 (Australia)
2016-02-15
We have introduced a new explicit numerical method, based on a discrete stochastic process, for solving a class of fractional partial differential equations that model reaction subdiffusion. The scheme is derived from the master equations for the evolution of the probability density of a sum of discrete time random walks. We show that the diffusion limit of the master equations recovers the fractional partial differential equation of interest. This limiting procedure guarantees the consistency of the numerical scheme. The positivity of the solution and stability results are simply obtained, provided that the underlying process is well posed. We also show that the method can be applied to standard reaction–diffusion equations. This work highlights the broader applicability of using discrete stochastic processes to provide numerical schemes for partial differential equations, including fractional partial differential equations.
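A brief sketch of the underlying idea in the standard (non-fractional) case, which the abstract notes is also covered (grid sizes and step counts below are illustrative assumptions): the master equation of an unbiased discrete-time random walk, p_j^{n+1} = (p_{j-1}^n + p_{j+1}^n)/2, is itself an explicit numerical scheme for the diffusion equation; probability is conserved by construction and the variance of the density grows linearly in time.

```python
# Master-equation update of an unbiased discrete-time random walk (DTRW):
# the probability mass splits equally to the two neighbouring cells each step.
import numpy as np

nx, n_steps, dx = 401, 100, 1.0
p = np.zeros(nx)
p[nx // 2] = 1.0                         # walker starts at the centre cell
x = dx * (np.arange(nx) - nx // 2)       # cell coordinates

for _ in range(n_steps):
    p = 0.5 * (np.roll(p, 1) + np.roll(p, -1))   # DTRW master-equation update

total = p.sum()                           # conserved probability mass
variance = float(np.sum(p * x**2))        # grows as n_steps * dx**2
```

The positivity and conservation seen here come for free because the scheme *is* the evolution of a probability density, which is exactly the property the paper exploits to build well-posed schemes for the fractional (subdiffusive) case.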
Energy Technology Data Exchange (ETDEWEB)
Jin, Shi, E-mail: sjin@wisc.edu [Department of Mathematics, University of Wisconsin-Madison, Madison, WI 53706 (United States); Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240 (China); Lu, Hanqing, E-mail: hanqing@math.wisc.edu [Department of Mathematics, University of Wisconsin-Madison, Madison, WI 53706 (United States)
2017-04-01
In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in the cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro-macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.
The variational nodal method: history and recent accomplishments
International Nuclear Information System (INIS)
Lewis, E.E.
2004-01-01
The variational nodal method combines spherical harmonics expansions in angle with hybrid finite element techniques in space to obtain multigroup transport response matrix algorithms applicable to both deep-penetration and reactor core physics problems. This survey briefly recounts the method's history and reviews its capabilities. The variational basis for the approach is presented, and two methods for obtaining discretized equations in the form of response matrices are detailed. The first is that contained in the widely used VARIANT code, while the second incorporates newly developed integral transport techniques into the variational nodal framework. The two approaches are combined with a finite sub-element formulation to treat heterogeneous nodes. Applications are presented both for a deep-penetration problem and for an OECD benchmark consisting of LWR MOX fuel assemblies. Ongoing work is discussed. (Author)
International Nuclear Information System (INIS)
Trindade, Bruno Machado
2011-02-01
This work presents the remodeling of SISCODES, the Computer System for Dosimetry of Neutrons and Photons in Radiotherapy Based on Stochastic Methods. The initial description and status, the proposed and completed alterations and expansions, and the latest development status of the system are shown. SISCODES is a system that allows 3D computational planning in radiation therapy, based on the MCNP5 nuclear particle transport code. SISCODES provides tools to build a patient's voxel model, to define a treatment plan, to simulate this plan, and to view the results of the simulation. SISCODES implements a database of tissues, sources and nuclear data, and an interface to access them. The graphical SISCODES modules were rewritten or implemented using the C++ language and the GTKmm library. Studies of dose deviations were performed by simulating a homogeneous water phantom as an analogue of the human body in radiotherapy planning and a heterogeneous voxel phantom, pointing out possible dose miscalculations. The Soft-RT and PROPLAN computer codes that interface with SISCODES are described. A set of voxel models created in SISCODES is presented with their respective sizes and resolutions. To demonstrate the use of SISCODES, examples of radiation therapy and dosimetry simulations for the prostate and heart are shown. Three protocols were simulated on the heart voxel model: an Sm-153 filled balloon and a P-32 stent, to prevent angioplasty restenosis; and Tl-201 myocardial perfusion, for imaging. Teletherapy with 6 MV and 15 MV beams was simulated for the prostate, as well as brachytherapy with I-125 seeds. The results of these simulations are shown as isodose curves and dose-volume histograms. SISCODES proves to be a useful tool for research into new radiation therapy treatments and, in the future, may also be useful in medical practice. At the end, future improvements are proposed. I hope this work can contribute to develop more effective radiation therapy
Babaee, Hessam; Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em
2017-09-01
We develop a new robust methodology for the stochastic Navier-Stokes equations based on the dynamically-orthogonal (DO) and bi-orthogonal (BO) methods [1-3]. Both approaches are variants of a generalized Karhunen-Loève (KL) expansion in which both the stochastic coefficients and the spatial basis evolve according to system dynamics, hence capturing the low-dimensional structure of the solution. The DO and BO formulations are mathematically equivalent [3], but they exhibit computationally complementary properties. Specifically, the BO formulation may fail due to crossing of the eigenvalues of the covariance matrix, while both BO and DO become unstable when the covariance matrix has a high condition number or zero eigenvalues. To this end, we combine the two methods into a robust hybrid framework and, in addition, we employ a pseudo-inverse technique to invert the covariance matrix. The robustness of the proposed method stems from addressing the following issues in the DO/BO formulation: (i) eigenvalue crossing: we resolve the issue of eigenvalue crossing in the BO formulation by switching to the DO near eigenvalue crossing using the equivalence theorem and switching back to BO when the distance between eigenvalues is larger than a threshold value; (ii) ill-conditioned covariance matrix: we utilize a pseudo-inverse strategy to invert the covariance matrix; (iii) adaptivity: we utilize an adaptive strategy to add/remove modes to resolve the covariance matrix up to a threshold value. In particular, we introduce a soft-threshold criterion to allow the system to adapt to the newly added/removed mode and therefore avoid repetitive and unnecessary mode addition/removal. When the total variance approaches zero, we show that the DO/BO formulation becomes equivalent to the evolution equation of the Optimally Time-Dependent modes [4]. We demonstrate the capability of the proposed methodology with several numerical examples, namely (i) stochastic Burgers equation: we
Energy Technology Data Exchange (ETDEWEB)
Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-08-15
The chord length sampling method in Monte Carlo simulations is used to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, from the local packing fraction results, the proposed method successfully resolves the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
International Nuclear Information System (INIS)
Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung; Noh, Jae Man
2015-01-01
The chord length sampling method in Monte Carlo simulations is used to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, from the local packing fraction results, the proposed method successfully resolves the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
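A minimal sketch of the basic sampling idea (parameters are illustrative; the boundary correction proposed in the paper is not reproduced): in an infinite Markovian binary medium of spheres with radius r and packing fraction f, a ray alternates between matrix segments, drawn from an exponential with mean 4*r*(1-f)/(3*f), and sphere chords, drawn from the chord-length density l/(2*r**2) on [0, 2r] (mean 4*r/3). The fraction of path length spent inside spheres should then reproduce the packing fraction.

```python
# Chord-length sampling along a ray through a Markovian binary stochastic
# medium: alternate exponential matrix segments and sphere chords.
import numpy as np

rng = np.random.default_rng(11)
r, f, n_segments = 1.0, 0.3, 100_000

mean_matrix_chord = 4.0 * r * (1.0 - f) / (3.0 * f)
matrix_lengths = rng.exponential(mean_matrix_chord, n_segments)

# Inverse-CDF sampling of the sphere chord density l/(2 r^2): l = 2r*sqrt(u).
sphere_chords = 2.0 * r * np.sqrt(rng.random(n_segments))

f_est = sphere_chords.sum() / (sphere_chords.sum() + matrix_lengths.sum())
```

Because only chord lengths are sampled, no explicit sphere positions need to be stored, which is the source of the method's efficiency; the boundary effect arises precisely because these infinite-medium chord statistics break down near the edges of a finite region.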
SLUG-STOCHASTICALLY LIGHTING UP GALAXIES. I. METHODS AND VALIDATING TESTS
Energy Technology Data Exchange (ETDEWEB)
Da Silva, Robert L.; Fumagalli, Michele; Krumholz, Mark [Department of Astronomy and Astrophysics, UCO/Lick Observatory, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States)
2012-02-01
The effects of stochasticity on the luminosities of stellar populations are an often neglected but crucial element for understanding populations in the low-mass or low star formation rate regime. To address this issue, we present SLUG, a new code to 'Stochastically Light Up Galaxies'. SLUG synthesizes stellar populations using a Monte Carlo technique that properly treats stochastic sampling, including the effects of clustering, the stellar initial mass function, star formation history, stellar evolution, and cluster disruption. The code produces many useful outputs, including (1) catalogs of star clusters and their properties, such as their stellar initial mass distributions and their photometric properties in a variety of filters, (2) two-dimensional histograms of color-magnitude diagrams of every star in the simulation, and (3) the photometric properties of field stars and the integrated photometry of the entire simulated galaxy. After presenting the SLUG algorithm in detail, we validate the code through comparisons with STARBURST99 in the well-sampled regime, and with observed photometry of Milky Way clusters. Finally, we demonstrate SLUG's capabilities by presenting outputs in the stochastic regime. SLUG is publicly distributed through the Web site http://sites.google.com/site/runslug/.
Path integral methods for the dynamics of stochastic and disordered systems
DEFF Research Database (Denmark)
Hertz, John A.; Roudi, Yasser; Sollich, Peter
2017-01-01
We review some of the techniques used to study the dynamics of disordered systems subject to both quenched and fast (thermal) noise. Starting from the Martin–Siggia–Rose/Janssen–De Dominicis–Peliti path integral formalism for a single variable stochastic dynamics, we provide a pedagogical survey...
Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks
Czech Academy of Sciences Publication Activity Database
Liao, S.; Vejchodský, Tomáš; Erban, R.
2015-01-01
Roč. 12, č. 108 (2015), s. 20150233 ISSN 1742-5689 EU Projects: European Commission(XE) 328008 - STOCHDETBIOMODEL Institutional support: RVO:67985840 Keywords : gene regulatory networks * stochastic modelling * parametric analysis Subject RIV: BA - General Mathematics Impact factor: 3.818, year: 2015 http://rsif.royalsocietypublishing.org/content/12/108/20150233
Jiang, George J.; Sluis, Pieter J. van der
1999-01-01
While the stochastic volatility (SV) generalization has been shown to improve the explanatory power over the Black-Scholes model, empirical implications of SV models on option pricing have not yet been adequately tested. The purpose of this paper is to first estimate a multivariate SV model using
Elsheikh, Ahmed H.
2013-06-01
We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem, as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built with the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
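A hedged sketch of the *linear* orthogonal matching pursuit underlying NOMP (the nonlinear, ensemble-gradient and Tikhonov parts of the paper are not reproduced; the dictionary and sparsity level are arbitrary illustrations): at each iteration the atom most correlated with the residual is added to the support, then the coefficients on the current support are refit by least squares.

```python
# Greedy support discovery by orthogonal matching pursuit on a random
# unit-norm dictionary, for a noiseless 3-sparse observation.
import numpy as np

rng = np.random.default_rng(5)
m, n_atoms, k = 60, 100, 3
A = rng.standard_normal((m, n_atoms))
A /= np.linalg.norm(A, axis=0)                  # unit-norm dictionary atoms

true_support = [7, 42, 90]
x_true = np.zeros(n_atoms)
x_true[true_support] = [1.5, -2.0, 1.0]
y = A @ x_true                                  # noiseless sparse observation

support, residual = [], y.copy()
for _ in range(k):
    idx = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
    support.append(idx)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # refit on support
    residual = y - A[:, support] @ coef

rel_residual = float(np.linalg.norm(residual) / np.linalg.norm(y))
```

The augmentation of the support across iterations mirrors NOMP's outer loop; the nonlinear variant replaces the explicit correlations A.T @ residual with stochastically approximated gradients of the misfit.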
Stochastic volatility and stochastic leverage
DEFF Research Database (Denmark)
Veraart, Almut; Veraart, Luitgard A. M.
This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic treatment of stochastic leverage and propose to model the stochastic leverage effect explicitly, e.g. by means of a linear transformation of a Jacobi process. Such models are both analytically tractable and allow for a direct economic interpretation. In particular, we propose two new stochastic volatility models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk-neutral world by focusing on implied volatilities generated by option prices derived from our new
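An illustrative sketch only (the parameter values, Euler discretisation and model coupling below are assumptions, not the paper's specification): the correlation between asset returns and volatility is itself a stochastic process, modelled here as a mean-reverting Jacobi-type diffusion drho = kappa*(rbar - rho)*dt + delta*sqrt(1 - rho**2)*dB, whose square-root diffusion coefficient keeps it inside [-1, 1]; it then drives the leverage in an Euler simulation of a Heston-type model.

```python
# Euler simulation of a Heston-type model whose asset/volatility correlation
# rho_t follows a Jacobi-type mean-reverting process bounded in [-1, 1].
import numpy as np

rng = np.random.default_rng(2)
n, dt = 5000, 1e-3
kappa_r, rbar, delta = 2.0, -0.5, 0.5          # Jacobi correlation dynamics
kappa_v, vbar, xi = 3.0, 0.04, 0.3             # Heston variance dynamics
mu = 0.05

rho, v, s = -0.5, 0.04, 100.0
rho_path = np.empty(n)
for i in range(n):
    z1, z2 = rng.standard_normal(2)
    dW = np.sqrt(dt) * z1                      # asset shock
    # volatility shock correlated with the asset shock via the *current* rho
    dB = np.sqrt(dt) * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
    zr = np.sqrt(dt) * rng.standard_normal()   # independent shock driving rho
    s += mu * s * dt + np.sqrt(max(v, 0.0)) * s * dW
    v += kappa_v * (vbar - v) * dt + xi * np.sqrt(max(v, 0.0)) * dB
    rho += kappa_r * (rbar - rho) * dt + delta * np.sqrt(max(1.0 - rho**2, 0.0)) * zr
    rho = min(1.0, max(-1.0, rho))             # clip Euler overshoot at the bounds
    rho_path[i] = rho
```

The clipping step is a discretisation artefact: the continuous Jacobi process stays in [-1, 1] by itself because its diffusion coefficient vanishes at the boundaries, but an Euler step can overshoot.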
Quilty, J.; Adamowski, J. F.
2015-12-01
Urban water supply systems are often stressed during seasonal outdoor water use, as climate-related water demands are variable in nature, making it difficult to optimize the operation of the water supply system. Urban water demand (UWD) forecasts that fail to include meteorological conditions as inputs to the forecast model may perform poorly, as they cannot account for the increase or decrease in demand related to meteorological conditions. Meteorological records stochastically simulated into the future can be used as inputs to data-driven UWD forecasts, generally resulting in improved forecast accuracy. This study aims to produce data-driven UWD forecasts for two Canadian water utilities (Montreal and Victoria) using machine learning methods, by first selecting historical UWD and meteorological records derived from a stochastic weather generator using nonlinear input variable selection. The nonlinear input variable selection methods considered in this work are derived from the concept of conditional mutual information, a nonlinear dependency measure based on (multivariate) probability density functions that accounts for relevancy, conditional relevancy, and redundancy in a potential set of input variables. The results of our study indicate that stochastic weather inputs can improve UWD forecast accuracy for the two sites considered in this work. Nonlinear input variable selection is suggested as a means to identify which meteorological conditions should be utilized in the forecast.
Dimitriadis, Panayiotis; Lazaros, Lappas; Daskalou, Olympia; Filippidou, Ariadni; Giannakou, Marianna; Gkova, Eleni; Ioannidis, Romanos; Polydera, Angeliki; Polymerou, Eleni; Psarrou, Eleftheria; Vyrini, Alexandra; Papalexiou, Simon; Koutsoyiannis, Demetris
2015-04-01
Several methods exist for estimating the statistical properties of wind speed, most of them deterministic or probabilistic and disregarding its long-term behaviour. Here, we focus on the stochastic nature of wind. After analyzing several historical time series at the area of interest (AoI) in Thessaly (Greece), we show that a Hurst-Kolmogorov (HK) behaviour is apparent; disregarding it could lead to unrealistic predictions and wind-load situations, with impacts on energy production and management. Moreover, we construct a stochastic model capable of preserving the HK behaviour and produce synthetic time series using a Monte-Carlo approach to estimate future wind loads in the AoI. Finally, we identify the appropriate types of wind turbines for the AoI (based on the IEC 61400 standards) and propose several industrial solutions. Acknowledgement: This research was conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
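Synthetic series preserving HK (long-range dependent) behaviour, of the kind used in such Monte-Carlo wind-load studies, can be generated as fractional Gaussian noise with the Davies-Harte circulant-embedding method. A sketch (the paper's own stochastic model is not specified here; H = 0.8 is illustrative):

```python
import numpy as np

def fgn(n, H, rng):
    """Unit-variance fractional Gaussian noise with Hurst exponent H,
    generated exactly via Davies-Harte circulant embedding."""
    k = np.arange(n)
    # exact fGn autocovariance: gamma(k) = 0.5*(|k-1|^2H - 2|k|^2H + |k+1|^2H)
    g = 0.5 * (np.abs(k - 1.0)**(2*H) - 2.0*np.abs(k)**(2*H) + (k + 1.0)**(2*H))
    row = np.concatenate([g, g[-2:0:-1]])      # first row of the circulant matrix
    lam = np.clip(np.fft.fft(row).real, 0.0, None)  # eigenvalues, clip round-off
    m = len(row)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    return (np.fft.fft(np.sqrt(lam) * z) / np.sqrt(m)).real[:n]

rng = np.random.default_rng(1)
x = fgn(8192, H=0.8, rng=rng)
rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]        # lag-1 autocorrelation
# theory for fGn: rho(1) = 2^(2H-1) - 1, about 0.52 for H = 0.8
```

The strong positive lag-1 correlation and slowly decaying aggregated variance are the HK signatures that short-memory models miss.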
Environmental vs Demographic Stochasticity in Population Growth
Braumann, C. A.
2010-01-01
Compares the effect on population growth of environmental stochasticity (random environmental variations described by stochastic differential equations) with demographic stochasticity (random variations in births and deaths described by branching processes and birth-and-death processes), in both the density-independent and the density-dependent cases.
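The qualitative contrast can be seen in a toy simulation (illustrative rates, not Braumann's models): the per-step variability of the growth rate under demographic noise shrinks roughly like 1/sqrt(N), while under environmental noise it is independent of population size.

```python
import numpy as np

def growth_sd(n0, reps, rng, b=0.6, d=0.5, sigma=0.1, dt=0.1):
    """Std of the one-step log-growth rate under demographic noise
    (Poisson births and deaths) and environmental noise (random growth rate)."""
    n = np.full(reps, float(n0))
    births = rng.poisson(b * n * dt)
    deaths = rng.poisson(d * n * dt)
    dem = np.log((n + births - deaths) / n)    # demographic one-step growth
    # environmental: geometric-Brownian increment of the log population
    env = (b - d - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(reps)
    return dem.std(), env.std()

rng = np.random.default_rng(2)
dem_small, env_small = growth_sd(100, 20000, rng)
dem_big, env_big = growth_sd(10000, 20000, rng)
# demographic variability drops roughly tenfold from N=100 to N=10000;
# environmental variability stays near sigma*sqrt(dt)
```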
DEFF Research Database (Denmark)
Wiuf, Carsten; Pallesen, Jonatan; Foldager, Leslie
2016-01-01
variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method...... and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values, without relying on a priori criteria, are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer...
Application of New Variational Homotopy Perturbation Method For ...
African Journals Online (AJOL)
This paper discusses the application of the New Variational Homotopy Perturbation Method (NVHPM) for solving integro-differential equations. The advantage of the new scheme is that it does not require discretization, linearization or any restrictive assumption of any form before it is applied. Several test problems are ...
Discrete gradient methods for solving variational image regularisation models
International Nuclear Information System (INIS)
Grimm, V; McLachlan, Robert I; McLaren, David I; Quispel, G R W; Schönlieb, C-B
2017-01-01
Discrete gradient methods are well-known methods of geometric numerical integration, which preserve the dissipation of gradient systems. In this paper we show that this property of discrete gradient methods can be interesting in the context of variational models for image processing, that is where the processed image is computed as a minimiser of an energy functional. Numerical schemes for computing minimisers of such energies are desired to inherit the dissipative property of the gradient system associated to the energy and consequently guarantee a monotonic decrease of the energy along iterations, avoiding situations in which more computational work might lead to less optimal solutions. Under appropriate smoothness assumptions on the energy functional we prove that discrete gradient methods guarantee a monotonic decrease of the energy towards stationary states, and we promote their use in image processing by exhibiting experiments with convex and non-convex variational models for image deblurring, denoising, and inpainting. (paper)
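For a quadratic energy, the Gonzalez (midpoint) discrete gradient coincides with the exact gradient at the midpoint, and the resulting implicit scheme decreases the energy at every step regardless of step size. A minimal 1-D Tikhonov-denoising sketch of this guaranteed monotone decrease (illustrative energy, not one of the paper's imaging models):

```python
import numpy as np

# E(u) = 0.5*||u - f||^2 + 0.5*lam*||D u||^2   (Tikhonov denoising energy)
n, lam, tau = 50, 5.0, 0.5
rng = np.random.default_rng(3)
f = np.sin(np.linspace(0.0, 3.0 * np.pi, n)) + 0.3 * rng.standard_normal(n)
D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)     # forward-difference matrix
H = np.eye(n) + lam * D.T @ D                  # Hessian of E; grad E(u) = H u - f

def energy(u):
    return 0.5 * np.sum((u - f)**2) + 0.5 * lam * np.sum((D @ u)**2)

# For quadratic E the Gonzalez discrete gradient is grad E at the midpoint,
# so the step (u' - u)/tau = -grad E((u + u')/2) is one linear solve:
A = np.eye(n) + 0.5 * tau * H
B = np.eye(n) - 0.5 * tau * H
u = np.zeros(n)
energies = [energy(u)]
for _ in range(30):
    u = np.linalg.solve(A, B @ u + tau * f)
    energies.append(energy(u))
```

The identity E(u') - E(u) = grad E(midpoint) . (u' - u) holds exactly for quadratics, so each step changes the energy by -||u' - u||^2 / tau, i.e. the decrease is monotone by construction.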
Variational iteration method for solving coupled-KdV equations
International Nuclear Information System (INIS)
Assas, Laila M.B.
2008-01-01
In this paper, He's variational iteration method is applied to solve the non-linear coupled-KdV equations. The method is based on the use of Lagrange multipliers for the identification of the optimal value of a parameter in a functional. This technique provides a sequence of functions which converges to the exact solution of the coupled-KdV equations. This procedure is a powerful tool for solving coupled-KdV equations.
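The mechanics are easy to reproduce on a scalar test problem. For u' + u = 0 with u(0) = 1, the optimal Lagrange multiplier is lambda = -1, and the correction functional generates the Taylor partial sums of the exact solution exp(-t). A sketch of this toy VIM iteration on polynomial coefficients (not the coupled-KdV computation itself):

```python
import numpy as np

def vim_step(c):
    """One VIM correction u_{n+1}(t) = u_n(t) - Integral_0^t (u_n' + u_n) ds,
    applied to polynomial coefficients c (c[k] is the coefficient of t^k)."""
    deriv = np.array([k * c[k] for k in range(1, len(c))] + [0.0])
    g = deriv + c                                    # u_n' + u_n
    integ = np.concatenate([[0.0], g / np.arange(1.0, len(g) + 1.0)])
    out = np.zeros(len(integ))
    out[:len(c)] += c
    out -= integ
    return out

u = np.array([1.0])                # u0(t) = 1 satisfies the initial condition
for _ in range(8):
    u = vim_step(u)
val = np.polyval(u[::-1], 1.0)     # 8th iterate evaluated at t = 1
# iterates are the Taylor partial sums of exp(-t), so val approaches exp(-1)
```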
Molecular photoionization using the complex Kohn variational method
International Nuclear Information System (INIS)
Lynch, D.L.; Schneider, B.I.
1992-01-01
We have applied the complex Kohn variational method to the study of molecular-photoionization processes. This requires electron-ion scattering calculations enforcing incoming boundary conditions. The sensitivity of these results to the choice of the cutoff function in the Kohn method has been studied and we have demonstrated that a simple matching of the irregular function to a linear combination of regular functions produces accurate scattering phase shifts
Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.
2017-11-01
This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an alternative MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
Moments of inertia for solids of revolution and variational methods
International Nuclear Information System (INIS)
Diaz, Rodolfo A; Herrera, William J; Martinez, R
2006-01-01
We present some formulae for the moments of inertia of homogeneous solids of revolution in terms of the functions that generate the solids. The development of these expressions exploits the cylindrical symmetry of these objects and avoids the explicit use of multiple integration, providing an easy and pedagogical approach. The explicit use of the functions that generate the solid gives the possibility of writing the moment of inertia as a functional, which in turn allows us to utilize the calculus of variations to obtain new insight into some properties of this fundamental quantity. In particular, minimization of moments of inertia under certain restrictions is possible by using variational methods
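As an illustration of such a formula, rotating y = f(x), a <= x <= b, about the x axis gives the axial moment of inertia I = (pi*rho/2) * integral of f(x)^4 dx, reducing the triple integral to a single one. A numerical check against the classical sphere result (a sketch consistent with standard results, not the paper's notation):

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule (portable across NumPy versions)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def axial_inertia(f, a, b, rho=1.0, n=200001):
    """I about the symmetry axis of the solid generated by rotating
    y = f(x), a <= x <= b, around the x axis: I = (pi*rho/2) * int f^4 dx."""
    x = np.linspace(a, b, n)
    return 0.5 * np.pi * rho * trapezoid(f(x)**4, x)

def mass(f, a, b, rho=1.0, n=200001):
    """m = pi*rho * int f(x)^2 dx for the same solid."""
    x = np.linspace(a, b, n)
    return np.pi * rho * trapezoid(f(x)**2, x)

R = 1.3
sphere = lambda x: np.sqrt(np.clip(R**2 - x**2, 0.0, None))
ratio = axial_inertia(sphere, -R, R) / (mass(sphere, -R, R) * R**2)
# classical result for a solid sphere: I = (2/5) m R^2, so ratio should be 0.4
```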
Elastic scattering of positronium: Application of the confined variational method
Zhang, Junyi
2012-08-01
We demonstrate for the first time that the phase shift in elastic positronium-atom scattering can be precisely determined by the confined variational method, in spite of the fact that the Hamiltonian includes an unphysical confining potential acting on the center of mass of the positron and one of the atomic electrons. As an example, we study the S-wave elastic scattering for the positronium-hydrogen scattering system, where the existing 4% discrepancy between the Kohn variational calculation and the R-matrix calculation is resolved. © Copyright EPLA, 2012.
Elastic scattering of positronium: Application of the confined variational method
Zhang, Junyi; Yan, Zong-Chao; Schwingenschlögl, Udo
2012-01-01
We demonstrate for the first time that the phase shift in elastic positronium-atom scattering can be precisely determined by the confined variational method, in spite of the fact that the Hamiltonian includes an unphysical confining potential acting on the center of mass of the positron and one of the atomic electrons. As an example, we study the S-wave elastic scattering for the positronium-hydrogen scattering system, where the existing 4% discrepancy between the Kohn variational calculation and the R-matrix calculation is resolved. © Copyright EPLA, 2012.
Minimizers with discontinuous velocities for the electromagnetic variational method
International Nuclear Information System (INIS)
De Luca, Jayme
2010-01-01
The electromagnetic two-body problem has neutral differential delay equations of motion that, for generic boundary data, can have solutions with discontinuous derivatives. If one wants to use these neutral differential delay equations with arbitrary boundary data, solutions with discontinuous derivatives must be expected and allowed. Surprisingly, Wheeler-Feynman electrodynamics has a boundary value variational method for which minimizer trajectories with discontinuous derivatives are also expected, as we show here. The variational method defines continuous trajectories with piecewise defined velocities and accelerations, and electromagnetic fields defined by the Euler-Lagrange equations on trajectory points. Here we use the piecewise defined minimizers with the Liénard-Wiechert formulas to define generalized electromagnetic fields almost everywhere (but on sets of points of zero measure where the advanced/retarded velocities and/or accelerations are discontinuous). Along with this generalization we formulate the generalized absorber hypothesis that the far fields vanish asymptotically almost everywhere and show that localized orbits with far fields vanishing almost everywhere must have discontinuous velocities on sewing chains of breaking points. We give the general solution for localized orbits with vanishing far fields by solving a (linear) neutral differential delay equation for these far fields. We discuss the physics of orbits with discontinuous derivatives, stressing the differences from the variational methods of classical mechanics and the existence of a spinorial four-current associated with the generalized variational electrodynamics.
The variational nodal method: some history and recent activity
International Nuclear Information System (INIS)
Lewis, E.E.; Smith, M.A.; Palmiotti, G.
2005-01-01
The variational nodal method combines spherical harmonics expansions in angle with hybrid finite element techniques in space to obtain multigroup transport response matrix algorithms applicable to a wide variety of reactor physics problems. This survey briefly recounts the method's history and reviews its capabilities. Two methods for obtaining discretized equations in the form of response matrices are compared. The first is that contained in the widely used VARIANT code, while the second incorporates more recently developed integral transport techniques into the variational nodal framework. The two approaches are combined with a finite sub-element formulation to treat heterogeneous nodes. Results are presented for application to a deep penetration problem and to an OECD benchmark consisting of LWR MOX fuel assemblies. Ongoing work is discussed. (authors)
Energy Technology Data Exchange (ETDEWEB)
Syben, Olaf; Dehery, Francois-Regis [ProCom GmbH, Aachen (Germany)
2012-07-01
Uncertainties must be considered for successful portfolio management in the energy markets. This concerns not only the increasingly complex structural interdependences between raw materials, technical assets and economic considerations, but also the shrinking temporal degrees of freedom, which demand ever faster decision processes. Stochastic methods make it possible to assess uncertainties even in complex planning problems. Integrating the mathematical methods into an efficient IT environment ensures that a verifiable decision basis is available at any time. (orig./AKB)
A game-theoretic method for cross-layer stochastic resilient control design in CPS
Shen, Jiajun; Feng, Dongqin
2018-03-01
In this paper, the cross-layer security problem of cyber-physical systems (CPS) is investigated from the game-theoretic perspective. The physical dynamics of the plant are captured by a stochastic differential game with cyber-physical influence taken into account. The sufficient and necessary condition for the existence of state-feedback equilibrium strategies is given. The attack-defence cyber interactions are formulated by a Stackelberg game intertwined with the stochastic differential game in the physical layer. The condition under which the Stackelberg equilibrium is unique and the corresponding analytical solutions are both provided. An algorithm is proposed for obtaining a hierarchical security strategy by solving the coupled games, which ensures the operational normalcy and cyber security of the CPS subject to uncertain disturbances and unexpected cyberattacks. Simulation results are given to show the effectiveness and performance of the proposed algorithm.
Optimization of advanced gas-cooled reactor fuel performance by a stochastic method
International Nuclear Information System (INIS)
Parks, G.T.
1987-01-01
A brief description is presented of a model representing the in-core behaviour of a single advanced gas-cooled reactor fuel channel, developed specifically for optimization studies. The performances of the only suitable Numerical Algorithms Group (NAG) library package and a Metropolis algorithm routine on this problem are discussed and contrasted. It is concluded that, for the problem in question, the stochastic Metropolis algorithm has distinct advantages over the deterministic NAG routine. (author)
Generalization of uncertainty relation for quantum and stochastic systems
Koide, T.; Kodama, T.
2018-06-01
The generalized uncertainty relation applicable to quantum and stochastic systems is derived within the stochastic variational method. This relation not only reproduces the well-known inequality in quantum mechanics but also is applicable to the Gross-Pitaevskii equation and the Navier-Stokes-Fourier equation, showing that the finite minimum uncertainty between the position and the momentum is not an inherent property of quantum mechanics but a common feature of stochastic systems. We further discuss the possible implication of the present study in discussing the application of the hydrodynamic picture to microscopic systems, like relativistic heavy-ion collisions.
Gottwald, G.A.; Crommelin, D.T.; Franzke, C.L.E.; Franzke, C.L.E.; O'Kane, T.J.
2017-01-01
In this chapter we review stochastic modelling methods in climate science. First we provide a conceptual framework for stochastic modelling of deterministic dynamical systems based on the Mori-Zwanzig formalism. The Mori-Zwanzig equations contain a Markov term, a memory term and a term suggestive of
Energy Technology Data Exchange (ETDEWEB)
Wang, Yishen [Univ. of Washington, Seattle, WA (United States); Argonne National Lab. (ANL), Argonne, IL (United States); Zhou, Zhi [Argonne National Lab. (ANL), Argonne, IL (United States); Liu, Cong [Argonne National Lab. (ANL), Argonne, IL (United States); Electric Reliability Council of Texas (ERCOT), Austin, TX (United States); Botterud, Audun [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-08-01
As more wind power and other renewable resources are being integrated into the electric power grid, the forecast uncertainty brings operational challenges for the power system operators. In this report, different operational strategies for uncertainty management are presented and evaluated. A comprehensive and consistent simulation framework is developed to analyze the performance of different reserve policies and scheduling techniques under uncertainty in wind power. Numerical simulations are conducted on a modified version of the IEEE 118-bus system with a 20% wind penetration level, comparing deterministic, interval, and stochastic unit commitment strategies. The results show that stochastic unit commitment provides a reliable schedule without large increases in operational costs. Moreover, decomposition techniques, such as load shift factor and Benders decomposition, can help in overcoming the computational obstacles to stochastic unit commitment and enable the use of a larger scenario set to represent forecast uncertainty. In contrast, deterministic and interval unit commitment tend to give higher system costs as more reserves are being scheduled to address forecast uncertainty. However, these approaches require a much lower computational effort. Choosing a proper lower bound for the forecast uncertainty is important for balancing reliability and system operational cost in deterministic and interval unit commitment. Finally, we find that the introduction of zonal reserve requirements improves reliability, but at the expense of higher operational costs.
DEFF Research Database (Denmark)
Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song
2017-01-01
The present numerical study aims to assess the performance of an Eulerian Stochastic Field (ESF) model in simulating spray flames produced by three fuel injectors with different nozzle diameters of 100 μm, 180 μm and 363 μm. A comparison to the measurements shows that although the simulated ignition delay times are consistently overestimated, the relative differences remain below 28%. Furthermore, the change of the averaged pressure rise with respect to the variation of nozzle diameter is captured by the model. The simulated flame lift-off lengths also agree with the measurements. The model can hence serve as an important tool for the simulation of spray flames in marine diesel engines, where fuel injectors with different nozzle diameters are applied for pilot and main injections.
Variationally derived coarse mesh methods using an alternative flux representation
International Nuclear Information System (INIS)
Wojtowicz, G.; Holloway, J.P.
1995-01-01
Investigation of a previously reported variational technique for the solution of the 1-D, 1-group neutron transport equation in reactor lattices has inspired the development of a finite element formulation of the method. Compared to conventional homogenization methods, in which node-homogenized cross sections are used, the coefficients describing this system take on greater spatial dependence. However, the methods employ an alternative flux representation which allows the transport equation to be cast into a form whose solution has only a slow spatial variation and, hence, requires relatively few variables to describe. This alternative flux representation and the stationary property of a variational principle define a class of coarse mesh discretizations of transport theory capable of achieving order-of-magnitude reductions in eigenvalue and pointwise scalar flux errors compared with diffusion theory, while retaining diffusion theory's relatively low cost. Initial results of a 1-D spectral element approach are reviewed and used to motivate the finite element implementation, which is more efficient and almost as accurate; one- and two-group results of this method are described.
THE CONTROL VARIATIONAL METHOD FOR ELASTIC CONTACT PROBLEMS
Directory of Open Access Journals (Sweden)
Mircea Sofonea
2010-07-01
We consider a multivalued equation of the form Ay + F(y) ∋ f in a real Hilbert space, where A is a linear operator and F represents the (Clarke) subdifferential of some function. We prove existence and uniqueness results for the solution by using the control variational method. The main idea in this method is to minimize the energy functional associated to the nonlinear equation by arguments of optimal control theory. Then we consider a general mathematical model describing the contact between a linearly elastic body and an obstacle, which leads to a variational formulation as above for the displacement field. We apply the abstract existence and uniqueness results to prove the unique weak solvability of the corresponding contact problem. Finally, we present examples of contact and friction laws for which our results work.
The variational method in atomic structure calculations
International Nuclear Information System (INIS)
Tomimura, A.
1970-01-01
The importance and limitations of variational methods in atomic structure calculations are set into relevance, and comparisons are made with perturbation theory. To illustrate, the method is applied to the simple atomic systems H⁻, H⁺ and H₂⁺, and the results are analysed on the basis of the associated essential eigenvalue spectrum. Hydrogenic functions (where the screening constants are replaced by variational parameters) are combined to construct a wave function with the proper symmetry for each of the systems. This shows the existence of a bound state for H⁻, but no conclusions can be made for the others, where it may or may not be necessary to use more flexible wave functions, i.e., with a greater number of terms and parameters. (author) [pt]
A variational method in out-of-equilibrium physical systems.
Pinheiro, Mario J
2013-12-09
We propose a new variational principle for out-of-equilibrium dynamic systems, fundamentally based on the method of Lagrange multipliers applied to the total entropy of an ensemble of particles. We use the fundamental equation of thermodynamics in differential forms, considering U and S as 0-forms. We obtain a set of two first-order differential equations that reveal the same formal symplectic structure shared by classical mechanics, fluid mechanics and thermodynamics. From this approach, a topological torsion current emerges, expressed in terms of the components Aj of the vector potential (gravitational and/or electromagnetic) and the angular velocity ω of the accelerated frame. We derive a special form of the Umov-Poynting theorem for rotating gravito-electromagnetic systems. The variational method is then applied to clarify the working mechanism of particular devices.
International Nuclear Information System (INIS)
Chou, Jui-Sheng; Ongkowijoyo, Citra Satria
2015-01-01
Corporate competitiveness is heavily influenced by the information acquired, processed, utilized and transferred by the professional staff involved in the supply chain. This paper develops a decision aid for selecting the on-site ready-mix concrete (RMC) unloading type in decision-making situations involving multiple stakeholders and evaluation criteria. The uncertainty of the criteria weights set by expert judgment is handled by randomizing them with the probabilistic virtual-scale method within a prioritization matrix. The ranking is performed by a grey relational grade system considering stochastic criteria weights based on individual preferences. Application of the decision-aiding model to an actual RMC case confirms that the method provides a robust and effective tool for facilitating decision making under uncertainty. - Highlights: • This study models a decision-aiding method to assess ready-mix concrete unloading type. • Applying Monte Carlo simulation to the virtual-scale method achieves a reliable process. • An individual preference ranking method enhances the quality of global decision making. • Robust stochastic superiority and inferiority ranking obtains reasonable results
Valent, Peter; Paquet, Emmanuel
2017-09-01
A reliable estimate of extreme flood characteristics has always been an active topic in hydrological research. Over the decades a large number of approaches and their modifications have been proposed and used, with various methods utilizing continuous simulation of catchment runoff, being the subject of the most intensive research in the last decade. In this paper a new and promising stochastic semi-continuous method is used to estimate extreme discharges in two mountainous Slovak catchments of the rivers Váh and Hron, in which snow-melt processes need to be taken into account. The SCHADEX method used, couples a precipitation probabilistic model with a rainfall-runoff model used to both continuously simulate catchment hydrological conditions and to transform generated synthetic rainfall events into corresponding discharges. The stochastic nature of the method means that a wide range of synthetic rainfall events were simulated on various historical catchment conditions, taking into account not only the saturation of soil, but also the amount of snow accumulated in the catchment. The results showed that the SCHADEX extreme discharge estimates with return periods of up to 100 years were comparable to those estimated by statistical approaches. In addition, two reconstructed historical floods with corresponding return periods of 100 and 1000 years were compared to the SCHADEX estimates. The results confirmed the usability of the method for estimating design discharges with a recurrence interval of more than 100 years and its applicability in Slovak conditions.
International Nuclear Information System (INIS)
Yin, George; Wang, Le Yi; Zhang, Hongwei
2014-01-01
Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided
Variational method for magnetic impurities in metals: impurity pairs
Energy Technology Data Exchange (ETDEWEB)
Oles, A M [Max-Planck-Institut fuer Festkoerperforschung, Stuttgart (Germany, F.R.); Chao, K A [Linkoeping Univ. (Sweden). Dept. of Physics and Measurement Technology
1980-01-01
Applying a variational method to the generalized Wolff model, we have investigated the effect of impurity-impurity interaction on the formation of local moments in the ground state. The direct coupling between the impurities is found to be more important than the interaction between the impurities and the host conduction electrons, as far as the formation of local moments is concerned. Under certain conditions we also observe different valences on different impurities.
Energy Technology Data Exchange (ETDEWEB)
Suescun D, D.; Oviedo T, M., E-mail: daniel.suescun@usco.edu.co [Universidad Surcolombiana, Av. Pastrana Borrero - Carrera 1, Neiva, Huila (Colombia)
2017-09-15
In this paper, a numerical study of stochastic differential equations that describe the kinetics in a nuclear reactor is presented. These equations, known as the stochastic point kinetics equations, model temporal variations in the neutron population density and in the concentrations of delayed neutron precursors. These equations are probabilistic in nature (the random oscillations in the neutron and precursor populations are considered approximately normally distributed) and also possess strong coupling and stiffness properties, so the method proposed for the numerical simulations is the Euler-Maruyama scheme, which provides very good approximations for calculating the neutron population and the concentrations of delayed neutron precursors. The proposed method was computationally tested for different seeds, initial conditions, experimental data and forms of reactivity, first for one group and then for six groups of delayed neutron precursors, with 5000 Brownian motions per seed at each time step. In a paper reported in the literature the Euler-Maruyama method was also proposed, but there are many doubts about the reported values, and the seed used was not reported, so this work aims to rectify those values. After taking the average over the different seeds used to generate the pseudo-random numbers, the results provided by the Euler-Maruyama scheme are compared in mean and standard deviation with other methods reported in the literature and with results of the deterministic point kinetics model. This comparison confirms in particular that the Euler-Maruyama scheme is an efficient method for solving the stochastic point kinetics equations, though with values different from those found and reported by the other author. The Euler-Maruyama method is simple and easy to implement, and provides acceptable results for the neutron population density and the concentrations of delayed neutron precursors and
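The Euler-Maruyama update itself is the one-line recursion X_{n+1} = X_n + a(X_n)dt + b(X_n)dW_n with dW_n ~ N(0, dt). A seeded sketch on geometric Brownian motion, whose exact mean E[X_T] = X_0*exp(mu*T) provides a check (a toy scalar SDE, not the coupled point-kinetics system of the paper):

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, T, steps, paths, rng):
    """Simulate dX = drift(X) dt + diffusion(X) dW for many sample paths."""
    dt = T / steps
    x = np.full(paths, float(x0))
    for _ in range(steps):
        dw = rng.standard_normal(paths) * np.sqrt(dt)   # Brownian increments
        x = x + drift(x) * dt + diffusion(x) * dw
    return x

mu, sigma = 0.05, 0.2
rng = np.random.default_rng(4)
xT = euler_maruyama(1.0, lambda x: mu * x, lambda x: sigma * x,
                    T=1.0, steps=200, paths=20000, rng=rng)
# geometric Brownian motion: E[X_1] = exp(mu), so the sample mean of xT
# should be close to exp(0.05) up to Monte-Carlo and weak-order-1 error
```

Averaging over seeds, as the paper does, reduces the residual Monte-Carlo error in the sample mean and standard deviation.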
The variational method in quantum mechanics: an elementary introduction
Borghi, Riccardo
2018-05-01
Variational methods in quantum mechanics are customarily presented as invaluable techniques to find approximate estimates of ground state energies. In the present paper a short catalogue of different celebrated potential distributions (both 1D and 3D), for which an exact and complete (energy and wavefunction) ground state determination can be achieved in an elementary way, is illustrated. No previous knowledge of calculus of variations is required. Rather, in all presented cases the exact energy functional minimization is achieved by using only a couple of simple mathematical tricks: ‘completion of square’ and integration by parts. This makes our approach particularly suitable for undergraduates. Moreover, the key role played by particle localization is emphasized through the entire analysis. This gentle introduction to the variational method could also be potentially attractive for more expert students as a possible elementary route toward a rather advanced topic on quantum mechanics: the factorization method. Such an unexpected connection is outlined in the final part of the paper.
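The procedure is easy to reproduce numerically. For the 1-D harmonic oscillator (hbar = m = omega = 1) with Gaussian trial function psi_a(x) = exp(-a*x^2), the energy functional is E(a) = a/2 + 1/(8a), minimised at a = 1/2 where E = 1/2, the exact ground-state energy. A sketch that evaluates the Rayleigh quotient on a grid and scans the variational parameter:

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]

def rayleigh(a):
    """E(a) = <psi|H|psi> / <psi|psi> for psi = exp(-a x^2),
    H = -0.5 d^2/dx^2 + 0.5 x^2 (harmonic oscillator, hbar = m = omega = 1)."""
    psi = np.exp(-a * x**2)
    dpsi = -2.0 * a * x * psi                      # analytic derivative
    kinetic = 0.5 * np.sum(dpsi**2) * dx           # integral of 0.5 |psi'|^2
    potential = 0.5 * np.sum(x**2 * psi**2) * dx
    norm = np.sum(psi**2) * dx
    return (kinetic + potential) / norm

avals = np.linspace(0.1, 2.0, 400)
energies = np.array([rayleigh(a) for a in avals])
a_best = avals[np.argmin(energies)]
e_best = float(energies.min())
# the minimum reproduces E(a) = a/2 + 1/(8a): a_best near 0.5, e_best near 0.5
```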
Stochastic Estimation via Polynomial Chaos
2015-10-01
AFRL-RW-EG-TR-2015-108, Stochastic Estimation via Polynomial Chaos, Douglas V. Nance, Air Force Research Laboratory; period covered 20-04-2015 to 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic
Variational-moment method for computing magnetohydrodynamic equilibria
International Nuclear Information System (INIS)
Lao, L.L.
1983-08-01
A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduce the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method since has been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed
International Nuclear Information System (INIS)
Bliokh, Yu.P.; Fajnberg, Ya.B.; Lyubarskij, M.G.; Podobinskij, V.O.
1994-01-01
Certain distributed dynamical systems describing the well-known beam generators of UHF oscillations have a very simple structure: the nonlinear functional, which determines the current state of the system in terms of its past behaviour, can be represented as a composition of a linear functional and a nonlinear finite-dimensional map. This property made it possible to find the mechanisms of auto-modulation and stochastization of the signals from beam generators and to determine the corresponding ranges of parameter values. 12 refs., 6 figs
Homogenization of the stochastic Navier–Stokes equation with a stochastic slip boundary condition
Bessaih, Hakima
2015-11-02
The two-dimensional Navier–Stokes equation in a perforated domain with a dynamical slip boundary condition is considered. We assume that the dynamics are driven by a stochastic perturbation on the interior of the domain and another stochastic perturbation on the boundaries of the holes. We consider a scaling (ε for the viscosity and 1 for the density) that will lead to a time-dependent limit problem. However, the noncritical scaling (ε^β, β > 1) is considered in front of the nonlinear term. The homogenized system in the limit is obtained as a Darcy’s law with memory with two permeabilities and an extra term that is due to the stochastic perturbation on the boundary of the holes. The nonhomogeneity on the boundary contains a stochastic part that yields in the limit an additional term in the Darcy’s law. We use the two-scale convergence method after extending the solution with 0 inside the holes to pass to the limit. By Itô stochastic calculus, we get uniform estimates on the solution in appropriate spaces. Due to the stochastic integral, the pressure that appears in the variational formulation does not have enough regularity in time. This fact made us rely only on the variational formulation for the passage to the limit on the solution. We obtain a variational formulation for the limit that is the solution of a Stokes system with two pressures. This two-scale limit gives rise to three cell problems, two of them give the permeabilities while the third one gives an extra term in the Darcy’s law due to the stochastic perturbation on the boundary of the holes.
Storm surge model based on variational data assimilation method
Directory of Open Access Journals (Sweden)
Shi-li Huang
2010-06-01
By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution meant for improving the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variation-based model was developed and validated through data assimilation tests in an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. Then, the actual storm surge induced by Typhoon 0515 was forecast by the developed model, and the results demonstrate its efficiency in practical application.
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo
2010-06-22
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.
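The class of functionals treated above (a quadratic discrepancy term plus a total variation penalty) can be illustrated on a tiny 1D denoising problem. The sketch below is not the paper's domain decomposition algorithm: it uses plain gradient descent on a smoothed TV functional, and all parameter values are illustrative assumptions.

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, eps=1e-2, step=0.05, iters=2000):
    """Minimize 0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps)
    by gradient descent on a smoothed total-variation penalty."""
    u = f.copy()
    for _ in range(iters):
        d = np.diff(u)
        w = d / np.sqrt(d**2 + eps)      # derivative of the smoothed |d|
        grad_tv = np.zeros_like(u)
        grad_tv[:-1] -= w                # d/du[i]   of |u[i+1] - u[i]|
        grad_tv[1:] += w                 # d/du[i+1] of |u[i+1] - u[i]|
        u -= step * ((u - f) + lam * grad_tv)
    return u

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 50)   # piecewise-constant signal
noisy = clean + 0.1 * rng.normal(size=clean.size)
denoised = tv_denoise_1d(noisy)
print(np.abs(denoised - clean).mean(), np.abs(noisy - clean).mean())
```

The TV penalty flattens the noise inside each constant segment while largely preserving the two jumps, which is exactly why TV is favored for piecewise-constant restoration.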
Newton-type methods for optimization and variational problems
Izmailov, Alexey F
2014-01-01
This book presents a comprehensive, state-of-the-art theoretical analysis of the fundamental Newtonian and Newtonian-related approaches to solving optimization and variational problems. A central focus is the relationship between the basic Newton scheme for a given problem and algorithms that also enjoy fast local convergence. The authors develop general perturbed Newtonian frameworks that preserve fast convergence and consider specific algorithms as particular cases within those frameworks, i.e., as perturbations of the associated basic Newton iterations. This approach yields a set of tools for the unified treatment of various algorithms, including some not of the Newton type per se. Among the new subjects addressed is the class of degenerate problems; in particular, the phenomenon of attraction of Newton iterates to critical Lagrange multipliers and its consequences, as well as stabilized Newton methods for variational problems and stabilized sequential quadratic programming for optimization. This volume will b...
Introduction to stochastic calculus
Karandikar, Rajeeva L
2018-01-01
This book sheds new light on stochastic calculus, the branch of mathematics that is most widely applied in financial engineering and mathematical finance. The first book to introduce pathwise formulae for the stochastic integral, it provides a simple but rigorous treatment of the subject, including a range of advanced topics. The book discusses in-depth topics such as quadratic variation, the Itô formula, and the Emery topology. The authors briefly address continuous semi-martingales to obtain growth estimates and study the solution of a stochastic differential equation (SDE) by using the technique of random time change. Later, by using the Métivier–Pellaumail inequality, the solutions to SDEs driven by general semi-martingales are discussed. The connection of the theory with mathematical finance is briefly discussed and the book has an extensive treatment of the representation of martingales as stochastic integrals and a second fundamental theorem of asset pricing. Intended for undergraduate- and beginning graduate-level stud...
Directory of Open Access Journals (Sweden)
Lin Hu
2011-01-01
A class of drift-implicit one-step schemes is proposed for neutral stochastic delay differential equations (NSDDEs) driven by Poisson processes. A general framework for mean-square convergence of the methods is provided. It is shown that under certain conditions global error estimates for a method can be inferred from estimates on its local error. The applicability of the mean-square convergence theory is illustrated by the stochastic θ-methods and the balanced implicit methods. It is derived from Theorem 3.1 that the order of mean-square convergence of both of them for NSDDEs with jumps is 1/2. Numerical experiments illustrate the theoretical results. It is worth noting that the results on mean-square convergence of the stochastic θ-methods and the balanced implicit methods are also new.
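The drift-implicit stochastic θ-method discussed above can be sketched on the scalar linear test equation dX = aX dt + bX dW (a simplified stand-in without the delay and jump terms of the NSDDE setting; all values are illustrative). For a linear drift the implicit step can be solved in closed form:

```python
import numpy as np

def theta_method_linear(a, b, theta, x0, t_end, n_steps, rng):
    """Drift-implicit stochastic theta-method for dX = a*X dt + b*X dW.
    The implicit drift step is solved in closed form for this linear SDE."""
    h = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))
        # X_{n+1} = X_n + h*(theta*a*X_{n+1} + (1-theta)*a*X_n) + b*X_n*dW
        x = (x * (1.0 + (1.0 - theta) * a * h) + b * x * dW) / (1.0 - theta * a * h)
    return x

# Stiff drift (a = -50) with step h = 0.05: the fully implicit method
# (theta = 1) stays stable where explicit Euler (theta = 0) would blow up,
# since |1 + a*h| = 1.5 > 1 for the explicit recursion.
rng = np.random.default_rng(1)
ends = np.array([theta_method_linear(-50.0, 0.1, 1.0, 1.0, 1.0, 20, rng)
                 for _ in range(200)])
print(np.mean(np.abs(ends)))   # decays toward 0, no blow-up
```

The closed-form implicit update is what makes θ-methods attractive for stiff problems such as the strongly coupled systems mentioned elsewhere in this listing.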
Directory of Open Access Journals (Sweden)
Peng Li
2017-01-01
According to the case-based reasoning method and prospect theory, this paper mainly focuses on finding a way to obtain decision-makers’ preferences and the criterion weights for stochastic multicriteria decision-making problems and classify alternatives. Firstly, we construct a new score function for an intuitionistic fuzzy number (IFN) considering the decision-making environment. Then, we aggregate the decision-making information in different natural states according to the prospect theory and test decision-making matrices. A mathematical programming model based on a case-based reasoning method is presented to obtain the criterion weights. Moreover, in the original decision-making problem, we integrate all the intuitionistic fuzzy decision-making matrices into an expectation matrix using the expected utility theory and classify or rank the alternatives by the case-based reasoning method. Finally, two illustrative examples are provided to illustrate the implementation process and applicability of the developed method.
AESS: Accelerated Exact Stochastic Simulation
Jenkins, David D.; Peterson, Gregory D.
2011-12-01
The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results. Program summary. Program title: AESS. Catalogue identifier: AEJW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: University of Tennessee copyright agreement. No. of lines in distributed program, including test data, etc.: 10 861. No. of bytes in distributed program, including test data, etc.: 394 631. Distribution format: tar.gz. Programming language: C for processors, CUDA for NVIDIA GPUs. Computer: developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs; the system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators. Operating system: tested under Ubuntu Linux OS and CentOS 5.5 Linux OS. Classification: 3, 16.12. Nature of problem: simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME. Solution
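Gillespie's direct method, which AESS accelerates, reduces to a few lines for a single-species birth-death system. The sketch below is illustrative only (it is not AESS, and the rate constants are assumptions): sample an exponential waiting time from the total propensity, then pick which reaction fires in proportion to its propensity.

```python
import numpy as np

def gillespie_birth_death(k_birth, k_death, x0, t_end, rng):
    """Gillespie's direct-method SSA for a birth-death process:
    0 -> X at rate k_birth;  X -> 0 at rate k_death * x."""
    t, x = 0.0, x0
    while True:
        a1, a2 = k_birth, k_death * x    # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)   # waiting time to next reaction
        if t >= t_end:
            return x                     # state at time t_end
        if rng.random() * a0 < a1:       # choose which reaction fires
            x += 1
        else:
            x -= 1

rng = np.random.default_rng(7)
samples = [gillespie_birth_death(10.0, 1.0, 0, 20.0, rng) for _ in range(500)]
print(np.mean(samples))   # stationary mean is k_birth / k_death = 10
```

Because every reaction event is simulated explicitly, cost grows with the number of events, which is why optimized implementations and GPU ensembles like those in AESS matter.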
Greenwood, Priscilla E
2016-01-01
This book describes a large number of open problems in the theory of stochastic neural systems, with the aim of enticing probabilists to work on them. This includes problems arising from stochastic models of individual neurons as well as those arising from stochastic models of the activities of small and large networks of interconnected neurons. The necessary neuroscience background to these problems is outlined within the text, so readers can grasp the context in which they arise. This book will be useful for graduate students and instructors providing material and references for applying probability to stochastic neuron modeling. Methods and results are presented, but the emphasis is on questions where additional stochastic analysis may contribute neuroscience insight. An extensive bibliography is included. Dr. Priscilla E. Greenwood is a Professor Emerita in the Department of Mathematics at the University of British Columbia. Dr. Lawrence M. Ward is a Professor in the Department of Psychology and the Brain...
Investigation on generalized Variational Nodal Methods for heterogeneous nodes
International Nuclear Information System (INIS)
Wang, Yongping; Wu, Hongchun; Li, Yunzhao; Cao, Liangzhi; Shen, Wei
2017-01-01
Highlights: • We developed two heterogeneous nodal methods based on the Variational Nodal Method. • Four problems were solved to evaluate the two heterogeneous nodal methods. • The function expansion method is good at treating continuously varying heterogeneity. • The finite sub-element method is good at treating discontinuously varying heterogeneity. - Abstract: The Variational Nodal Method (VNM) is generalized for heterogeneous nodes and applied to four kinds of problems: a Molten Salt Reactor (MSR) core problem with a continuous cross section profile, a Pressurized Water Reactor (PWR) control rod cusping effect problem, a PWR whole-core pin-by-pin problem, and a heterogeneous PWR core problem without fuel-coolant homogenization in each pin cell. Two approaches have been investigated for the treatment of the nodal heterogeneity in this paper. To concentrate on spatial heterogeneity, the diffusion approximation was adopted for the angular variable in the neutron transport equation. To provide demonstrative numerical results, the codes in this paper were developed in slab geometry. The first method, named the function expansion (FE) method, expands the nodal flux in orthogonal polynomials, with the nodal cross sections also expressed as spatially dependent functions. The second approach, named the finite sub-element (FS) method, takes advantage of the finite-element method by dividing each node into a number of homogeneous sub-elements and expanding the nodal flux in a combination of linear sub-element trial functions. Numerical tests have been carried out to evaluate the ability of the two nodal (coarse-mesh) heterogeneous VNMs by comparison with the fine-mesh homogeneous VNM. It has been demonstrated that both heterogeneous approaches can handle heterogeneous nodes. The FE method is good at continuously varying heterogeneity, as in the MSR core problem, while the FS method is good at discontinuously varying heterogeneity, such as the PWR pin-by-pin problem and the heterogeneous PWR core problem.
An integral nodal variational method for multigroup criticality calculations
International Nuclear Information System (INIS)
Lewis, E.E.; Tsoulfanidis, N.
2003-01-01
An integral formulation of the variational nodal method is presented and applied to a series of benchmark criticality problems. The method combines an integral transport treatment of the even-parity flux within the spatial node with an odd-parity spherical harmonics expansion of the Lagrange multipliers at the node interfaces. The response matrices that result from this formulation are compatible with those in the VARIANT code at Argonne National Laboratory. Either homogeneous or heterogeneous nodes may be employed. In general, for calculations requiring higher-order angular approximations, the integral method yields solutions with comparable accuracy while requiring substantially less CPU time and memory than the standard spherical harmonics expansion using the same spatial approximations. (author)
Equivalence of the generalized and complex Kohn variational methods
Energy Technology Data Exchange (ETDEWEB)
Cooper, J N; Armour, E A G [School of Mathematical Sciences, University Park, Nottingham NG7 2RD (United Kingdom); Plummer, M, E-mail: pmxjnc@googlemail.co [STFC Daresbury Laboratory, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom)
2010-04-30
For Kohn variational calculations on low energy (e⁺ - H₂) elastic scattering, we prove that the phase shift approximation, obtained using the complex Kohn method, is precisely equal to a value which can be obtained immediately via the real-generalized Kohn method. Our treatment is sufficiently general to be applied directly to arbitrary potential scattering or single open channel scattering problems, with exchange if required. In the course of our analysis, we develop a framework formally to describe the anomalous behaviour of our generalized Kohn calculations in the regions of the well-known Schwartz singularities. This framework also explains the mathematical origin of the anomaly-free singularities we reported in a previous article. Moreover, we demonstrate a novelty: that explicit solutions of the Kohn equations are not required in order to calculate optimal phase shift approximations. We relate our rigorous framework to earlier descriptions of the Kohn-type methods.
Total variation superiorized conjugate gradient method for image reconstruction
Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.
2018-03-01
The conjugate gradient (CG) method is commonly used for the relatively rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares, and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with the TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ε. It is proved that, for any given ε that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ε. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ε of the half-squared residual.
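The unsuperiorized CG baseline referred to above is the classical conjugate gradient iteration for a symmetric positive definite system, e.g. the normal equations of a least squares problem. A minimal sketch (not the superiorized S-CG variant; the test matrix and sizes are illustrative assumptions):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Classical CG for A x = b with A symmetric positive definite
    (the normal equations A = M^T M of a least squares problem qualify)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p # A-conjugate update of the direction
        rs = rs_new
    return x

rng = np.random.default_rng(3)
M = rng.normal(size=(30, 10))
A = M.T @ M + 0.1 * np.eye(10)    # SPD regularized normal equations
b = rng.normal(size=10)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))  # residual near machine precision
```

For an n-dimensional SPD system, exact-arithmetic CG terminates in at most n iterations; this finite-step speed on least squares subproblems is what superiorization seeks to retain while steering toward lower TV.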
Hanamoto, Seiya; Nakada, Norihide; Yamashita, Naoyuki; Tanaka, Hiroaki
2013-01-01
Existing stochastic models for predicting concentrations of down-the-drain chemicals in aquatic environments do not account for the diurnal variation of direct photolysis by sunlight, despite its being an important factor in natural attenuation. To overcome this limitation, we developed a stochastic model incorporating temporal variations in direct photolysis. To verify the model, we measured 57 pharmaceuticals and personal care products (PPCPs) in a 7.6-km stretch of an urban river, and determined their physical and biological properties in laboratory experiments. During transport along the river, 8 PPCPs, including ketoprofen and azithromycin, were attenuated by >20%, mainly owing to direct photolysis and adsorption to sediments. The photolabile PPCPs attenuated significantly in the daytime but persisted in the nighttime. The observations were similar to the values predicted by the photolysis model for the photolabile PPCPs (i.e., ketoprofen, diclofenac and furosemide) but not by the existing model. The stochastic model developed in this study thus appears to be a novel and useful tool for evaluating the direct photolysis of down-the-drain chemicals that occurs during river transport.
Path integral methods for the dynamics of stochastic and disordered systems
International Nuclear Information System (INIS)
Hertz, John A; Roudi, Yasser; Sollich, Peter
2017-01-01
We review some of the techniques used to study the dynamics of disordered systems subject to both quenched and fast (thermal) noise. Starting from the Martin–Siggia–Rose/Janssen–De Dominicis–Peliti path integral formalism for a single variable stochastic dynamics, we provide a pedagogical survey of the perturbative, i.e. diagrammatic, approach to dynamics and how this formalism can be used for studying soft spin models. We review the supersymmetric formulation of the Langevin dynamics of these models and discuss the physical implications of the supersymmetry. We also describe the key steps involved in studying the disorder-averaged dynamics. Finally, we discuss the path integral approach for the case of hard Ising spins and review some recent developments in the dynamics of such kinetic Ising models. (topical review)
Functional Abstraction of Stochastic Hybrid Systems
Bujorianu, L.M.; Blom, Henk A.P.; Hermanns, H.
2006-01-01
The verification problem for stochastic hybrid systems is quite difficult. One method to verify these systems is stochastic reachability analysis. Concepts of abstractions for stochastic hybrid systems are needed to ease the stochastic reachability analysis. In this paper, we set up different ways
Novel crystal timing calibration method based on total variation
Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng
2016-11-01
A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal-level timing calibration values. In contrast to other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, which was located in the field of view (FOV) of the brain PET system, with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.
A variational Bayesian method to inverse problems with impulsive noise
Jin, Bangti
2012-01-01
We propose a novel numerical method for solving inverse problems subject to impulsive noises which possibly contain a large number of outliers. The approach is of Bayesian type, and it exploits a heavy-tailed t distribution for data noise to achieve robustness with respect to outliers. A hierarchical model with all hyper-parameters automatically determined from the given data is described. An algorithm of variational type by minimizing the Kullback-Leibler divergence between the true posterior distribution and a separable approximation is developed. The numerical method is illustrated on several one- and two-dimensional linear and nonlinear inverse problems arising from heat conduction, including estimating boundary temperature, heat flux and heat transfer coefficient. The results show its robustness to outliers and the fast and steady convergence of the algorithm. © 2011 Elsevier Inc.
International Nuclear Information System (INIS)
Xie, Y.L.; Li, Y.P.; Huang, G.H.; Li, Y.F.
2010-01-01
In this study, an interval fixed-mix stochastic programming (IFSP) model is developed for greenhouse gas (GHG) emissions reduction management under uncertainties. In the IFSP model, methods of interval-parameter programming (IPP) and fixed-mix stochastic programming (FSP) are introduced into an integer programming framework, such that the developed model can tackle uncertainties described in terms of interval values and probability distributions over a multi-stage context. Moreover, it can reflect dynamic decisions for facility-capacity expansion during the planning horizon. The developed model is applied to a case of planning GHG-emission mitigation, demonstrating that IFSP is applicable to reflecting complexities of multi-uncertainty, dynamic and interactive energy management systems, and capable of addressing the problem of GHG-emission reduction. A number of scenarios corresponding to different GHG-emission mitigation levels are examined; the results suggest that reasonable solutions have been generated. They can be used for generating plans for energy resource/electricity allocation and capacity expansion and help decision makers identify desired GHG mitigation policies under various economic costs and environmental requirements.
Stochastic processes in cell biology
Bressloff, Paul C
2014-01-01
This book develops the theory of continuous and discrete stochastic processes within the context of cell biology. A wide range of biological topics are covered including normal and anomalous diffusion in complex cellular environments, stochastic ion channels and excitable systems, stochastic calcium signaling, molecular motors, intracellular transport, signal transduction, bacterial chemotaxis, robustness in gene networks, genetic switches and oscillators, cell polarization, polymerization, cellular length control, and branching processes. The book also provides a pedagogical introduction to the theory of stochastic processes – Fokker–Planck equations, stochastic differential equations, master equations and jump Markov processes, diffusion approximations and the system size expansion, first passage time problems, stochastic hybrid systems, reaction-diffusion equations, exclusion processes, WKB methods, martingales and branching processes, stochastic calculus, and numerical methods. This text is primarily...
Hamiltonian lattice field theory: Computer calculations using variational methods
International Nuclear Information System (INIS)
Zako, R.L.
1991-01-01
I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems
Hamiltonian lattice field theory: Computer calculations using variational methods
International Nuclear Information System (INIS)
Zako, R.L.
1991-01-01
A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and presents numerical results for simple quantum mechanical systems
TU-AB-BRB-02: Stochastic Programming Methods for Handling Uncertainty and Motion in IMRT Planning
Energy Technology Data Exchange (ETDEWEB)
Unkelbach, J. [Massachusetts General Hospital (United States)
2015-06-15
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust-plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand
Stochastic resonance in a single-mode laser driven by frequency modulated signal and coloured noises
Institute of Scientific and Technical Information of China (English)
Jin Guo-Xiang; Zhang Liang-Ying; Cao Li
2009-01-01
By adding frequency modulated signals to the intensity equation of the gain-noise model of a single-mode laser driven by two correlated coloured noises, this paper uses the linear approximation method to calculate the power spectrum and signal-to-noise ratio (SNR) of the laser intensity. The results show that the SNR exhibits typical stochastic resonance as the intensities of the pump noise and quantum noise are varied. Since the amplitude of the modulated signal affects the SNR, the SNR shows suppression, monotonic increase, stochastic resonance, and multiple stochastic resonance as the frequencies of the carrier signal and the modulated signal are varied.
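Independently of the laser model, the basic quantity in such studies, the SNR of a frequency modulated signal read off a power spectrum, can be estimated numerically. All parameter values in this sketch are assumptions for illustration, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 1000.0, 4096                   # sample rate (Hz) and sample count (assumed)
t = np.arange(n) / fs
fc, fm, beta = 100.0, 5.0, 2.0         # carrier freq, modulation freq, FM index (assumed)

# Frequency modulated carrier buried in white noise.
signal = np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))
x = signal + 0.5 * rng.standard_normal(n)

# Periodogram; SNR = power in the FM sidebands / mean broadband noise floor.
spec = np.abs(np.fft.rfft(x))**2 / n
freqs = np.fft.rfftfreq(n, 1 / fs)
band = (freqs > fc - 3 * fm) & (freqs < fc + 3 * fm)   # Carson-style sideband window
snr = spec[band].sum() / spec[~band].mean()
```

Sweeping the noise amplitude in place of the fixed 0.5 would trace out the resonance-style SNR curves the abstract describes.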
A Total Variation-Based Reconstruction Method for Dynamic MRI
Directory of Open Access Journals (Sweden)
Germana Landi
2008-01-01
Full Text Available In recent years, total variation (TV) regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution. Therefore, the numerical methods have to address the issue of image reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoding imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even though this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the low spatial resolution of the data. We use a TV regularization strategy in order to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed point iteration method of Vogel and Oman. The results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.
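The Vogel-Oman fixed-point solver operates on the full dynamic MRI problem; as a deliberately simplified stand-in, the sketch below denoises a 1-D piecewise-constant signal by gradient descent on a smoothed TV functional. Signal, noise level, and tuning values are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # piecewise-constant truth
f = clean + 0.1 * rng.standard_normal(n)            # noisy observation

lam, eps, step = 0.5, 0.05, 0.02                    # assumed tuning values
u = f.copy()
for _ in range(2000):
    du = np.diff(u)                                 # forward differences D u
    w = du / np.sqrt(du**2 + eps**2)                # gradient of smoothed |.|
    # Gradient of 0.5*||u - f||^2 + lam * sum sqrt((Du)^2 + eps^2);
    # the divergence term D^T w equals -diff(w) with zero padding.
    grad = (u - f) - lam * np.diff(w, prepend=0.0, append=0.0)
    u -= step * grad
```

The smoothing parameter eps plays the same role as in the Vogel-Oman formulation: it makes the TV term differentiable so a simple iteration converges, while still flattening noise without blurring the jump.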
The Cluster Variation Method: A Primer for Neuroscientists.
Maren, Alianna J
2016-09-30
Effective Brain-Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.
The Cluster Variation Method: A Primer for Neuroscientists
Directory of Open Access Journals (Sweden)
Alianna J. Maren
2016-09-01
Full Text Available Effective Brain-Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.
Variational principles for Ginzburg-Landau equation by He's semi-inverse method
International Nuclear Information System (INIS)
Liu, W.Y.; Yu, Y.J.; Chen, L.D.
2007-01-01
Via the semi-inverse method of establishing variational principles proposed by He, a generalized variational principle is established for the Ginzburg-Landau equation. The present theory provides a straightforward tool for the search for various variational principles for physical problems. This paper aims at providing a more complete theoretical basis for applications using finite element and other direct variational methods
Space-angle approximations in the variational nodal method
International Nuclear Information System (INIS)
Lewis, E. E.; Palmiotti, G.; Taiwo, T.
1999-01-01
The variational nodal method is formulated such that the angular and spatial approximations may be examined separately. Spherical harmonic, simplified spherical harmonic, and discrete ordinate approximations are coupled to the primal hybrid finite element treatment of the spatial variables. Within this framework, two classes of spatial trial functions are presented: (1) orthogonal polynomials for the treatment of homogeneous nodes and (2) bilinear finite subelement trial functions for the treatment of fuel assembly sized nodes in which fuel-pin cell cross sections are represented explicitly. Polynomial and subelement trial functions are applied to benchmark water-reactor problems containing MOX fuel using spherical harmonic and simplified spherical harmonic approximations. The resulting accuracy and computing costs are compared
Subspace Correction Methods for Total Variation and $\ell_1$-Minimization
Fornasier, Massimo
2009-01-01
This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on $\ell_1$-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.
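The paper's subspace-correction scheme with oblique thresholding is more elaborate than can be shown here; as a hypothetical single-subspace baseline, plain iterative soft thresholding (ISTA) for an ℓ1-regularized least-squares problem illustrates the thresholding building block. Problem sizes and the regularization weight are assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)          # random sensing matrix (assumed)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

lam = 0.01                                            # assumed regularization weight
L = np.linalg.norm(A, 2) ** 2                         # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(5000):
    z = x - (A.T @ (A @ x - b)) / L                   # gradient step on 0.5*||Ax - b||^2
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold = prox of l1
```

Oblique thresholding generalizes this prox step to account for the part of the solution held fixed on the other subspaces.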
Variational methods for high-order multiphoton processes
International Nuclear Information System (INIS)
Gao, B.; Pan, C.; Liu, C.; Starace, A.F.
1990-01-01
Methods for applying the variationally stable procedure for Nth-order perturbative transition matrix elements of Gao and Starace [Phys. Rev. Lett. 61, 404 (1988); Phys. Rev. A 39, 4550 (1989)] to multiphoton processes involving systems other than atomic H are presented. Three specific cases are discussed: one-electron ions or atoms in which the electron--ion interaction is described by a central potential; two-electron ions or atoms in which the electronic states are described by the adiabatic hyperspherical representation; and closed-shell ions or atoms in which the electronic states are described by the multiconfiguration Hartree--Fock representation. Applications are made to the dynamic polarizability of He and the two-photon ionization cross section of Ar
Faster PET reconstruction with a stochastic primal-dual hybrid gradient method
Ehrhardt, Matthias J.
2017-08-24
Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that can cope well with the first three of the aforementioned challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners where the number of dual variables easily exceeds 100 million. In this work, we numerically study the usage of SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem with rates similar to PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, results similar to those of the deterministic algorithm can be achieved using only around 10% of the operator evaluations, thus making significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
Faster PET reconstruction with a stochastic primal-dual hybrid gradient method
Ehrhardt, Matthias J.; Markiewicz, Pawel; Chambolle, Antonin; Richtárik, Peter; Schott, Jonathan; Schönlieb, Carola-Bibiane
2017-08-01
Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that can cope well with the first three of the aforementioned challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners where the number of dual variables easily exceeds 100 million. In this work, we numerically study the usage of SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem with rates similar to PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, results similar to those of the deterministic algorithm can be achieved using only around 10% of the operator evaluations, thus making significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
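SPDHG itself involves per-block step sizes and sampling probabilities; as a starting point, the underlying deterministic PDHG iteration can be sketched on a toy least-squares problem (all sizes and step choices here are assumptions, and the Poisson-noise PET objective is replaced by a quadratic). SPDHG would randomize the dual update over subsets of the rows of A:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 30, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# PDHG for min_x 0.5*||A x - b||^2, written as f(Ax) + g(x) with
# f(z) = 0.5*||z - b||^2 and g = 0.
op_norm = np.linalg.norm(A, 2)
sigma = tau = 0.95 / op_norm            # ensures sigma * tau * ||A||^2 < 1
x = np.zeros(n); y = np.zeros(m); x_bar = np.zeros(n)
for _ in range(20000):
    y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)   # prox of sigma * f^*
    x_new = x - tau * (A.T @ y)                          # prox of tau * g is the identity
    x_bar = 2.0 * x_new - x                              # extrapolation step
    x = x_new

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)            # reference solution
```

The dual vector y is what grows to 100 million entries in clinical PET, which is exactly why updating only a random block of it per iteration pays off.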
International Nuclear Information System (INIS)
Hueffel, H.
1990-01-01
After a brief review of the BRST formalism and of the Parisi-Wu stochastic quantization method we introduce the BRST stochastic quantization scheme. It allows the second quantization of constrained Hamiltonian systems in a manifestly gauge symmetry preserving way. The examples of the relativistic particle, the spinning particle and the bosonic string are worked out in detail. The paper closes with a discussion of the interacting field theory associated with the relativistic point-particle system. 58 refs. (Author)
International Nuclear Information System (INIS)
Liu, Shichang; Wang, Guanbo; Liang, Jingang; Wu, Gaochen; Wang, Kan
2015-01-01
Highlights: • DRAGON & DONJON were applied in burnup calculations of plate-type research reactors. • Continuous-energy Monte Carlo burnup calculations by RMC were chosen as references. • Comparisons of keff, isotopic densities and power distribution were performed. • Reasons leading to discrepancies between two different approaches were analyzed. • DRAGON & DONJON is capable of burnup calculations with appropriate treatments. - Abstract: The burnup-dependent core neutronics analysis of plate-type research reactors such as JRR-3M poses a challenge for traditional neutronics calculational tools and schemes for power reactors, due to the characteristics of complex geometry, high heterogeneity, large leakage and the particular neutron spectrum of research reactors. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the burnup-dependent core neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON & DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic one. In the first stage, the homogenizations of few-group cross sections by DRAGON and the full core diffusion calculations by DONJON have been verified by comparing with the detailed Monte Carlo simulations. In the second stage, the burnup-dependent calculations at both the assembly level and the full-core level were carried out, to examine the capability of the deterministic code system DRAGON & DONJON to reliably simulate the burnup-dependent behavior of research reactors. The results indicate that both RMC and the DRAGON & DONJON code system are capable of burnup-dependent neutronics analysis of research reactors, provided that appropriate treatments are applied at both the assembly and core levels for the deterministic codes
International Nuclear Information System (INIS)
Azad-Farsani, Ehsan; Agah, S.M.M.; Askarian-Abyaneh, Hossein; Abedi, Mehrdad; Hosseinian, S.H.
2016-01-01
LMP (Locational marginal price) calculation is a serious impediment in distribution operation when private DG (distributed generation) units are connected to the network. A novel policy is developed in this study to guide the distribution company (DISCO) to exert its control over the private units when power loss and greenhouse gas emissions are minimized. LMP at each DG bus is calculated according to the contribution of the DG to the reduced amount of loss and emission. An iterative algorithm which is based on the Shapley value method is proposed to allocate loss and emission reduction. The proposed algorithm will provide a robust state estimation tool for DISCOs in the next step of operation. The state estimation tool provides the decision maker with the ability to exert its control over private DG units when loss and emission are minimized. Also, a stochastic approach based on the PEM (point estimate method) is employed to capture uncertainty in the market price and load demand. The proposed methodology is applied to a realistic distribution network, and the efficiency and accuracy of the method are verified. - Highlights: • Reduction of the loss and emission at the same time. • Fair allocation of loss and emission reduction. • Estimation of the system state using an iterative algorithm. • Ability of DISCOs to control DG units via the proposed policy. • Modeling the uncertainties to calculate the stochastic LMP.
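The Shapley-value allocation at the heart of the proposed algorithm can be sketched directly from its definition: average each unit's marginal contribution over all join orders. The coalition values below are invented numbers standing in for the loss-reduction function, not data from the paper:

```python
from itertools import permutations

# Hypothetical loss reduction (e.g. MW) achieved by each coalition of three DG units.
# v is superadditive but not additive, so the join order matters.
v = {frozenset(): 0.0,
     frozenset('A'): 2.0, frozenset('B'): 3.0, frozenset('C'): 1.0,
     frozenset('AB'): 6.0, frozenset('AC'): 3.5, frozenset('BC'): 4.5,
     frozenset('ABC'): 8.0}

players = ['A', 'B', 'C']
shapley = {p: 0.0 for p in players}
for order in permutations(players):
    coalition = frozenset()
    for p in order:
        # Marginal contribution of p when it joins the growing coalition.
        shapley[p] += v[coalition | {p}] - v[coalition]
        coalition = coalition | {p}
n_orders = 6  # 3! orderings
shapley = {p: s / n_orders for p, s in shapley.items()}
```

By construction the allocations sum exactly to the grand-coalition value v({A,B,C}), which is the fairness property that makes the method attractive for splitting a shared loss reduction among DG owners. For more than a handful of units the factorial enumeration must be replaced by sampling, which is presumably where the paper's iterative algorithm comes in.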
Comment on “Variational Iteration Method for Fractional Calculus Using He’s Polynomials”
Directory of Open Access Journals (Sweden)
Ji-Huan He
2012-01-01
boundary value problems. This note concludes that the method is a modified variational iteration method using He’s polynomials. A standard variational iteration algorithm for fractional differential equations is suggested.
Spectrographical method for determining temperature variations of cosmic rays
International Nuclear Information System (INIS)
Dorman, L.I.; Krest'yannikov, Yu.Ya.; AN SSSR, Irkutsk. Sibirskij Inst. Zemnogo Magnetizma Ionosfery i Rasprostraneniya Radiovoln)
1977-01-01
A spectrographic method for determining the temperature variations (δJ^μ/J^μ)_T of cosmic rays is proposed. The value of (δJ^μ/J^μ)_T is determined from three equations for neutron supermonitors and the equation for the muon component of cosmic rays. It is assumed that all the observation data include corrections for the barometric effect. No temperature effect is observed in the neutron component. To improve the reliability and accuracy of the results obtained, the surface area of the existing devices and the number of spectrographic equations should be increased as compared with the number of unknown values. The value of (δJ^μ/J^μ)_T for time instants when the aerological probing was carried out was determined from the data of observations of cosmic rays with the aid of a spectrographic complex of devices of Sib IZMIR. The r.m.s. dispersion of the difference is about 0.2%, which agrees with the expected dispersion. The agreement obtained can be regarded as an independent proof of the correctness of the theory of meteorological effects of cosmic rays. With the existing detection accuracy the spectrographic method can be used for determining the hourly values of temperature corrections for the muon component
Variational methods in electron-atom scattering theory
Nesbet, Robert K
1980-01-01
The investigation of scattering phenomena is a major theme of modern physics. A scattered particle provides a dynamical probe of the target system. The practical problem of interest here is the scattering of a low energy electron by an N-electron atom. It has been difficult in this area of study to achieve theoretical results that are even qualitatively correct, yet quantitative accuracy is often needed as an adjunct to experiment. The present book describes a quantitative theoretical method, or class of methods, that has been applied effectively to this problem. Quantum mechanical theory relevant to the scattering of an electron by an N-electron atom, which may gain or lose energy in the process, is summarized in Chapter 1. The variational theory itself is presented in Chapter 2, both as currently used and in forms that may facilitate future applications. The theory of multichannel resonance and threshold effects, which provide a rich structure to observed electron-atom scattering data, is presented in Cha...
DEFF Research Database (Denmark)
Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song
2018-01-01
This paper aims to simulate diesel spray flames across a wide range of engine-like conditions using the Eulerian Stochastic Field probability density function (ESF-PDF) model. The ESF model is coupled with the Chemistry Coordinate Mapping approach to expedite the calculation. A convergence study is carried out for a number of stochastic fields at five different conditions, covering both conventional diesel combustion and low-temperature combustion regimes. Ignition delay time, flame lift-off length as well as distributions of temperature and various combustion products are used to evaluate the performance of the model. The peak values of these properties generated using thirty-two stochastic fields are found to converge, with a maximum relative difference of 27% as compared to those from a greater number of stochastic fields. The ESF-PDF model with thirty-two stochastic fields performs reasonably...
Variational principle in quantum mechanics
International Nuclear Information System (INIS)
Popiez, L.
1986-01-01
The variational principle in a standard, path integral formulation of quantum mechanics (as proposed by Dirac and Feynman) appears only in the context of the classical limit ℏ → 0 and manifests itself through the method of abstract stationary phase. Symbolically it means that a probability amplitude averaged over trajectories denotes a classical evolution operator for points in a configuration space. There exists, however, a formulation of quantum dynamics in which the variational principle is one of the basic postulates. It is explained that the translation between stochastic and quantum mechanics in this case can be understood as in Nelson's stochastic mechanics
Gauge-invariant variational methods for Hamiltonian lattice gauge theories
International Nuclear Information System (INIS)
Horn, D.; Weinstein, M.
1982-01-01
This paper develops variational methods for calculating the ground-state and excited-state spectrum of Hamiltonian lattice gauge theories defined in the A_0 = 0 gauge. The scheme introduced in this paper has the advantage of allowing one to convert more familiar tools such as mean-field, Hartree-Fock, and real-space renormalization-group approximations, which are by their very nature gauge-noninvariant methods, into fully gauge-invariant techniques. We show that these methods apply in the same way to both Abelian and non-Abelian theories, and that they are at least powerful enough to describe correctly the physics of periodic quantum electrodynamics (PQED) in (2+1) and (3+1) space-time dimensions. This paper formulates the problem for both Abelian and non-Abelian theories and shows how to reduce the Rayleigh-Ritz problem to that of computing the partition function of a classical spin system. We discuss the evaluation of the effective spin problem which one derives for PQED and then discuss ways of carrying out the evaluation of the partition function for the system equivalent to a non-Abelian theory. The explicit form of the effective partition function for the non-Abelian theory is derived, but because the evaluation of this function is considerably more complicated than the one derived in the Abelian theory no explicit evaluation of this function is presented. However, by comparing the gauge-projected Hartree-Fock wave function for PQED with that of the pure SU(2) gauge theory, we are able to show that extremely interesting differences emerge between these theories even at this simple level. We close with a discussion of fermions and a discussion of how one can extend these ideas to allow the computation of the glueball and hadron spectrum
Directory of Open Access Journals (Sweden)
Ai-Min Yang
2014-01-01
Full Text Available The local fractional Laplace variational iteration method was applied to solve the linear local fractional partial differential equations. The local fractional Laplace variational iteration method is coupled by the local fractional variational iteration method and Laplace transform. The nondifferentiable approximate solutions are obtained and their graphs are also shown.
Directory of Open Access Journals (Sweden)
Mehmet Tarik Atay
2013-01-01
Full Text Available The Variational Iteration Method (VIM) and Modified Variational Iteration Method (MVIM) are used to find solutions of systems of stiff ordinary differential equations for both linear and nonlinear problems. Some examples are given to illustrate the accuracy and effectiveness of these methods. We compare our results with exact results. In earlier studies of stiff ordinary differential equations, problems were solved by the Adomian Decomposition Method, VIM, and the Homotopy Perturbation Method. Comparisons with exact solutions reveal that the Variational Iteration Method (VIM) and the Modified Variational Iteration Method (MVIM) are easier to implement. In fact, these methods are promising methods for various systems of linear and nonlinear stiff ordinary differential equations. Furthermore, VIM, or in some cases MVIM, gives exact solutions in linear cases and very satisfactory solutions when compared to exact solutions for nonlinear cases depending on the stiffness ratio of the stiff system to be solved.
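The VIM correction functional can be carried out symbolically on a minimal model problem. The sketch below applies it to u' + u = 0, u(0) = 1 (a hypothetical example, not one of the paper's stiff systems), holding polynomials as exact rational coefficient lists so each correction can be checked against the Taylor series of the exact solution exp(-t):

```python
from fractions import Fraction

# Variational iteration for u' + u = 0, u(0) = 1.  With Lagrange multiplier
# lambda = -1 the correction functional is
#   u_{k+1}(t) = u_k(t) - Integral_0^t [u_k'(s) + u_k(s)] ds
# Polynomials are held as coefficient lists: u(t) = sum c[j] * t**j.

def derivative(c):
    return [Fraction(j) * c[j] for j in range(1, len(c))]

def integral(c):                      # antiderivative with zero constant term
    return [Fraction(0)] + [c[j] / (j + 1) for j in range(len(c))]

def add(a, b):
    out = [Fraction(0)] * max(len(a), len(b))
    for i, val in enumerate(a): out[i] += val
    for i, val in enumerate(b): out[i] += val
    return out

u = [Fraction(1)]                     # initial guess u_0 = u(0) = 1
for _ in range(6):
    residual = add(derivative(u), u)  # u_k' + u_k, the ODE residual
    u = add(u, [-val for val in integral(residual)])

# Each correction appends one Taylor term of exp(-t): 1, 1-t, 1-t+t^2/2, ...
```

Each pass reproduces exactly one more Taylor coefficient of exp(-t), which is the sense in which VIM "converges to the exact solution" for linear problems.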
Directory of Open Access Journals (Sweden)
Qinghui Du
2014-01-01
Full Text Available We consider semi-implicit Euler methods for a stochastic age-dependent capital system with variable delays and random jump magnitudes, and investigate the convergence of the numerical approximation. It is proved that the numerical approximate solutions converge to the analytical solutions in the mean-square sense under given conditions.
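The semi-implicit (drift-implicit) Euler idea can be illustrated on a scalar test SDE rather than the full age-dependent capital system; the equation and all parameter values below are assumptions chosen so the exact mean is known in closed form:

```python
import numpy as np

# Semi-implicit Euler-Maruyama for the scalar test SDE
#   dX = a*X dt + b*X dW,  X(0) = x0,  for which E[X(T)] = x0 * exp(a*T).
# The drift is treated implicitly and the diffusion explicitly:
#   X_{n+1} = X_n + a*X_{n+1}*h + b*X_n*dW   =>   X_{n+1} = X_n*(1 + b*dW)/(1 - a*h)
rng = np.random.default_rng(4)
a, b, x0, T = -1.0, 0.3, 1.0, 1.0     # assumed model parameters (a < 0: stiff-friendly)
n_steps, n_paths = 100, 100_000
h = T / n_steps

X = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(h), n_paths)
    X = X * (1.0 + b * dW) / (1.0 - a * h)
```

The implicit treatment of the drift is what gives the scheme its favorable stability for stiff or dissipative drift terms, at the cost of a (here trivial) solve per step.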
International Nuclear Information System (INIS)
Kuosmanen, Timo
2012-01-01
Electricity distribution network is a prime example of a natural local monopoly. In many countries, electricity distribution is regulated by the government. Many regulators apply frontier estimation techniques such as data envelopment analysis (DEA) or stochastic frontier analysis (SFA) as an integral part of their regulatory framework. While more advanced methods that combine a nonparametric frontier with a stochastic error term are known in the literature, in practice, regulators continue to apply simplistic methods. This paper reports the main results of the project commissioned by the Finnish regulator for further development of the cost frontier estimation in their regulatory framework. The key objectives of the project were to integrate a stochastic SFA-style noise term into the nonparametric, axiomatic DEA-style cost frontier, and to take the heterogeneity of firms and their operating environments better into account. To achieve these objectives, a new method called stochastic nonparametric envelopment of data (StoNED) was examined. Based on the insights and experiences gained in the empirical analysis using the real data of the regulated networks, the Finnish regulator adopted the StoNED method from 2012 onwards.
International Nuclear Information System (INIS)
Klauder, J.R.
1983-01-01
The author provides an introductory survey to stochastic quantization in which he outlines this new approach for scalar fields, gauge fields, fermion fields, and condensed matter problems such as electrons in solids and the statistical mechanics of quantum spins. (Auth.)
Free vibration of finite cylindrical shells by the variational method
International Nuclear Information System (INIS)
Campen, D.H. van; Huetink, J.
1975-01-01
The calculation of the free vibrations of circular cylindrical shells of finite length has long been of interest to engineers. The motive for the present calculations originates from a particular type of construction at the inlet of a sodium heated superheater with helix heating bundle for SNR-Kalkar. The variational analysis is based on a modified energy functional for cylindrical shells, proposed by Koiter and resulting in Morley's equilibrium equations. As usual, the displacement amplitude is assumed to be distributed harmonically in the circumferential direction of the shell. Following the method of Gontkevich, the dependence between the displacements of the shell middle surface and the axial shell co-ordinate is expressed approximately by a set of eigenfunctions of a free vibrating beam satisfying the desired boundary conditions. Substitution of this displacement expression into the virtual work equation for the complete shell leads to a characteristic equation determining the natural frequencies. The calculations are carried out for a clamped-clamped and a clamped-free cylinder. A comparison is given between the above numerical results and experimental and theoretical results from literature. In addition, the influence of surrounding fluid mass on the above frequencies is analysed for a clamped-clamped shell. The solution for the velocity potential used in this case differs from the solutions used in the literature to date in that not only travelling waves in the axial direction are considered. (Auth.)
Variational methods applied to problems of diffusion and reaction
Strieder, William
1973-01-01
This monograph is an account of some problems involving diffusion or diffusion with simultaneous reaction that can be illuminated by the use of variational principles. It was written during a period that included sabbatical leaves of one of us (W. S. ) at the University of Minnesota and the other (R. A. ) at the University of Cambridge and we are grateful to the Petroleum Research Fund for helping to support the former and the Guggenheim Foundation for making possible the latter. We would also like to thank Stephen Prager for getting us together in the first place and for showing how interesting and useful these methods can be. We have also benefitted from correspondence with Dr. A. M. Arthurs of the University of York and from the counsel of Dr. B. D. Coleman the general editor of this series. Table of Contents Chapter 1. Introduction and Preliminaries . 1. 1. General Survey 1 1. 2. Phenomenological Descriptions of Diffusion and Reaction 2 1. 3. Correlation Functions for Random Suspensions 4 1. 4. Mean Free ...
Variational method for infinite nuclear matter with noncentral forces
International Nuclear Information System (INIS)
Takano, M.; Yamada, M.
1998-01-01
Approximate energy expressions are proposed for infinite zero-temperature nuclear matter by taking into account noncentral forces. They are explicitly expressed as functionals of spin- (isospin-) dependent radial distribution functions, tensor distribution functions and spin-orbit distribution functions, and can be used conveniently in the variational method. A notable feature of these expressions is that they automatically guarantee the necessary conditions on the spin-isospin-dependent structure functions. The Euler-Lagrange equations are derived from these energy expressions and numerically solved for neutron matter and symmetric nuclear matter. The results show that the noncentral forces lower the total energies too much and yield saturation densities that are too high. Since the main reason for these undesirable results seems to be the long tails of the noncentral distribution functions, an effective theory is proposed by introducing a density-dependent damping function into the noncentral potentials to suppress the long tails of the noncentral distribution functions. By adjusting the value of a parameter included in the damping function, we can reproduce the saturation point (both the energy and density) of symmetric nuclear matter with the Hamada-Johnston potential. (Copyright (1998) World Scientific Publishing Co. Pte. Ltd)
Directory of Open Access Journals (Sweden)
Jimeng Li
2016-01-01
The structure of mechanical equipment is becoming increasingly complex, and the harsh environments in which it operates often make bearings and gears prone to failure. However, effective extraction of useful feature information that is indicative of structural defects but submerged in strong noise has remained a major challenge. Therefore, an adaptive multiscale noise control enhanced stochastic resonance (SR) method based on modified ensemble empirical mode decomposition (EEMD) for mechanical fault diagnosis is proposed in this paper. According to the oscillation characteristics of the signal itself, the modified EEMD algorithm adaptively decomposes the fault signals into different scales, and it reduces the number of decomposition levels to improve the computational efficiency of the proposed method. Filtering with the constructed filters improves the orthogonality of adjacent intrinsic mode functions (IMFs), which is conducive to extracting weak features from strong noise. The signal reconstructed from the IMFs is input into the SR system, and the noise control parameter at different scales is optimized with the help of a genetic algorithm, thus enhancing the extraction of weak features. Finally, simulation experiments and an engineering application to bearing fault diagnosis demonstrate the effectiveness and feasibility of the proposed method.
A retrodictive stochastic simulation algorithm
International Nuclear Information System (INIS)
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-01-01
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
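The retrodiction step can be illustrated on a toy Markovian mutation model: given an observed base and an n-step substitution matrix, Bayes' rule turns the forward transition probabilities into a posterior over ancestral states. This is a minimal sketch, not the paper's algorithm; the 4-state matrix, the per-step substitution probability `MU`, the step count and the uniform prior are all illustrative assumptions.

```python
# Hedged sketch: Bayesian retrodiction of an ancestral base in a Markov
# mutation model. All parameters (MU, step count, uniform prior) are
# illustrative assumptions, not values from the paper.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(m, p):
    n = len(m)
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(p):
        out = mat_mul(out, m)
    return out

BASES = ["A", "C", "G", "T"]
MU = 0.01  # assumed per-step substitution probability
M = [[1.0 - 3.0 * MU if i == j else MU for j in range(4)] for i in range(4)]

def retrodict(final_base, steps, prior=None):
    """Posterior P(initial | final) proportional to prior(i) * P(final | i)."""
    prior = prior or [0.25] * 4
    m_n = mat_pow(M, steps)                      # n-step transition matrix
    f = BASES.index(final_base)
    unnorm = [prior[i] * m_n[i][f] for i in range(4)]
    z = sum(unnorm)
    return {b: w / z for b, w in zip(BASES, unnorm)}

posterior = retrodict("G", steps=10)
```

With a uniform prior and a symmetric substitution matrix, the most probable ancestor of an observed base is the base itself, and the posterior spreads out as the number of elapsed steps grows.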
International Nuclear Information System (INIS)
Zhu, Zhiwen; Zhang, Qingxin; Xu, Jia
2014-01-01
Stochastic bifurcation and fractal and chaos control of a giant magnetostrictive film–shape memory alloy (GMF–SMA) composite cantilever plate subjected to in-plane harmonic and stochastic excitation were studied. Van der Pol terms were adapted to describe the hysteretic phenomena of both GMF and SMA, and the nonlinear dynamic model of a GMF–SMA composite cantilever plate subjected to in-plane harmonic and stochastic excitation was developed. The probability density function of the dynamic response of the system was obtained, and the conditions of stochastic Hopf bifurcation were analyzed. The conditions of noise-induced chaotic response were obtained using the stochastic Melnikov integral method, and the fractal boundary of the system's safe basin was provided. Finally, a chaos control strategy was proposed using stochastic dynamic programming. Numerical simulation shows that stochastic Hopf bifurcation and chaos appear as the parameters vary. The boundary of the safe basin of the system has fractal characteristics, and its area decreases as the noise intensifies. The system reliability was improved through stochastic optimal control, and the safe basin area of the system increased.
International Nuclear Information System (INIS)
Cheng, Wen-Long; Huang, Yong-Hua; Liu, Na; Ma, Ran
2012-01-01
Thermal conductivity is a key parameter for evaluating wellbore heat losses, which play an important role in determining the efficiency of steam injection processes. In this study, an unsteady formation heat-transfer model was established and a cost-effective in situ method using stochastic approximation based on well-log temperature data was presented. The proposed method was able to estimate the thermal conductivity and the volumetric heat capacity of the geological formation simultaneously under in situ conditions. The feasibility of the present method was assessed by a sample test, the results of which showed that the thermal conductivity and the volumetric heat capacity could be obtained with relative errors of −0.21% and −0.32%, respectively. In addition, three field tests were conducted based on the easily obtainable well-log temperature data from the steam injection wells. It was found that the relative errors of thermal conductivity for the three field tests were within ±0.6%, demonstrating the excellent performance of the proposed method for calculating thermal conductivity. The relative errors of volumetric heat capacity ranged from −6.1% to −14.2% for the three field tests. Sensitivity analysis indicated that this was due to the low correlation between the volumetric heat capacity and the wellbore temperature, which was used to generate the judgment criterion. Highlights: A cost-effective in situ method for estimating the thermal properties of the formation was presented. Thermal conductivity and volumetric heat capacity can be estimated simultaneously by the proposed method. The relative error of the estimated thermal conductivity was within ±0.6%. Sensitivity analysis was conducted to study the estimated thermal properties.
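Reduced to its core, the stochastic-approximation idea is a Robbins-Monro iteration: adjust the unknown parameter until a forward model reproduces the noisy measurements on average. The sketch below uses a deliberately crude forward model T = qL/k (steady one-dimensional conduction) in place of the paper's unsteady wellbore model; the true conductivity, heat-flux term and noise level are all assumed for illustration.

```python
import random

# Hedged sketch: Robbins-Monro stochastic approximation for a conductivity
# estimate. The forward model T = q_l / k and all numbers are illustrative
# assumptions, far simpler than the wellbore model used in the paper.

def estimate_conductivity(k0=1.0, true_k=2.5, q_l=10.0, steps=20000, seed=3):
    rng = random.Random(seed)
    k = k0
    for n in range(1, steps + 1):
        t_obs = q_l / true_k + rng.gauss(0.0, 0.1)  # noisy well-log reading
        t_mod = q_l / k                              # simple forward model
        k -= (1.0 / n) * (t_obs - t_mod)             # Robbins-Monro update
        k = max(k, 0.1)                              # keep the guess physical
    return k

k_hat = estimate_conductivity()
```

The decaying gain 1/n averages out the measurement noise, so the iterate settles at the conductivity for which the model matches the mean logged temperature.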
International Nuclear Information System (INIS)
Kawashima, N.; Katori, M.; Tsallis, C.; Suzuki, M.
1989-01-01
A general procedure to study critical phenomena of magnetic systems is discussed. It consists of a systematic series of Landau-like approximations (the extended variational method, EVM) and the coherent-anomaly method (CAM). As for susceptibility, the present method is equivalent to the power-series CAM theory. On the other hand, the EVM gives a set of new approximants for other physical quantities. Applications to d-dimensional Ising ferromagnets are also described. The critical points and exponents are estimated with high accuracy. (author)
Furihata, Daisuke
2010-01-01
Nonlinear Partial Differential Equations (PDEs) have become increasingly important in the description of physical phenomena. Unlike Ordinary Differential Equations, PDEs can be used to effectively model multidimensional systems. The methods put forward in Discrete Variational Derivative Method concentrate on a new class of "structure-preserving numerical equations" which improves the qualitative behaviour of the PDE solutions and allows for stable computing. The authors have also taken care to present their methods in an accessible manner, which means that the book will be useful to engineers ...
Gekeler, Simon
2016-01-01
The book provides suggestions on how to start using bionic optimization methods, including pseudo-code examples of each of the important approaches and outlines of how to improve them. The most efficient methods for accelerating the studies are discussed. These include the selection of size and generations of a study’s parameters, modification of these driving parameters, switching to gradient methods when approaching local maxima, and the use of parallel working hardware. Bionic Optimization means finding the best solution to a problem using methods found in nature. As Evolutionary Strategies and Particle Swarm Optimization seem to be the most important methods for structural optimization, we primarily focus on them. Other methods such as neural nets or ant colonies are more suited to control or process studies, so their basic ideas are outlined in order to motivate readers to start using them. A set of sample applications shows how Bionic Optimization works in practice. From academic studies on simple fra...
Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications
2016-10-17
... the good performance of these schemes. In [4], we study spectral collocation methods for functions which are analytic in the open interval but have ... the detailed detonation structure. The efficient parallel AMR-WENO method provides a good tool for these detonation simulations. In [10], a ... with his students a few years ago. This method has now found wide usage in applications. In [11], we give a stability analysis, using both the GKS ...
Conservative diffusions: a constructive approach to Nelson's stochastic mechanics
International Nuclear Information System (INIS)
Carlen, E.A.
1984-01-01
In Nelson's stochastic mechanics, quantum phenomena are described in terms of diffusions instead of wave functions; this thesis is a study of that description. Concern here is with the possibility of describing, as opposed to explaining, quantum phenomena in terms of diffusions. In this direction, the following questions arise: "Do the diffusions of stochastic mechanics - which are formally given by stochastic differential equations with extremely singular coefficients - really exist?" Given that they exist, one can ask, "Do these diffusions have physically reasonable paths with which to study the behavior of physical systems?" These are the questions treated in this thesis. In Chapter I, stochastic mechanics and diffusion theory are reviewed, using the Guerra-Morato variational principle to establish the connection with the Schroedinger equation. Chapter II settles the first of the questions raised above. Using PDE methods, the diffusions of stochastic mechanics are constructed. The result is sufficiently general to be of independent mathematical interest. In Chapter III, potential scattering in stochastic mechanics is treated and direct probabilistic methods of studying quantum scattering problems are discussed. The results provide a solid YES in answer to the second question raised above.
Stochastic volatility of volatility in continuous time
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole; Veraart, Almut
This paper introduces the concept of stochastic volatility of volatility in continuous time and, hence, extends standard stochastic volatility (SV) models to allow for an additional source of randomness associated with greater variability in the data. We discuss how stochastic volatility of volatility can be defined both non-parametrically, where we link it to the quadratic variation of the stochastic variance process, and parametrically, where we propose two new SV models which allow for stochastic volatility of volatility. In addition, we show that volatility of volatility can be estimated ...
STOCHASTIC ASSESSMENT OF NIGERIAN WOOD FOR BRIDGE DECKS
African Journals Online (AJOL)
STOCHASTIC ASSESSMENT OF NIGERIAN WOOD FOR BRIDGE DECKS ... abandoned bridges with defects only in their decks in both rural and urban locations can be effectively ... which can be seen as the detection of rare physical ...
Directory of Open Access Journals (Sweden)
Khairul Salleh Basaruddin
Randomness in the microstructure due to variations in microscopic properties and geometrical information is used to predict the stochastically homogenised properties of cellular media. Two stochastic problems at the micro-scale that commonly occur due to fabrication inaccuracies, degradation mechanisms or natural heterogeneity were analysed using a stochastic homogenisation method based on a first-order perturbation. First, the influence of variation in the adhesive's Young's modulus on the macroscopic properties of an aluminium-adhesive honeycomb structure was investigated. The fluctuations in the microscopic properties were then combined with variation of the microstructure periodicity in a corrugated-core sandwich plate to obtain the variation of the homogenised property. The numerical results show that uncertainties in the microstructure affect the dispersion of the homogenised property. These results indicate the importance of the presented stochastic multi-scale analysis for the design and fabrication of cellular solids when microscopic random variation is considered.
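The contrast between a first-order perturbation estimate and full sampling can be shown on a much simpler stand-in microstructure: elastic layers in series with randomly scattered Young's moduli. The series (harmonic-mean) model, the 5% scatter and the sample count below are illustrative assumptions; the paper treats honeycomb and corrugated-core cells via a first-order perturbation of the homogenisation equations.

```python
import random

# Hedged sketch: first-order perturbation versus Monte Carlo for the
# homogenised modulus of a toy microstructure (layers in series).
# The model and all numbers are illustrative assumptions.

def effective_modulus(moduli):
    """Homogenised modulus of layers in series: the harmonic mean."""
    return len(moduli) / sum(1.0 / e for e in moduli)

N_LAYERS, E0, COV = 10, 70.0, 0.05   # GPa-scale modulus, 5% scatter (assumed)

# First-order perturbation: the mean equals the nominal value, and the
# output scatter is the input scatter averaged over independent layers.
pert_mean = E0
pert_std = E0 * COV / N_LAYERS ** 0.5

rng = random.Random(0)
samples = [effective_modulus([rng.gauss(E0, COV * E0) for _ in range(N_LAYERS)])
           for _ in range(4000)]
mc_mean = sum(samples) / len(samples)
mc_std = (sum((s - mc_mean) ** 2 for s in samples) / (len(samples) - 1)) ** 0.5
```

For small input scatter the first-order estimates track the sampled mean and standard deviation closely; the second-order bias of the harmonic mean only becomes visible at larger coefficients of variation.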
Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods
International Nuclear Information System (INIS)
Hora, H.; Aydin, M.
1992-01-01
Control of the very complex behavior of a laser-irradiated plasma by smoothing with induced spatial incoherence or other methods has been attributed to improving the lateral uniformity of the irradiation. While this is important, numerical hydrodynamic studies show that the very strong temporal pulsation (stuttering) is also largely suppressed by these smoothing methods.
Comparison of deterministic and stochastic methods for time-dependent Wigner simulations
Energy Technology Data Exchange (ETDEWEB)
Shao, Sihong, E-mail: sihong@math.pku.edu.cn [LMAM and School of Mathematical Sciences, Peking University, Beijing 100871 (China); Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria)
2015-11-01
Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, a highly accurate deterministic method utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in depth. In particular, this allows us to identify a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve satisfactory accuracy.
Variational methods and effective actions in string models
International Nuclear Information System (INIS)
Dereli, T.; Tucker, R.W.
1987-01-01
Effective actions motivated by zero-order and first-order actions are examined. Particular attention is devoted to a variational procedure that is consistent with the structure equations involving the Lorentz connection. Attention is drawn to subtleties that can arise in varying higher-order actions and an efficient procedure developed to handle these cases using the calculus of forms. The effect of constrained variations on the field equations is discussed. (author)
International Nuclear Information System (INIS)
Kist, Tarso B.L.; Orszag, M.; Davidovich, L.
1997-01-01
The dynamics of an open system is frequently modeled in terms of a small system S coupled to a reservoir R, the latter having a much larger number of degrees of freedom than S. Usually the dynamics of the S variables are of interest; they can be studied using Langevin equations, master equations, or the path integral formulation. Useful alternatives to the master equation method are the Monte Carlo wave-function method (MCWF) and stochastic Schroedinger equations (SSEs). The MCWF and SSE methods have recently undergone rapid development, both in their theoretical background and in applications to the dynamics of dissipative quantum systems in quantum optics. Even though these alternatives can be shown to be formally equivalent to the master equation approach, they are often regarded as mathematical tricks with no relation to a concrete physical evolution of the system. The advantage of using them is that one deals with state vectors instead of density matrices, thus reducing the total number of matrix elements to be calculated. In this work, we consider the possibility of giving a physical interpretation to these methods in terms of continuous measurements made on the evolving system. We show that physical realizations of the two methods are indeed possible for a mode of the electromagnetic field in a cavity interacting with a continuum of modes corresponding to the field outside the cavity. Two schemes are proposed, consisting of a mode of the electromagnetic field interacting with a beam of Rydberg two-level atoms. In these schemes, the field mode plays the role of the small system and the atomic beam plays the role of the reservoir (an infinitely larger number of degrees of freedom at finite temperature), the interaction between them being given by the Jaynes-Cummings model.
Developments based on stochastic and determinist methods for studying complex nuclear systems
International Nuclear Information System (INIS)
Giffard, F.X.
2000-01-01
In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first is the deterministic method, which is applicable in most practical cases but requires approximations. The other is the Monte Carlo method, which does not make these approximations but generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained with the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easy to implement, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
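The gain from biasing can be seen in a one-line analogue of the shielding problem: estimating the transmission probability exp(-ΣD) through a purely absorbing slab. Analog sampling almost never scores; sampling from a stretched ("biased") free-path distribution and carrying a compensating weight recovers the same expectation with far less variance. This is a generic importance-sampling sketch, not the TRIPOLI-4/ERANOS scheme; the cross-section, slab depth and biased cross-section are assumed.

```python
import math
import random

# Hedged sketch: importance-biased Monte Carlo for a deep-penetration
# transmission probability through a purely absorbing slab. Parameters
# are illustrative assumptions, not from the thesis.

SIGMA, DEPTH = 1.0, 10.0              # cross-section and slab depth (assumed)
EXACT = math.exp(-SIGMA * DEPTH)      # analytic transmission probability

def analog_mc(n, rng):
    """Analog sampling: almost no particle reaches the far side."""
    return sum(1 for _ in range(n) if rng.expovariate(SIGMA) > DEPTH) / n

def biased_mc(n, rng, sigma_b=0.2):
    """Sample stretched free paths; the weight keeps the estimate unbiased."""
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(sigma_b)
        if x > DEPTH:
            # weight = true density / biased density at the sampled path
            total += (SIGMA / sigma_b) * math.exp(-(SIGMA - sigma_b) * x)
    return total / n

rng = random.Random(2024)
analog_est = analog_mc(2000, rng)
biased_est = biased_mc(20000, rng)
```

With these numbers the analog estimator scores at most a handful of histories, while the biased estimator lands within a few percent of exp(-10); an importance map plays the same role in a real code, telling the sampler where to stretch paths.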
Portfolio Optimization with Stochastic Dividends and Stochastic Volatility
Varga, Katherine Yvonne
2015-01-01
We consider an optimal investment-consumption portfolio optimization model in which an investor receives stochastic dividends. As a first problem, we allow the drift of the stock price to be a bounded function. Next, we consider a stochastic volatility model. In each problem, we use the dynamic programming method to derive the Hamilton-Jacobi-Bellman ...
Maximal stochastic transport in the Lorenz equations
Energy Technology Data Exchange (ETDEWEB)
Agarwal, Sahil, E-mail: sahil.agarwal@yale.edu [Program in Applied Mathematics, Yale University, New Haven (United States); Wettlaufer, J.S., E-mail: john.wettlaufer@yale.edu [Program in Applied Mathematics, Yale University, New Haven (United States); Departments of Geology & Geophysics, Mathematics and Physics, Yale University, New Haven (United States); Mathematical Institute, University of Oxford, Oxford (United Kingdom); Nordita, Royal Institute of Technology and Stockholm University, Stockholm (Sweden)
2016-01-08
We calculate the stochastic upper bounds for the Lorenz equations using an extension of the background method. In analogy with Rayleigh–Bénard convection the upper bounds are for heat transport versus Rayleigh number. As might be expected, the stochastic upper bounds are larger than the deterministic counterpart of Souza and Doering [1], but their variation with noise amplitude exhibits interesting behavior. Below the transition to chaotic dynamics the upper bounds increase monotonically with noise amplitude. However, in the chaotic regime this monotonicity depends on the number of realizations in the ensemble; at a particular Rayleigh number the bound may increase or decrease with noise amplitude. The origin of this behavior is the coupling between the noise and unstable periodic orbits, the degree of which depends on the degree to which the ensemble represents the ergodic set. This is confirmed by examining the close returns plots of the full solutions to the stochastic equations and the numerical convergence of the noise correlations. The numerical convergence of both the ensemble and time averages of the noise correlations is sufficiently slow that it is the limiting aspect of the realization of these bounds. Finally, we note that the full solutions of the stochastic equations demonstrate that the effect of noise is equivalent to the effect of chaos.
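A minimal way to reproduce the flavour of these experiments is Euler-Maruyama integration of the Lorenz system with additive noise, using the time average of z as a crude proxy for transport. The classical parameters (sigma=10, rho=28, beta=8/3), the noise amplitude and the integration length are assumed for illustration; the paper's bounds come from the background method, not from direct simulation.

```python
import math
import random

# Hedged sketch: Euler-Maruyama for the Lorenz equations with additive
# noise. Parameters and the transport proxy are illustrative assumptions.

def lorenz_em(steps=20000, dt=1e-3, eps=0.5, seed=1):
    rng = random.Random(seed)
    s, r, b = 10.0, 28.0, 8.0 / 3.0   # classical Lorenz parameters
    x, y, z = 1.0, 1.0, 1.0
    z_sum = 0.0
    sq = math.sqrt(dt)
    for _ in range(steps):
        dx = s * (y - x) * dt + eps * rng.gauss(0.0, sq)
        dy = (x * (r - z) - y) * dt + eps * rng.gauss(0.0, sq)
        dz = (x * y - b * z) * dt + eps * rng.gauss(0.0, sq)
        x, y, z = x + dx, y + dy, z + dz
        z_sum += z
    return z_sum / steps              # time-averaged z as a transport proxy

transport_proxy = lorenz_em()
```

Running ensembles of such trajectories at different noise amplitudes is exactly where the convergence issues mentioned in the abstract appear: the ensemble and time averages of the noise correlations settle very slowly.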
Probability and Cumulative Density Function Methods for the Stochastic Advection-Reaction Equation
Energy Technology Data Exchange (ETDEWEB)
Barajas-Solano, David A.; Tartakovsky, Alexandre M.
2018-01-01
We present a cumulative density function (CDF) method for the probabilistic analysis of $d$-dimensional advection-dominated reactive transport in heterogeneous media. We employ a probabilistic approach in which epistemic uncertainty on the spatial heterogeneity of Darcy-scale transport coefficients is modeled in terms of random fields with given correlation structures. Our proposed CDF method employs a modified Large-Eddy-Diffusivity (LED) approach to close and localize the nonlocal equations governing the one-point PDF and CDF of the concentration field, resulting in a $(d + 1)$-dimensional PDE. Compared to the classical LED localization, the proposed modified LED localization explicitly accounts for the mean-field advective dynamics over the phase space of the PDF and CDF. To illustrate the accuracy of the proposed closure, we apply our CDF method to one-dimensional single-species reactive transport with uncertain, heterogeneous advection velocities and reaction rates modeled as random fields.
Web Applications Vulnerability Management using a Quantitative Stochastic Risk Modeling Method
Directory of Open Access Journals (Sweden)
Sergiu SECHEL
2017-01-01
The aim of this research is to propose a quantitative risk modeling method that reduces the guesswork and uncertainty in the vulnerability and risk assessment of web-based applications, while giving users the flexibility to assess risk according to their risk appetite and tolerance with a high degree of assurance. The research method is based on work done by the OWASP Foundation on this subject, but their risk rating methodology needed debugging and updates in key areas that are presented in this paper. The modified risk modeling method uses Monte Carlo simulations to model risk characteristics that cannot be determined without guesswork. It was tested in vulnerability assessment activities on real production systems, and in theory by assigning discrete uniform assumptions to all risk characteristics (risk attributes) and evaluating the results after 1.5 million rounds of Monte Carlo simulations.
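The "discrete uniform over all risk attributes" test described above can be reproduced in a few lines. The sketch follows the OWASP convention of scoring likelihood and impact factors on a 0-9 scale with overall risk = mean(likelihood factors) x mean(impact factors); the factor counts and the round number here are assumptions for illustration, not the paper's exact configuration.

```python
import random

# Hedged sketch: Monte Carlo risk scoring with discrete uniform factor
# assumptions, in the spirit of the OWASP risk rating method. Factor
# counts and round count are illustrative assumptions.

LIKELIHOOD_FACTORS = 8   # threat-agent + vulnerability factors (assumed)
IMPACT_FACTORS = 8       # technical + business impact factors (assumed)

def simulate(rounds=50_000, seed=42):
    rng = random.Random(seed)
    scores = []
    for _ in range(rounds):
        lik = sum(rng.randint(0, 9) for _ in range(LIKELIHOOD_FACTORS)) / LIKELIHOOD_FACTORS
        imp = sum(rng.randint(0, 9) for _ in range(IMPACT_FACTORS)) / IMPACT_FACTORS
        scores.append(lik * imp)     # overall risk on the 0..81 scale
    return scores

scores = simulate()
mean_risk = sum(scores) / len(scores)
```

Under the uniform assumption the score distribution concentrates around 4.5 x 4.5 = 20.25; in practice each factor would instead be given its own assessed distribution, which is where the flexibility for risk appetite enters.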
Directory of Open Access Journals (Sweden)
Oscar Eduardo Gualdron
2014-12-01
One of the principal difficulties in data analysis and information processing is the representation of the dataset. Normally, one encounters a high number of samples, each with thousands of variables, and in many cases with irrelevant information and noise. Therefore, in order to present findings more clearly, it is necessary to reduce the number of variables. In this paper, a novel variable selection technique for multivariable data analysis, inspired by stochastic methods and designed to work with support vector machines (SVM), is described. The approach is demonstrated in a food application involving the detection of adulteration of olive oil (more expensive) with hazelnut oil (cheaper). Fingerprinting by 1H NMR spectroscopy was used to analyze the different samples. Results show that it is possible to reduce the number of variables without affecting classification results.
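The stochastic selection idea can be sketched as a random search over feature subsets, each subset scored by how well it separates the two classes. The sketch below uses synthetic data and a simple class-mean separation criterion as a stand-in for the SVM cross-validation score used in the paper; the data generator, informative-feature indices and iteration budget are illustrative assumptions.

```python
import random

# Hedged sketch: stochastic variable selection by random subset search.
# Synthetic data and the separation criterion are illustrative assumptions;
# the paper scores subsets with an SVM on 1H NMR fingerprints.

def make_data(n=200, n_feat=20, informative=(0, 3, 7), seed=5):
    rng = random.Random(seed)
    x_rows, labels = [], []
    for i in range(n):
        label = i % 2
        row = [rng.gauss(0.0, 1.0) for _ in range(n_feat)]
        for j in informative:
            row[j] += 2.0 * label          # class shift on informative features
        x_rows.append(row)
        labels.append(label)
    return x_rows, labels

def score(x_rows, labels, subset):
    """Mean between-class separation over the chosen features."""
    half = len(x_rows) / 2
    sep = 0.0
    for j in subset:
        m0 = sum(r[j] for r, l in zip(x_rows, labels) if l == 0) / half
        m1 = sum(r[j] for r, l in zip(x_rows, labels) if l == 1) / half
        sep += abs(m1 - m0)
    return sep / len(subset)

def stochastic_select(x_rows, labels, k=3, iters=500, seed=9):
    rng = random.Random(seed)
    feats = range(len(x_rows[0]))
    best, best_score = None, -1.0
    for _ in range(iters):
        cand = rng.sample(feats, k)        # random candidate subset
        s = score(x_rows, labels, cand)
        if s > best_score:
            best, best_score = sorted(cand), s
    return best

X, y = make_data()
selected = stochastic_select(X, y)
```

Because the informative features carry a fixed class shift while the rest are noise, the random search reliably converges on subsets dominated by the informative indices, mirroring the variable reduction reported in the paper.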
2016-05-11
... orders of magnitude less memory than our competitors. We implemented our methods on MapReduce with two widely applicable optimization techniques: local disk caching and greedy row assignment. They sped up our methods by up to 98.2x and the competitors by up to 5.9x.
The complexity of interior point methods for solving discounted turn-based stochastic games
DEFF Research Database (Denmark)
Hansen, Thomas Dueholm; Ibsen-Jensen, Rasmus
2013-01-01
... for general 2TBSGs. This implies that a number of interior point methods can be used to solve 2TBSGs. We consider two such algorithms: the unified interior point method of Kojima, Megiddo, Noma, and Yoshise, and the interior point potential reduction algorithm of Kojima, Megiddo, and Ye. The algorithms run ... n states and discount factor γ we get κ=Θ(n(1−γ)²), −δ=Θ(n√(1−γ)), and 1/θ=Θ(n(1−γ)²) in the worst case. The lower bounds for κ, −δ, and 1/θ are all obtained using the same family of deterministic games.
Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula
2011-01-01
Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenges of new methods of statistical analysis and modeling. Considering a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.
A Stochastic Multiscale Method for the Elastic Wave Equations Arising from Fiber Composites
Babuska, Ivo; Motamed, Mohammad; Tempone, Raul
2016-01-01
... The method aims at approximating statistical moments of some given quantities of interest, such as stresses, in regions of relatively small size, e.g. hot spots or zones that are deemed vulnerable to failure. For a fiber-reinforced cross-plied laminate, we ...
Markov stochasticity coordinates
International Nuclear Information System (INIS)
Eliazar, Iddo
2017-01-01
Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method for the randomness of Markov dynamics. The method, termed Markov stochasticity coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
Dobramysl, U; Holcman, D
2018-02-15
Is it possible to recover the position of a source from the steady-state fluxes of Brownian particles to small absorbing windows located on the boundary of a domain? To address this question, we develop a numerical procedure to avoid tracking Brownian trajectories in the entire infinite space. Instead, we generate particles near the absorbing windows, computed from the analytical expression of the exit probability. When the Brownian particles are generated by a steady-state gradient at a single point, we compute asymptotically the fluxes to small absorbing holes distributed on the boundary of half-space and on a disk in two dimensions, which agree with stochastic simulations. We also derive an expression for the splitting probability between small windows using the matched asymptotic method. Finally, when there are more than two small absorbing windows, we show how to reconstruct the position of the source from the diffusion fluxes. The present approach provides a computational first principle for the mechanism of sensing a gradient of diffusing particles, a ubiquitous problem in cell biology.
Arcos-García, Álvaro; Álvarez-García, Juan A; Soria-Morillo, Luis M
2018-03-01
This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted over publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network comprising Convolutional layers and Spatial Transformer Networks. These trials are designed to measure the impact of diverse factors, with the end goal of designing a Convolutional Neural Network that can improve the state of the art in the traffic sign classification task. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms, such as SGD, SGD-Nesterov, RMSprop and Adam, are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The proposed Convolutional Neural Network reports an accuracy of 99.71% on the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods while being more efficient in terms of memory requirements.
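The difference between a plain and an adaptive optimiser can be made concrete on a toy objective. The sketch below implements vanilla SGD and Adam from their published update rules and minimises f(w) = (w - 3)^2; the learning rates and moment coefficients are common defaults, not the values tuned in the paper.

```python
import math

# Hedged sketch: vanilla SGD versus Adam on a 1-D quadratic loss.
# Hyperparameters are common defaults, assumed for illustration.

def grad(w):
    return 2.0 * (w - 3.0)            # gradient of f(w) = (w - 3)^2

def sgd(w=0.0, lr=0.1, steps=200):
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def adam(w=0.0, lr=0.1, steps=200, b1=0.9, b2=0.999, eps=1e-8):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g          # first-moment estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment estimate
        m_hat = m / (1 - b1 ** t)          # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

w_sgd, w_adam = sgd(), adam()
```

On this well-conditioned problem plain SGD contracts geometrically to the optimum, while Adam takes near-constant-size steps and then oscillates slightly around it; the adaptive normalisation pays off instead on badly scaled, noisy gradients such as those in deep network training.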
International Nuclear Information System (INIS)
Tan, Chee Wei; Green, Tim C.; Hernandez-Aramburo, Carlos A.
2010-01-01
This paper presents a stochastic simulation using the Monte Carlo technique to size a battery to meet the dual objectives of demand shifting at peak electricity cost times and outage protection in BIPV (building integrated photovoltaic) systems. Both functions require battery storage, and battery sizing by numerical optimization is widely used. However, the weather conditions, outage events and demand peaks are not deterministic in nature. Therefore, the sizing of battery storage capacity should also be based on a probabilistic approach. The Monte Carlo simulation is a rigorous method for sizing a BIPV system as it takes into account real building load profiles, weather information and the local historical outage distribution. The simulation is split on a seasonal basis for the analysis of demand shifting and outage events, in order to match the seasonal weather conditions and load profiles. Five PV (photovoltaic) configurations covering different areas and orientations are assessed. The simulation output includes the predicted PV energy yield and the amount of energy required for demand management and outage events. Consumers can therefore base sizing decisions on historical data, local outage statistics and the success rate of meeting the required demand shift. Finally, the economic evaluations, together with the sensitivity analysis and the assessment of customers' outage costs, are discussed.
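The probabilistic sizing idea can be sketched with synthetic draws: each simulated day needs energy for an evening demand shift plus, occasionally, outage ride-through, and the battery is sized at the quantile matching the desired success rate. The load level, outage statistics and distributions below are illustrative assumptions, not the building data used in the paper.

```python
import random

# Hedged sketch: Monte Carlo battery sizing at a target success rate.
# All distributions and numbers are illustrative assumptions.

def size_battery(days=10_000, load_kw=2.0, target=0.95, seed=7):
    rng = random.Random(seed)
    needs = []
    for _ in range(days):
        shift_kwh = rng.uniform(2.0, 6.0)                       # evening demand shift
        outage_h = rng.expovariate(1.0) if rng.random() < 0.05 else 0.0
        needs.append(shift_kwh + load_kw * outage_h)            # daily energy need
    needs.sort()
    return needs[int(target * days)]    # capacity covering `target` of days

capacity_kwh = size_battery()
```

Raising the target success rate pushes the capacity into the tail driven by rare long outages, which is precisely the trade-off against outage cost that the paper's economic evaluation quantifies.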
Energy Technology Data Exchange (ETDEWEB)
Zhang Guangjun [State Key Laboratory of Mechanical Structural Strength and Vibration, School of Architectural Engineering and Mechanics, Xi'an Jiaotong University, Xi'an, Shaanxi (China); Xu Jianxue [State Key Laboratory of Mechanical Structural Strength and Vibration, School of Architectural Engineering and Mechanics, Xi'an Jiaotong University, Xi'an, Shaanxi (China)], e-mail: jxxu@mail.xjtu.edu.cn
2006-02-01
This paper analyzes, with the method of moments, the stochastic resonance induced by a novel transition of a one-dimensional bistable system in the neighborhood of its bifurcation point. The transition refers to the motion of the system, in the presence of internal noise, between the single potential well of the stable fixed point before the bifurcation and the double-well potential of the two coexisting stable fixed points after the bifurcation. The results show that a semi-analytical description of stochastic resonance near the bifurcation point can be obtained, and that it agrees qualitatively with Monte Carlo simulation. The occurrence of stochastic resonance is related to the bifurcation of the moment equations of the noisy nonlinear dynamical system, which induces a transfer of the energy of the ensemble average (Ex) of the system response among frequency components; when this energy concentrates on the frequency of the input signal, stochastic resonance occurs.
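The resonance phenomenon itself can be illustrated with a standard bistable Langevin model driven by a weak periodic signal. The sketch below (Euler-Maruyama integration with illustrative parameters, not the paper's moment-equation analysis) shows the response at the input frequency peaking at an intermediate noise intensity:

```python
import numpy as np

def response_amp(D, A=0.1, omega=0.1, dt=0.05, T=6000.0, seed=1):
    """Euler-Maruyama integration of the bistable Langevin equation
    dx = (x - x**3 + A*cos(omega*t)) dt + sqrt(2*D) dW,
    returning the output amplitude at the drive frequency."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    t = np.arange(n) * dt
    x = np.empty(n); x[0] = 1.0
    noise = np.sqrt(2 * D * dt) * rng.standard_normal(n)
    for i in range(n - 1):
        drift = x[i] - x[i]**3 + A * np.cos(omega * t[i])
        x[i + 1] = x[i] + drift * dt + noise[i]
    m = n // 10                      # discard the transient
    return 2 * np.abs(np.mean(x[m:] * np.exp(-1j * omega * t[m:])))

for D in (0.02, 0.25, 1.5):          # weak, near-optimal, strong noise
    print(D, round(response_amp(D), 3))
```

At weak noise the particle stays in one well (small intrawell response), near the optimal noise level the interwell hopping synchronises with the drive and the amplitude grows, and at strong noise the coherence is destroyed again.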
Chang, Mou-Hsiung
2015-01-01
The classical probability theory initiated by Kolmogorov and its quantum counterpart, pioneered by von Neumann, were created at about the same time in the 1930s, but development of the quantum theory has trailed far behind. Although highly appealing, the quantum theory has a steep learning curve, requiring tools from both probability and analysis and a facility for combining the two viewpoints. This book is a systematic, self-contained account of the core of quantum probability and quantum stochastic processes for graduate students and researchers. The only assumed background is knowledge of the basic theory of Hilbert spaces, bounded linear operators, and classical Markov processes. From there, the book introduces additional tools from analysis, and then builds the quantum probability framework needed to support applications to quantum control and quantum information and communication. These include quantum noise, quantum stochastic calculus, stochastic quantum differential equations, quantum Markov semigrou...
Energy Technology Data Exchange (ETDEWEB)
Girardi, E.; Ruggieri, J.M. [CEA Cadarache (DER/SPRC/LEPH), 13 - Saint-Paul-lez-Durance (France). Dept. d' Etudes des Reacteurs; Santandrea, S. [CEA Saclay, Dept. Modelisation de Systemes et Structures DM2S/SERMA/LENR, 91 - Gif sur Yvette (France)
2005-07-01
This paper describes a recently developed extension of our 'multi-methods, multi-domains' (MM-MD) method for the solution of the multigroup transport equation. Based on a domain decomposition technique, our approach allows the one-group equation to be treated by several numerical methods employed cooperatively. In this work, we describe the coupling between the Method of Characteristics (integro-differential equation, unstructured meshes) and the Variational Nodal Method (even-parity equation, Cartesian meshes). The coupling method is then applied to the benchmark model of the Phebus experimental facility (CEA Cadarache). Our domain decomposition method gives us the capability to employ a very fine mesh to describe a particular fuel bundle with an appropriate numerical method (MOC), while using a much larger mesh size in the rest of the core in conjunction with a coarse-mesh method (VNM). This application shows the benefits of our MM-MD approach in terms of accuracy and computing time: the domain decomposition method allows us to reduce the CPU time while preserving good accuracy of the neutronic indicators: reactivity, core-to-bundle power coupling coefficient and flux error. (authors)
Directory of Open Access Journals (Sweden)
Mellah HACEN
2012-08-01
The induction machine, because of its robustness and low cost, is commonly used in industry. Nevertheless, like every type of electrical machine, it suffers from some limitations. The most important one is the working temperature, which is the dimensioning parameter for the definition of the nominal working point and the machine lifetime. As a result, a strong demand for thermal monitoring methods has appeared in the industrial sector. In this context, adding temperature sensors is not acceptable, and the studied methods tend to use sensorless approaches such as observers or parameter estimators like the extended Kalman filter (EKF). The important criteria are then reliability, computational cost and real-time implementation.
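As a sketch of the sensorless idea, the toy EKF below estimates a slowly varying winding temperature from noisy stator-resistance readings. The heating model, the resistance law and every number are invented for illustration; they are not machine data from the paper.

```python
import numpy as np

# Hypothetical sensorless setup: winding temperature T (deg C) follows a slow
# first-order heating model and is inferred from noisy stator-resistance
# "measurements" R = R0 * (1 + alpha * T).
R0, alpha = 0.5, 0.004            # ohm, 1/degC (illustrative values)
a, u = 0.995, 0.4                 # T[k+1] = a*T[k] + u, heating toward ~80 C
Q, Rn = 0.05, 1e-4                # process / measurement noise variances

rng = np.random.default_rng(3)
T_true, T_est, P = 20.0, 0.0, 100.0
for _ in range(400):
    # "Plant": true temperature evolves, resistance is measured with noise.
    T_true = a * T_true + u + rng.normal(0, np.sqrt(Q))
    z = R0 * (1 + alpha * T_true) + rng.normal(0, np.sqrt(Rn))
    # EKF predict step
    T_pred = a * T_est + u
    P = a * P * a + Q
    # EKF update step; H is the Jacobian dR/dT = R0*alpha (the measurement
    # is linear here, so the EKF reduces to an ordinary Kalman filter)
    H = R0 * alpha
    K = P * H / (H * P * H + Rn)
    T_est = T_pred + K * (z - R0 * (1 + alpha * T_pred))
    P = (1 - K * H) * P

print(round(T_true, 1), round(T_est, 1))
```

The estimate tracks the true temperature to within a few degrees without any temperature sensor, which is the behaviour the reviewed methods aim for on a real machine model.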
Stochastic ferromagnetism analysis and numerics
Brzezniak, Zdzislaw; Neklyudov, Mikhail; Prohl, Andreas
2013-01-01
This monograph examines magnetization dynamics at elevated temperatures, which can be described by the stochastic Landau-Lifshitz-Gilbert equation (SLLG). Comparative computational studies with the stochastic model are included. Constructive tools such as finite element methods are used to derive the theoretical results, which are then used for computational studies.
Malliavin Calculus With Applications to Stochastic Partial Differential Equations
Sanz-Solé, Marta
2005-01-01
Developed in the 1970s to study the existence and smoothness of density for the probability laws of random vectors, Malliavin calculus--a stochastic calculus of variation on the Wiener space--has proven fruitful in many problems in probability theory, particularly in probabilistic numerical methods in financial mathematics.This book presents applications of Malliavin calculus to the analysis of probability laws of solutions to stochastic partial differential equations driven by Gaussian noises that are white in time and coloured in space. The first five chapters introduce the calculus itself
Analysis of spin and gauge models with variational methods
International Nuclear Information System (INIS)
Dagotto, E.; Masperi, L.; Moreo, A.; Della Selva, A.; Fiore, R.
1985-01-01
Since independent-site (link) or independent-link (plaquette) variational states enhance the order or the disorder, respectively, in the treatment of spin (gauge) models, we prove that mixed states are able to improve the critical coupling while giving the qualitatively correct behavior of the relevant parameters
Perturbative vs. variational methods in the study of carbon nanotubes
DEFF Research Database (Denmark)
Cornean, Horia; Pedersen, Thomas Garm; Ricaud, Benjamin
2007-01-01
Recent two-photon photo-luminescence experiments give accurate data for the ground and first excited excitonic energies at different nanotube radii. In this paper we compare the analytic approximations proved in [CDR], with a standard variational approach. We show an excellent agreement at suffic...
Variational method for inverting the Kohn-Sham procedure
International Nuclear Information System (INIS)
Kadantsev, Eugene S.; Stott, M.J.
2004-01-01
A procedure based on a variational principle is developed for determining the local Kohn-Sham (KS) potential corresponding to a given ground-state electron density. This procedure is applied to calculate the exchange-correlation part of the effective Kohn-Sham (KS) potential for the neon atom and the methane molecule
Tam, Vincent H; Kabbara, Samer
2006-10-01
Monte Carlo simulations (MCSs) are increasingly being used to predict the pharmacokinetic variability of antimicrobials in a population. However, various MCS approaches may differ in the accuracy of the predictions. We compared the performance of 3 different MCS approaches using a data set with known parameter values and dispersion. Ten concentration-time profiles were randomly generated and used to determine the best-fit parameter estimates. Three MCS methods were subsequently used to simulate the AUC(0-infinity) of the population, using the central tendency and dispersion of the following in the subject sample: 1) K and V; 2) clearance and V; 3) AUC(0-infinity). In each scenario, 10000 subject simulations were performed. Compared to true AUC(0-infinity) of the population, mean biases by various methods were 1) 58.4, 2) 380.7, and 3) 12.5 mg h L(-1), respectively. Our results suggest that the most realistic MCS approach appeared to be based on the variability of AUC(0-infinity) in the subject sample.
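The sensitivity of MCS output to the chosen parameterisation can be reproduced with a toy one-compartment model in which AUC(0-infinity) = dose/CL. The sketch compares resampling from the clearance statistics (with a naive normal assumption) against resampling from the AUC statistics directly; all distributions and numbers are invented for illustration and are not the paper's data set.

```python
import numpy as np

rng = np.random.default_rng(7)
dose = 500.0                                   # mg, hypothetical

# "True" population: clearance CL is log-normally distributed; draw a
# small subject sample, as in a pharmacokinetic study.
CL = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=10)
true_auc_mean = np.mean(dose / CL)

# Approach A: resample CL from its sample mean/SD under a normal
# assumption, then compute AUC = dose / CL per simulated subject.
cl_sim = rng.normal(CL.mean(), CL.std(ddof=1), 10_000)
cl_sim = cl_sim[cl_sim > 0.5]                  # guard against tiny/negative CL
auc_from_cl = np.mean(dose / cl_sim)

# Approach B: resample AUC directly from the subject sample's AUC mean/SD.
auc_obs = dose / CL
auc_direct = np.mean(rng.normal(auc_obs.mean(), auc_obs.std(ddof=1), 10_000))

print(round(true_auc_mean, 1), round(auc_from_cl, 1), round(auc_direct, 1))
```

Because 1/CL is convex, simulating through the clearance distribution inflates the mean AUC, while resampling the AUC variability directly stays close to the sample value, mirroring the ranking of biases reported in the abstract.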
The cross-section dividing method and a stochastic interpretation of the moliere expansion
International Nuclear Information System (INIS)
Nakatsuka, T.; Okei, K.
2004-01-01
Properties of the Moliere scattering process are investigated through the cross-section dividing method. We divide the single scattering at an adequate angle into moderate scattering and large-angle scattering. We have found that the expansion parameter, or shape parameter, B of Moliere, which corresponds to splitting the single scattering at e^{B/2} times the screening angle, acts as the probability parameter for receiving large-angle scattering. A mathematical formulation to derive the angular distribution through the cross-section dividing method is proposed. Small distortions from the Gaussian distribution, due to the higher Fourier components, were found in the central distribution produced by the moderate scattering of Moliere. Splitting angles smaller than Moliere's, e.g. the one-scattering angle χ_C, will be effective for rapid sampling of the Moliere angular distribution, giving almost Gaussian central distributions as the product of moderate scattering, and low-frequency single scatterings as the product of large-angle scattering. (author)
Momentum Maps and Stochastic Clebsch Action Principles
Cruzeiro, Ana Bela; Holm, Darryl D.; Ratiu, Tudor S.
2018-01-01
We derive stochastic differential equations whose solutions follow the flow of a stochastic nonlinear Lie algebra operation on a configuration manifold. For this purpose, we develop a stochastic Clebsch action principle, in which the noise couples to the phase space variables through a momentum map. This special coupling simplifies the structure of the resulting stochastic Hamilton equations for the momentum map. In particular, these stochastic Hamilton equations collectivize for Hamiltonians that depend only on the momentum map variable. The Stratonovich equations are derived from the Clebsch variational principle and then converted into Itô form. In comparing the Stratonovich and Itô forms of the stochastic dynamical equations governing the components of the momentum map, we find that the Itô contraction term turns out to be a double Poisson bracket. Finally, we present the stochastic Hamiltonian formulation of the collectivized momentum map dynamics and derive the corresponding Kolmogorov forward and backward equations.
Tutu, Hiroki
2011-06-01
Stochastic resonance (SR) enhanced by time-delayed feedback control is studied. The system in the absence of control is described by a Langevin equation for a bistable system, and possesses a usual SR response. The control with the feedback loop, the delay time of which equals one-half of the period (2π/Ω) of the input signal, gives rise to a noise-induced oscillatory switching cycle between two states in the output time series, while its average frequency is slightly smaller than Ω in a small noise regime. As the noise intensity D approaches an appropriate level, the noise constructively works to adapt the frequency of the switching cycle to Ω, and this changes the dynamics into a state wherein the phase of the output signal is entrained to that of the input signal from its phase-slipped state. The behavior is characterized by power loss of the external signal or response function. This paper deals with the response function based on a dichotomic model. A method of delay-coordinate series expansion, which reduces a non-Markovian transition probability flux to a series of memory fluxes on a discrete delay-coordinate system, is proposed. Its primitive implementation suggests that the method can be a potential tool for a systematic analysis of the SR phenomenon with a delayed feedback loop. We show that the D-dependent behavior of the poles of a finite Laplace transform of the response function qualitatively characterizes the structure of the power loss, and we also show analytical results for the correlation function and the power spectral density.
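A crude numerical sketch of such a delayed-feedback loop is given below: a bistable Langevin system with a sign-type feedback term delayed by half the input period, integrated by Euler-Maruyama. The feedback form and all parameters are assumptions for illustration; the paper's treatment is analytical (dichotomic model and delay-coordinate expansion), not simulation.

```python
import numpy as np

def delayed_sr(D=0.3, K=0.2, A=0.1, omega=0.1, dt=0.05, T=4000.0, seed=2):
    """Euler-Maruyama integration of
    dx = (x - x**3 + K*sign(x(t - tau)) + A*cos(omega*t)) dt + sqrt(2*D) dW
    with tau equal to half the drive period; returns the output amplitude
    at the drive frequency (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    tau = np.pi / omega                 # half of the input period 2*pi/omega
    d = int(tau / dt)                   # delay expressed in time steps
    t = np.arange(n) * dt
    x = np.zeros(n); x[0] = 1.0
    noise = np.sqrt(2 * D * dt) * rng.standard_normal(n)
    for i in range(n - 1):
        fb = K * np.sign(x[i - d]) if i >= d else 0.0
        drift = x[i] - x[i]**3 + fb + A * np.cos(omega * t[i])
        x[i + 1] = x[i] + drift * dt + noise[i]
    m = n // 10                         # discard the transient
    return 2 * np.abs(np.mean(x[m:] * np.exp(-1j * omega * t[m:])))

print(round(delayed_sr(), 3))
```

With the half-period delay, a switch of the output state feeds back half a cycle later as a push in the opposite direction, which is the mechanism behind the noise-induced switching cycle described above.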
Colour based fire detection method with temporal intensity variation filtration
Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.
2015-02-01
The development of video and computing technologies and computer vision makes automatic fire detection from video information possible. Within that project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour-based fire detection algorithm. However, colour information alone is not enough to detect fire properly, mainly because the scene may contain many objects whose colour is similar to that of fire. The temporal intensity variation of pixels, averaged over a series of several frames, is therefore used to separate such objects from fire. The algorithm works robustly and was realised as a computer program using the OpenCV library.
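A numpy-only sketch of the two-stage test follows; the colour rule and the variance threshold are invented heuristics for illustration (the actual program is built on OpenCV):

```python
import numpy as np

def fire_mask(frames):
    """Candidate fire pixels from a stack of RGB frames (T, H, W, 3).
    Colour rule (a common heuristic): R > G > B and R above a threshold.
    Temporal rule: fire flickers, so keep only pixels whose intensity
    varies strongly across the frame series."""
    f = frames.astype(float)
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    colour = (r > 180) & (r > g) & (g > b)      # per-frame colour test
    colour = colour.mean(axis=0) > 0.5          # colour match in most frames
    intensity = f.mean(axis=-1)                 # (T, H, W) grey-level stack
    flicker = intensity.std(axis=0) > 10.0      # temporal variation test
    return colour & flicker

# Synthetic clip: a uniformly orange scene (fire-coloured but steady)
# containing one flickering fire-like patch.
rng = np.random.default_rng(0)
T, H, W = 16, 32, 32
frames = np.zeros((T, H, W, 3))
frames[:] = (220, 120, 40)                                  # static orange
frames[:, 8:24, 20:28] += rng.normal(0, 25, (T, 16, 8, 1))  # flickering patch
mask = fire_mask(np.clip(frames, 0, 255))
print(mask[:, :16].mean(), mask[10:22, 21:27].mean())
```

The static orange region passes the colour test but fails the flicker test, so only the flickering patch survives, which is exactly the false-positive filtering the abstract describes.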
Some new mathematical methods for variational objective analysis
Wahba, Grace; Johnson, Donald R.
1994-01-01
Numerous results were obtained relevant to remote sensing, variational objective analysis, and data assimilation. A list of publications relevant in whole or in part is attached. The principal investigator gave many invited lectures, disseminating the results to the meteorological community as well as the statistical community. A list of invited lectures at meetings is attached, as well as a list of departmental colloquia at various universities and institutes.
International Nuclear Information System (INIS)
Bisognano, J.; Leemann, C.
1982-03-01
Stochastic cooling is the damping of betatron oscillations and momentum spread of a particle beam by a feedback system. In its simplest form, a pickup electrode detects the transverse positions or momenta of particles in a storage ring, and the signal produced is amplified and applied downstream to a kicker. The time delay of the cable and electronics is designed to match the transit time of particles along the arc of the storage ring between the pickup and kicker, so that an individual particle receives the amplified version of the signal it produced at the pickup. If there were only a single particle in the ring, it is obvious that betatron oscillations and momentum offset could be damped. However, in addition to its own signal, a particle receives signals from other beam particles. In the limit of an infinite number of particles, no damping could be achieved; we have Liouville's theorem with constant density of the phase space fluid. For a finite, albeit large, number of particles, there remains a residue of the single-particle damping which is of practical use in accumulating low-phase-space-density beams of particles such as antiprotons. It was the realization of this fact that led to the invention of stochastic cooling by S. van der Meer in 1968. Since its conception, stochastic cooling has been the subject of much theoretical and experimental work. The earliest experiments were performed at the ISR in 1974, with the subsequent ICE studies firmly establishing the stochastic cooling technique. This work directly led to the design and construction of the Antiproton Accumulator at CERN and the beginnings of p anti-p colliding beam physics at the SPS. Experiments in stochastic cooling have been performed at Fermilab in collaboration with LBL, and a design is currently under development for an anti-p accumulator for the Tevatron.
Sequential neural models with stochastic layers
DEFF Research Database (Denmark)
Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich
2016-01-01
How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...
Stochastic modelling of turbulence
DEFF Research Database (Denmark)
Sørensen, Emil Hedevang Lohse
previously been shown to be closely connected to the energy dissipation. The incorporation of the small scale dynamics into the spatial model opens the door to a fully fledged stochastic model of turbulence. Concerning the interaction of wind and wind turbine, a new method is proposed to extract wind turbine...
Stochastic Control - External Models
DEFF Research Database (Denmark)
Poulsen, Niels Kjølstad
2005-01-01
This note is devoted to control of stochastic systems described in discrete time. We are concerned with external descriptions or transfer-function models, where we have a dynamic model for the input-output relation only (i.e., no direct internal information). The methods are based on LTI systems...
Variational methods for crystalline microstructure analysis and computation
Dolzmann, Georg
2003-01-01
Phase transformations in solids typically lead to surprising mechanical behaviour with far reaching technological applications. The mathematical modeling of these transformations in the late 80s initiated a new field of research in applied mathematics, often referred to as mathematical materials science, with deep connections to the calculus of variations and the theory of partial differential equations. This volume gives a brief introduction to the essential physical background, in particular for shape memory alloys and a special class of polymers (nematic elastomers). Then the underlying mathematical concepts are presented with a strong emphasis on the importance of quasiconvex hulls of sets for experiments, analytical approaches, and numerical simulations.
Eichhorn, Ralf; Aurell, Erik
2014-04-01
'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response
Perfect Form: Variational Principles, Methods, and Applications in Elementary Physics
International Nuclear Information System (INIS)
Isenberg, C
1997-01-01
This short book is concerned with the physical applications of variational principles of the calculus. It is intended for undergraduate students who have taken some introductory lectures on the subject and have been exposed to Lagrangian and Hamiltonian mechanics. Throughout the book the author emphasizes the historical background to the subject and provides numerous problems, mainly from the fields of mechanics and optics. Some of these problems are provided with an answer, while others, regretfully, are not. It would have been an added help to the undergraduate reader if complete solutions could have been provided in an appendix. The introductory chapter is concerned with Fermat's Principle and image formation. This is followed by the derivation of the Euler-Lagrange equation. The third chapter returns to the subject of optical paths without making the link with a mechanical variational principle - that comes later. Chapters on the subjects of minimum potential energy, least action and Hamilton's principle follow. This volume provides an 'easy read' for a student keen to learn more about the subject. It is well illustrated and will make a useful addition to all undergraduate physics libraries. (book review)
Tzong-Shi Lu; Szu-Yu Yiao; Kenneth Lim; Roderick V. Jensen; Li-Li Hsiao
2010-01-01
Background: The identification of differences in protein expression resulting from methodical variations is an essential component of the interpretation of true, biologically significant results. Aims: We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. Material & Methods: Differential protein expression patterns were assessed by western bl...
Iterative method of the parameter variation for solution of nonlinear functional equations
International Nuclear Information System (INIS)
Davidenko, D.F.
1975-01-01
The iteration method of parameter variation is used for solving nonlinear functional equations in Banach spaces. The authors consider some methods for numerical integration of ordinary first-order differential equations and construct the relevant iteration methods of parameter variation, both one- and multifactor. They also discuss problems of mathematical substantiation of the method, study the conditions and rate of convergence, estimate the error. The paper considers the application of the method to specific functional equations
Directory of Open Access Journals (Sweden)
Yonghan Choi
2014-01-01
An adjoint sensitivity-based data assimilation (ASDA) method is proposed and applied to a heavy rainfall case over the Korean Peninsula. The heavy rainfall case, which occurred on 26 July 2006, caused torrential rainfall over the central part of the Korean Peninsula. The mesoscale convective system (MCS) related to the heavy rainfall was classified as training line/adjoining stratiform (TL/AS) type for the earlier period, and back-building (BB) type for the later period. In the ASDA method, an adjoint model is run backwards with the forecast-error gradient as input, and the adjoint sensitivity of the forecast error to the initial condition is scaled by an optimal scaling factor. The optimal scaling factor is determined by minimising the observational cost function of the four-dimensional variational (4D-Var) method, and the scaled sensitivity is added to the original first guess. Finally, the observations at the analysis time are assimilated using a 3D-Var method with the improved first guess. The simulated rainfall distribution is shifted northeastward compared to the observations when no radar data are assimilated or when radar data are assimilated using the 3D-Var method. The rainfall forecasts are improved when radar data are assimilated using the 4D-Var or ASDA method. Simulated atmospheric fields such as horizontal winds, temperature, and water vapour mixing ratio are also improved via the 4D-Var or ASDA method. Due to the improvement in the analysis, subsequent forecasts appropriately simulate the observed features of the TL/AS- and BB-type MCSs and the corresponding heavy rainfall. The computational cost associated with the ASDA method is significantly lower than that of the 4D-Var method.
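The 3D-Var update used in the final assimilation step can be sketched in its textbook matrix form; the three-variable state and all covariances below are invented for illustration, and none of the ASDA machinery (adjoint run, optimal scaling) is reproduced here.

```python
import numpy as np

# Minimal 3D-Var analysis step:
#   x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b),
# which minimises J(x) = (x-x_b)^T B^{-1} (x-x_b) + (y-Hx)^T R^{-1} (y-Hx).
x_b = np.array([280.0, 285.0, 290.0])        # background state (e.g. temperatures)
B = np.array([[2.0, 1.0, 0.0],               # background-error covariance
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
H = np.array([[1.0, 0.0, 0.0],               # observe the 1st and 3rd variables
              [0.0, 0.0, 1.0]])
R = np.eye(2) * 0.5                          # observation-error covariance
y = np.array([281.0, 288.0])                 # observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
x_a = x_b + K @ (y - H @ x_b)
print(x_a)
```

Note that the unobserved middle variable is also adjusted, through the off-diagonal terms of B; in the ASDA method the first guess fed into this step has already been improved by the scaled adjoint sensitivity.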
Stochastic optimization methods
Marti, Kurt
2008-01-01
Optimization problems arising in practice involve random model parameters. This book features many illustrations, several examples, and applications to concrete problems from engineering and operations research.
Czech Academy of Sciences Publication Activity Database
Lánský, Petr; Ditlevsen, S.
2008-01-01
Vol. 99, No. 4-5 (2008), pp. 253-262. ISSN 0340-1200. R&D Projects: GA MŠk(CZ) LC554; GA AV ČR(CZ) 1ET400110401. Institutional research plan: CEZ:AV0Z50110509. Keywords: parameter estimation; stochastic diffusion neuronal model. Subject RIV: BO - Biophysics. Impact factor: 1.935, year: 2008
International Nuclear Information System (INIS)
Lino, A.T.; Takahashi, E.K.; Leite, J.R.; Ferraz, A.C.
1988-01-01
The band structure of metallic sodium is calculated using, for the first time, the self-consistent-field variational cellular method. In order to implement self-consistency in the variational cellular theory, the crystal electronic charge density was calculated within the muffin-tin approximation. Comparison between our results and those derived from other calculations leads to the conclusion that the proposed self-consistent version of the variational cellular method is fast and accurate. (author)
Research on nonlinear stochastic dynamical price model
International Nuclear Information System (INIS)
Li Jiaorui; Xu Wei; Xie Wenxian; Ren Zhengzheng
2008-01-01
In consideration of the many uncertain factors existing in economic systems, a nonlinear stochastic dynamical price model subjected to Gaussian white noise excitation is proposed, based on a deterministic model. A one-dimensional averaged Itô stochastic differential equation for the model is derived using the stochastic averaging method and applied to investigate the stability of the trivial solution and the first-passage failure of the stochastic price model. The stochastic price model and the methods presented in this paper are verified by numerical studies.
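As an illustration of this class of model, the sketch below integrates a mean-reverting price equation with multiplicative Gaussian white noise by the Euler-Maruyama scheme; the equation and all parameters are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

# Illustrative stochastic price model (not the paper's exact equation):
#   dp = k * (p_eq - p) dt + sigma * p dW
# i.e. mean reversion toward an equilibrium price p_eq, with Gaussian
# white noise whose strength scales with the price level.
k, p_eq, sigma = 0.5, 10.0, 0.1
dt, n_steps, n_paths = 0.01, 5000, 200

rng = np.random.default_rng(5)
p = np.full(n_paths, 10.0)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    p = p + k * (p_eq - p) * dt + sigma * p * dW
    p = np.maximum(p, 0.0)          # a price cannot go negative

print(round(p.mean(), 2), round(p.std(), 2))
```

The ensemble fluctuates around the equilibrium price with a stationary spread set by the balance of mean reversion and noise; first-passage failure would then be studied as the first time a path leaves a prescribed price band.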
Directory of Open Access Journals (Sweden)
Hao Yu
2016-12-01
Today, increased public concern about sustainable development and more stringent environmental regulations have become important driving forces for value recovery from end-of-life and end-of-use products through reverse logistics. Waste electrical and electronic equipment (WEEE) contains both valuable components that need to be recycled and hazardous substances that have to be properly treated or disposed of, so the design of a reverse logistics system for the sustainable treatment of WEEE is of paramount importance. This paper presents a stochastic mixed-integer programming model for designing and planning a generic multi-source, multi-echelon, capacitated and sustainable reverse logistics network for WEEE management under uncertainty. The model takes into account both economic efficiency and environmental impacts in decision-making, and the environmental impacts are evaluated in terms of carbon emissions. A multi-criteria two-stage scenario-based solution method is employed and further developed in this study to generate the optimal solution of the stochastic optimization problem. The proposed model and solution method are validated through a numerical experiment and sensitivity analyses presented later in this paper, and an analysis of the results is also given to provide deep managerial insight into the application of the proposed stochastic optimization model.
Crisan, Dan
2011-01-01
"Stochastic Analysis" aims to provide mathematical tools to describe and model high dimensional random systems. Such tools arise in the study of Stochastic Differential Equations and Stochastic Partial Differential Equations, Infinite Dimensional Stochastic Geometry, Random Media and Interacting Particle Systems, Super-processes, Stochastic Filtering, Mathematical Finance, etc. Stochastic Analysis has emerged as a core area of late 20th century Mathematics and is currently undergoing a rapid scientific development. The special volume "Stochastic Analysis 2010" provides a sa
An application of information theory to stochastic classical gravitational fields
Angulo, J.; Angulo, J. C.; Angulo, J. M.
2018-06-01
The objective of this study lies in incorporating the concepts developed in Information Theory (entropy, complexity, etc.) with the aim of quantifying the variation of the uncertainty associated with a stochastic physical system resident in a spatiotemporal region. As an example of application, a relativistic classical gravitational field has been considered, with a stochastic behavior resulting from the effect induced by one or several external perturbation sources. One of the key concepts of the study is the covariance kernel between two points within the chosen region. Using this concept and appropriate criteria, a methodology is proposed to evaluate the change of uncertainty at a given spatiotemporal point, based on the available information and efficiently applying the diverse methods that Information Theory provides. For illustration, a stochastic version of the Einstein equation with an added Gaussian Langevin term is analyzed.
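The link between a covariance kernel and an entropy-based uncertainty measure can be sketched numerically (my own minimal example with an assumed squared-exponential kernel, not the field model of the paper): for a zero-mean Gaussian random field sampled at n points, the joint differential entropy is H = (1/2) log((2*pi*e)^n det(Sigma)), so any change in the kernel translates directly into a change of uncertainty.

```python
import numpy as np

def covariance_matrix(points, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance kernel evaluated on a 1D point set."""
    pts = np.asarray(points, dtype=float)
    d2 = (pts[:, None] - pts[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gaussian_entropy(sigma):
    """Differential entropy of a zero-mean Gaussian with covariance sigma."""
    n = sigma.shape[0]
    sign, logdet = np.linalg.slogdet(sigma)
    return 0.5 * (n * np.log(2.0 * np.pi * np.e) + logdet)

pts = [0.0, 0.5, 1.0, 1.5]
h_wide = gaussian_entropy(covariance_matrix(pts, length_scale=0.2))
h_narrow = gaussian_entropy(covariance_matrix(pts, length_scale=2.0))
print(h_wide, h_narrow)  # stronger spatial correlation -> lower joint entropy
```

The same computation, applied before and after a perturbation modifies the kernel, quantifies the change of uncertainty at the chosen spatiotemporal points.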
Borodin, Andrei N
2017-01-01
This book provides a rigorous yet accessible introduction to the theory of stochastic processes. A significant part of the book is devoted to the classic theory of stochastic processes. In turn, it also presents proofs of well-known results, sometimes together with new approaches. Moreover, the book explores topics not previously covered elsewhere, such as distributions of functionals of diffusions stopped at different random times, the Brownian local time, diffusions with jumps, and an invariance principle for random walks and local times. Supported by carefully selected material, the book showcases a wealth of examples that demonstrate how to solve concrete problems by applying theoretical results. It addresses a broad range of applications, focusing on concrete computational techniques rather than on abstract theory. The content presented here is largely self-contained, making it suitable for researchers and graduate students alike.
Directory of Open Access Journals (Sweden)
Wu Guo-Cheng
2012-01-01
Full Text Available This note presents a Laplace transform approach to the determination of the Lagrange multiplier when the variational iteration method is applied to the time fractional heat diffusion equation. The presented approach is more straightforward and allows some simplification in the application of the variational iteration method to fractional differential equations, thus improving the convergence of the successive iterations.
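The identification can be sketched as follows (a hedged reconstruction assuming a Caputo time derivative of order 0 < α ≤ 1; the notation is mine, not necessarily the paper's). The correction functional for the time-fractional diffusion equation is

```latex
u_{n+1}(x,t) = u_n(x,t)
  + \int_0^t \lambda(t-s)\left[\,{}^{C}D_s^{\alpha} u_n(x,s)
  - \frac{\partial^2 u_n}{\partial x^2}(x,s)\right]\mathrm{d}s .
```

Laplace-transforming in t turns the convolution into Λ(p)[pᵅUₙ(p) − …]; requiring stationarity with respect to Uₙ gives

```latex
1 + p^{\alpha}\,\Lambda(p) = 0
  \quad\Longrightarrow\quad \Lambda(p) = -p^{-\alpha},
```

and inverting the transform yields the multiplier in closed form:

```latex
\lambda(t-s) = -\frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} .
```

For α = 1 this reduces to the classical multiplier λ = −1 of the integer-order heat equation, which is the consistency check one expects.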
Directory of Open Access Journals (Sweden)
Jakob H Lagerlöf
Full Text Available To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour, to be used to evaluate two different methods for computing oxygen distribution. A vessel tree structure, and an associated tumour of 127 cm3, were generated using a stochastic method and Bresenham's line algorithm to develop trees on two different scales and fuse them together. The vessel dimensions were adjusted through convolution and thresholding, and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green's function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared to evaluate the methods. The oxygen distributions of the same tissue samples, computed with the different methods, were considerably less similar (root mean square deviation, RMSD ≈ 0.02) than the distributions of different samples computed with the CTM (0.001 < RMSD < 0.01). The deviations of the ITM from the CTM increase at lower oxygen values, so the ITM severely underestimates the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the overall oxygen distribution, it led to an evident underestimation of tumour hypoxia, and thereby of radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why evaluation should be made using high resolution and the CTM applied to the entire tumour.
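The diffusion-consumption balance underlying such models can be sketched in one dimension (a nondimensionalised toy with made-up parameters, not the paper's 3D Green's-function computation): steady-state oxygen diffusing away from a vessel wall at x = 0 with a Michaelis-Menten sink, c'' = φ² c/(c + k), c(0) = 1, c'(1) = 0, solved by simple fixed-point sweeps.

```python
import numpy as np

# Toy 1D diffusion-consumption problem (hypothetical parameters):
#   c'' = phi2 * c / (c + k),  c(0) = 1 (vessel wall),  c'(1) = 0.
n, phi2, k = 101, 4.0, 0.1
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
c = np.ones(n)
for _ in range(20000):                      # Jacobi-style fixed-point sweeps
    rate = phi2 * c / (c + k)               # Michaelis-Menten consumption
    c_new = c.copy()
    c_new[1:-1] = 0.5 * (c[2:] + c[:-2] - h * h * rate[1:-1])
    c_new[-1] = c_new[-2]                   # zero-flux outer boundary
    c_new[0] = 1.0                          # fixed oxygen at the vessel wall
    c = np.clip(c_new, 0.0, None)           # concentrations stay non-negative
print(round(float(c[-1]), 3))               # hypoxic far from the vessel
```

Even this crude sketch reproduces the qualitative point of the paper: oxygen falls off steeply away from a vessel, so neglecting the contribution of neighbouring vessel trees biases the low-oxygen tail most strongly.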
Variational, projection methods and Pade approximants in scattering theory
International Nuclear Information System (INIS)
Turchetti, G.
1980-12-01
Several aspects of scattering theory are discussed in a perturbative scheme. The Pade approximant method plays an important role in such a scheme. Soliton solutions are also discussed in the same scheme. (L.C.) [pt
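The Padé approximant idea can be demonstrated on a standard textbook example (my own illustration, not from the abstract): build the [2/2] approximant of exp(x) from its Taylor coefficients by matching powers of x, and compare it with the truncated series itself.

```python
import math
import numpy as np

c = np.array([1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24])  # exp series, x^0..x^4

# Denominator coefficients b1, b2 solve the linear system that kills the
# x^3 and x^4 terms of (numerator) - (series) * (denominator).
A = np.array([[c[2], c[1]],
              [c[3], c[2]]])
b1, b2 = np.linalg.solve(A, -np.array([c[3], c[4]]))
a0 = c[0]
a1 = c[1] + c[0] * b1
a2 = c[2] + c[1] * b1 + c[0] * b2

x = 1.0
pade_val = (a0 + a1 * x + a2 * x**2) / (1.0 + b1 * x + b2 * x**2)
series_val = c @ x ** np.arange(5)
print(pade_val, series_val, math.e)  # ~2.7143 vs ~2.7083 vs ~2.7183
```

With the same five Taylor coefficients, the [2/2] Padé value is closer to e than the degree-four partial sum, which is the resummation property exploited in perturbative scattering calculations.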
Blanchard, Philippe
2015-01-01
The second edition of this textbook presents the basic mathematical knowledge and skills that are needed for courses on modern theoretical physics, such as those on quantum mechanics, classical and quantum field theory, and related areas. The authors stress that learning mathematical physics is not a passive process and include numerous detailed proofs, examples, and over 200 exercises, as well as hints linking mathematical concepts and results to the relevant physical concepts and theories. All of the material from the first edition has been updated, and five new chapters have been added on such topics as distributions, Hilbert space operators, and variational methods. The text is divided into three main parts. Part I is a brief introduction to distribution theory, in which elements from the theories of ultradistributions and hyperfunctions are considered in addition to some deeper results for Schwartz distributions, thus providing a comprehensive introduction to the theory of generalized functions. P...
Stochastic quantization and gauge invariance
International Nuclear Information System (INIS)
Viana, R.L.
1987-01-01
A survey of the fundamental ideas of Parisi-Wu's Stochastic Quantization Method, with applications to Scalar, Gauge and Fermionic theories, is presented. In particular, the Analytic Stochastic Regularization Scheme is used to calculate the polarization tensor for Quantum Electrodynamics with Dirac bosons or fermions. The influence of the regularization is studied for both theories, and an extension of this method to some supersymmetrical models is suggested. (author)
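The Langevin dynamics at the heart of the Parisi-Wu method can be illustrated with a zero-dimensional toy "field theory" (my own minimal sketch, not an example from the survey): evolve the field φ in fictitious time τ via dφ = -(∂S/∂φ) dτ + dW with S(φ) = (1/2) ω φ². In equilibrium, averages over the noise reproduce the path-integral ones, here ⟨φ²⟩ = 1/ω.

```python
import numpy as np

rng = np.random.default_rng(0)
omega, dt = 2.0, 0.01
n_steps, burn_in = 200_000, 20_000
noise = np.sqrt(2.0 * dt) * rng.standard_normal(n_steps)  # Langevin kicks

phi, samples = 0.0, []
for step in range(n_steps):
    phi += -omega * phi * dt + noise[step]   # Euler-Maruyama drift + noise
    if step >= burn_in:                      # discard pre-equilibrium samples
        samples.append(phi * phi)
print(np.mean(samples))  # should be close to <phi^2> = 1/omega = 0.5
```

The small residual bias of order dt is exactly the kind of discretization effect that, in field theory proper, motivates the regularization schemes the survey discusses.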
A variational Bayesian method to inverse problems with impulsive noise
Jin, Bangti
2012-01-01
We propose a novel numerical method for solving inverse problems subject to impulsive noises which possibly contain a large number of outliers. The approach is of Bayesian type, and it exploits a heavy-tailed t distribution for data noise to achieve
Directory of Open Access Journals (Sweden)
R. Darzi
2010-01-01
Full Text Available We applied the variational iteration method and the homotopy perturbation method to solve Sturm-Liouville eigenvalue and boundary value problems. The main advantage of these methods is the flexibility to give approximate and exact solutions to both linear and nonlinear problems without linearization or discretization. The results show that both methods are simple and effective.
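The mechanics of the variational iteration method can be shown on an even simpler textbook problem (my own sympy sketch, not the Sturm-Liouville problems of the paper): for u' + u = 0, u(0) = 1, the Lagrange multiplier is λ = -1 and the correction functional u_{n+1}(t) = u_n(t) - ∫₀ᵗ (u_n'(s) + u_n(s)) ds reproduces, term by term, the partial sums of the exact solution exp(-t).

```python
import sympy as sp

t, s = sp.symbols('t s')
u = sp.Integer(1)                 # initial guess u_0 = u(0) = 1
for _ in range(4):                # four VIM corrections
    integrand = (sp.diff(u, t) + u).subs(t, s)
    u = sp.expand(u - sp.integrate(integrand, (s, 0, t)))
print(u)                          # 1 - t + t**2/2 - t**3/6 + t**4/24
```

No linearization or discretization is involved: each sweep is an exact symbolic correction, which is the flexibility the abstract highlights.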
Darzi R; Neamaty A
2010-01-01
Stochastic Reachability Analysis of Hybrid Systems
Bujorianu, Luminita Manuela
2012-01-01
Stochastic reachability analysis (SRA) is a method of analyzing the behavior of control systems which mix discrete and continuous dynamics. For probabilistic discrete systems it has been shown to be a practical verification method, but for stochastic hybrid systems it can be rather more than that. As a verification technique SRA can assess the safety and performance of, for example, autonomous systems, robot and aircraft path planning and multi-agent coordination, but it can also be used for the adaptive control of such systems. Stochastic Reachability Analysis of Hybrid Systems is a self-contained and accessible introduction to this novel topic in the analysis and development of stochastic hybrid systems. Beginning with the relevant aspects of Markov models and introducing stochastic hybrid systems, the book then moves on to coverage of reachability analysis for stochastic hybrid systems. Following this build-up, the core of the text first formally defines the concept of reachability in the stochastic framework and then...
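The basic quantity SRA computes can be estimated by plain Monte Carlo on a toy system (my own example, not from the book): the probability that a drifting random walk, started in the safe region, enters the unsafe set [5, ∞) within a finite horizon.

```python
import random

random.seed(1)

def hits_unsafe(horizon=50, drift=0.05, noise=1.0, threshold=5.0):
    """Simulate one trajectory; report whether it ever reaches the unsafe set."""
    x = 0.0
    for _ in range(horizon):
        x += drift + noise * random.gauss(0.0, 1.0)
        if x >= threshold:
            return True
    return False

n = 20_000
p_hat = sum(hits_unsafe() for _ in range(n)) / n
print(round(p_hat, 3))  # Monte Carlo estimate of the reach probability
```

Verification methods in the book replace this sampling with dynamic-programming characterizations of the same reach probability, which also exposes the control handle: choosing inputs to keep the probability below a safety threshold.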
DEFF Research Database (Denmark)
Ding, Tao; Yang, Qingrun; Yang, Yongheng
2018-01-01
To address the uncertain output of distributed generators (DGs) for reactive power optimization in active distribution networks, the stochastic programming model is widely used. The model is employed to find an optimal control strategy with minimum expected network loss while satisfying all......, in this paper, a data-driven modeling approach is introduced to assume that the probability distribution from the historical data is uncertain within a confidence set. Furthermore, a data-driven stochastic programming model is formulated as a two-stage problem, where the first-stage variables find the optimal...... control for discrete reactive power compensation equipment under the worst probability distribution of the second stage recourse. The second-stage variables are adjusted to uncertain probability distribution. In particular, this two-stage problem has a special structure so that the second-stage problem...
VARIATIONS OF THE ENERGY METHOD FOR STUDYING CONSTRUCTION STABILITY
Directory of Open Access Journals (Sweden)
A. M. Dibirgadzhiev
2017-01-01
Full Text Available Objectives. The aim of the work is to find the most rational form of expression of the potential energy of a nonlinear system, with the subsequent use of algebraic means and geometric images of catastrophe theory for studying the behaviour of a construction under load. Various forms of stability criteria for the equilibrium states of constructions are investigated. Some aspects of the use of various forms of expression of the system's total energy are considered, oriented to the subsequent use of catastrophe theory methods for solving the nonlinear problems of construction calculation associated with discontinuous phenomena. Methods. According to the form of the potential energy expression, the mathematical description of the problem being solved is linked to a specific catastrophe of a universal character from the list of catastrophes. After this, the behaviour of the system can be predicted on the basis of the fundamental propositions formulated in catastrophe theory, without integrating the corresponding system of high-order nonlinear partial differential equations to which the solution of such problems is reduced. Results. The result is presented in the form of uniform geometric images containing all the necessary qualitative and quantitative information about the deformation of whole classes of constructions under load, for a wide range of values of the external (control) and internal (behavioural) parameters. Conclusion. Methods based on catastrophe theory are an effective mathematical tool for solving nonlinear boundary-value problems with parameters associated with discontinuous phenomena, which are poorly analysable by conventional methods. However, they have not yet received due attention from researchers, especially in the field of stability calculations, which remains a complex, relevant and attractive problem within structural mechanics. To solve a concrete nonlinear boundary value problem for calculating
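The energy-based stability picture can be made concrete with the standard cusp catastrophe (a textbook example, not taken from the paper): for the potential V(x) = x⁴/4 + a x²/2 + b x, equilibria solve V'(x) = x³ + a x + b = 0, and their number jumps from three to one when the control parameters (a, b) cross the fold set 4a³ + 27b² = 0, which is the discontinuous snap-through behaviour the article refers to.

```python
import numpy as np

def n_equilibria(a, b):
    """Count real roots of V'(x) = x^3 + a*x + b, i.e. equilibrium states."""
    roots = np.roots([1.0, 0.0, a, b])
    return int(np.sum(np.abs(roots.imag) < 1e-9))

inside = n_equilibria(-1.0, 0.1)   # 4a^3 + 27b^2 < 0: three equilibria
outside = n_equilibria(-1.0, 1.0)  # 4a^3 + 27b^2 > 0: one equilibrium
print(inside, outside)
```

The qualitative prediction (how many equilibria, and where they merge) comes entirely from the algebraic form of the potential, without integrating any differential equation, which is exactly the methodological point of the paper.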
Iterative and variational homogenization methods for filled elastomers
Goudarzi, Taha
Elastomeric composites have increasingly proved invaluable in commercial technological applications due to their unique mechanical properties, especially their ability to undergo large reversible deformation in response to a variety of stimuli (e.g., mechanical forces, electric and magnetic fields, changes in temperature). Modern advances in organic materials science have revealed that elastomeric composites hold also tremendous potential to enable new high-end technologies, especially as the next generation of sensors and actuators featured by their low cost together with their biocompatibility, and processability into arbitrary shapes. This potential calls for an in-depth investigation of the macroscopic mechanical/physical behavior of elastomeric composites directly in terms of their microscopic behavior with the objective of creating the knowledge base needed to guide their bottom-up design. The purpose of this thesis is to generate a mathematical framework to describe, explain, and predict the macroscopic nonlinear elastic behavior of filled elastomers, arguably the most prominent class of elastomeric composites, directly in terms of the behavior of their constituents --- i.e., the elastomeric matrix and the filler particles --- and their microstructure --- i.e., the content, size, shape, and spatial distribution of the filler particles. This will be accomplished via a combination of novel iterative and variational homogenization techniques capable of accounting for interphasial phenomena and finite deformations. Exact and approximate analytical solutions for the fundamental nonlinear elastic response of dilute suspensions of rigid spherical particles (either firmly bonded or bonded through finite size interphases) in Gaussian rubber are first generated. These results are in turn utilized to construct approximate solutions for the nonlinear elastic response of non-Gaussian elastomers filled with a random distribution of rigid particles (again, either firmly
RES: Regularized Stochastic BFGS Algorithm
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
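A minimal sketch in the spirit of RES (simplified, and not the authors' exact update rules or constants): stochastic BFGS on a least-squares objective, where the secant pair is computed on a common minibatch, modified by a small regularizer, and the descent direction carries an identity bias to stay well conditioned under gradient noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                       # noiseless targets for a clean illustration

def grad(w, idx):
    """Stochastic gradient of 0.5*||X w - y||^2 / n on minibatch idx."""
    Xb = X[idx]
    return Xb.T @ (Xb @ w - y[idx]) / len(idx)

gamma, delta = 0.05, 0.05            # identity bias and secant regularizer
H = np.eye(d)                        # inverse-Hessian approximation
w = np.zeros(d)
for t in range(1, 1501):
    idx = rng.integers(0, n, size=20)
    g = grad(w, idx)
    step = 2.0 / (10.0 + t)          # diminishing step size
    w_new = w - step * (H + gamma * np.eye(d)) @ g
    s = w_new - w
    yv = grad(w_new, idx) - g - delta * s   # regularized secant, same batch
    if s @ yv > 1e-10:               # curvature safeguard before BFGS update
        rho = 1.0 / (s @ yv)
        V = np.eye(d) - rho * np.outer(s, yv)
        H = V @ H @ V.T + rho * np.outer(s, s)
    w = w_new
print(np.linalg.norm(w - w_true))    # close to the optimum
```

Evaluating both gradients of the secant pair on the same minibatch keeps s'y nonnegative, and the delta-term plus identity bias play the role of the eigenvalue regularization that the convergence analysis of RES relies on.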